I need a Python script that does the following:
- scrape a large number of websites concurrently, given their URLs (I have more than 10,000 URLs, so please use multiprocessing)
- The content the scraper needs to extract is:
1- the website's textual content (plain text, not HTML) up to level/depth (n)
2- the textual content of downloadable files such as PDFs and Word documents
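For the downloadable-file requirement, a minimal sketch of the extraction step, assuming the third-party libraries pypdf and python-docx are installed (`pip install pypdf python-docx`); they are imported lazily so HTML-only crawls do not need them:

```python
from pathlib import Path

def extract_file_text(path):
    """Return the text of a downloaded PDF or Word (.docx) file.

    pypdf and python-docx are assumed to be installed; both imports are
    lazy so this module loads even without them.
    """
    suffix = Path(path).suffix.lower()
    if suffix == ".pdf":
        from pypdf import PdfReader  # third-party, assumed installed
        reader = PdfReader(path)
        return "\n".join(page.extract_text() or "" for page in reader.pages)
    if suffix == ".docx":
        from docx import Document  # python-docx, assumed installed
        return "\n".join(p.text for p in Document(path).paragraphs)
    raise ValueError(f"unsupported file type: {suffix}")

def is_downloadable(url):
    """Cheap extension check to decide whether a link points at a document."""
    return url.lower().rsplit("?", 1)[0].endswith((".pdf", ".docx", ".doc"))
```

Note that legacy `.doc` files are detected by `is_downloadable` but are not handled by `extract_file_text`; they need an external converter such as antiword or LibreOffice.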
You can use BeautifulSoup or any tool you like, as long as it is efficient and does the job as specified.
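A standard-library-only sketch of the crawl itself, assuming one worker process per site via `multiprocessing.Pool`; the HTML-to-text step uses `html.parser` here purely to stay self-contained, and could be swapped for BeautifulSoup's `get_text()`:

```python
import multiprocessing
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class TextAndLinkExtractor(HTMLParser):
    """Collects visible text and hyperlinks, skipping <script>/<style>."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.links = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.text_parts.append(data.strip())

def crawl_site(start_url, max_depth=1):
    """Breadth-first crawl of one site up to max_depth; returns {url: text}."""
    seen, results = {start_url}, {}
    frontier = [start_url]
    for _depth in range(max_depth + 1):
        next_frontier = []
        for url in frontier:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except Exception:
                continue  # skip unreachable pages rather than abort the site
            parser = TextAndLinkExtractor()
            parser.feed(html)
            results[url] = " ".join(parser.text_parts)
            for link in parser.links:
                absolute = urllib.parse.urljoin(url, link)
                if absolute not in seen:
                    seen.add(absolute)
                    next_frontier.append(absolute)
        frontier = next_frontier
    return results

if __name__ == "__main__":
    urls = ["https://example.com"]  # replace with your 10,000+ URLs
    with multiprocessing.Pool(processes=8) as pool:
        for site_result in pool.map(crawl_site, urls):
            print(site_result)
```

The `if __name__ == "__main__":` guard is required for multiprocessing on Windows and macOS spawn-based start methods. Production hardening (robots.txt, retries, rate limiting, same-domain filtering, routing document links through a PDF/Word extractor) is left out of this sketch.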