Create a multi-threaded PHP script that will scrape data from search engines and record the scraped data into an SQL database. Also create a proxy scraper / tester.
Forum / Board URL scraper
1. The script will run on a dedicated Linux server.
2. The script will include a web-browser-style GUI for total user control.
3. The script will display updated stats as the scraper progresses.
4. The script will accept an uploaded compressed file of search-term files, decompress it, and loop through the search-term files. Each search-term file will contain one search term per line.
5. The Script will have the ability to search Google, Google Blogsearch, Yahoo, MSN & Boardreader with the ability to add engines in the future.
6. The Script will have the ability to use proxies (see PROXIES below)
7. The script will include a GUI that lets the user choose proxy settings for each search engine. Proxy settings are the proxy timeout in seconds and whether or not to use proxies.
8. The script must let the user choose which IPs to present to the search engine in the event no proxies are used.
9. The script must let the user set the thread timeout for each search to avoid proxy banning by the search engines.
10. The script must include error handling for captchas: if a captcha image is encountered, the script chooses a new proxy and retries with the same keyword.
11. Running stats for total urls found
12. Running stats for keyword list
Example: keyword 238 of 2,000
13. Running stats for keyword file
Example: file 57 of 400
14. Running stats of running time and time to completion
15. Pause, Resume & Stop functions
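Requirement 4's upload-and-loop flow, together with the running counters from items 12 and 13, could be sketched roughly as below. The function names, the `.txt` glob, and ZIP as the archive format are assumptions, not part of the spec.

```php
<?php
// Sketch: decompress an uploaded archive of search-term files, then loop
// through them one term per line while tracking "file X of Y" and
// "keyword X of Z" progress counters.

function load_keyword_files(string $zipPath, string $extractDir): array
{
    $zip = new ZipArchive();
    if ($zip->open($zipPath) !== true) {
        throw new RuntimeException("Cannot open archive: $zipPath");
    }
    $zip->extractTo($extractDir);
    $zip->close();
    return glob($extractDir . '/*.txt');   // assumed .txt keyword files
}

function iterate_keywords(array $files, callable $onKeyword): void
{
    $totalFiles = count($files);
    foreach ($files as $fileIndex => $file) {
        // One search term per line; blank lines are skipped.
        $terms      = array_values(array_filter(array_map('trim', file($file))));
        $totalTerms = count($terms);
        foreach ($terms as $termIndex => $term) {
            // Progress stats, e.g. "file 57 of 400", "keyword 238 of 2,000".
            $onKeyword($term, [
                'file'    => sprintf('file %d of %d', $fileIndex + 1, $totalFiles),
                'keyword' => sprintf('keyword %s of %s',
                    number_format($termIndex + 1), number_format($totalTerms)),
            ]);
        }
    }
}
```

Each keyword handed to the callback would then be dispatched to the selected engines; the stats array feeds the GUI counters.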
Proxy scraper:
1. The proxy scraper will scrape URLs from Google's cache, based on URLs input via a text file.
2. Visit the Google cache links and retrieve both the Google "text only" cache for these links (obtained by adding "&strip=1" to the URL) and the actual link to the site with the proxies.
3. Provide the choice of downloading the Google cache proxy links, the regular proxy links, or both.
4. Provide the option to use proxies when downloading the Google cache proxy links, with proper proxy error handling as outlined above.
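Step 2 could be sketched as a small helper that builds both cache variants for a given page. The `webcache.googleusercontent.com` URL format is an assumption based on Google's public cache links, not something the spec pins down.

```php
<?php
// Sketch: given a proxy-list page URL, build the Google cache link and
// its "text only" variant (the same cache URL with "&strip=1" appended).

function google_cache_urls(string $pageUrl): array
{
    $cache = 'http://webcache.googleusercontent.com/search?q=cache:'
           . urlencode($pageUrl);
    return [
        'cache'     => $cache,               // regular cache copy
        'text_only' => $cache . '&strip=1',  // "text only" cache copy
    ];
}
```

The downloader would then fetch one or both variants, per the user's choice in step 3.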
Proxy tester:
1. Query an environment script to check for anonymity.
2. Option to set the connect timeout when testing proxies.
3. Option to set time to retest proxies.
4. Create separate local files for each search engine that encounters a captcha (see above). Example: if proxy [url removed, login to view] encounters a captcha while attempting to scrape from Google, that proxy should then be placed in Google's bad-proxy text file so it is not used again with Google. There would be a separate file for each engine.
5. The ability to set a time to reset the files in step 4.
6. Reset the files in step 4 whenever the program is closed (not paused).
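Items 1 and 2 of the proxy tester could look roughly like the sketch below: fetch an "environment" script through the proxy (a page that echoes back the request headers and remote address) and classify the proxy from the response. The environment-script output format and the classification rules are assumptions to be adapted to the actual script used.

```php
<?php
// Sketch: fetch a header-echoing page via the proxy with a user-set
// connect timeout, then classify the proxy's anonymity level.

function fetch_via_proxy(string $url, string $proxy, int $connectTimeout): ?string
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_PROXY          => $proxy,
        CURLOPT_CONNECTTIMEOUT => $connectTimeout,   // tester option 2
        CURLOPT_TIMEOUT        => $connectTimeout * 2,
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body === false ? null : $body;          // null = dead/timed out
}

function classify_anonymity(string $envResponse, string $realIp): string
{
    if (strpos($envResponse, $realIp) !== false) {
        return 'transparent';   // our real IP leaked through the proxy
    }
    if (stripos($envResponse, 'X-Forwarded-For') !== false
        || stripos($envResponse, 'Via:') !== false) {
        return 'anonymous';     // proxy hides the IP but admits proxying
    }
    return 'elite';             // no trace of proxying in the headers
}
```

A dead or timed-out proxy (`null` from `fetch_via_proxy`) would be queued for the retest interval from item 3.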
To be coded entirely in PHP.
C/C++ may be used where PHP is not feasible.
The Anatomy of a search:
1. Query engine 1 for "dog".
2. Collect the applicable URLs on the landing page.
3. Press the next button (if available) and collect all URLs.
4. Repeat step 3 until the next button is no longer available.
5. Remove duplicates from the applicable URLs.
a) To do this, query the good-URL db and the timeout-URL db.
6. After removing duplicates, connect to all collected URLs (without the use of proxies) and again scrape all applicable URLs.
7. Place timed-out URLs in the timeout db and good URLs in the good-URL db. (The timeout for connecting to sites is set by the user in the settings.)
8. Have the ability to retest timed-out URLs as a separate process.
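Steps 5 and 5a amount to a membership check against the two URL stores. A minimal sketch, with plain arrays standing in for the two SQL tables (in the real script `$goodDb` and `$timeoutDb` would come from `SELECT url FROM ...` queries):

```php
<?php
// Sketch: drop scraped URLs already present in the good-URL or
// timeout-URL stores, and collapse duplicates within the batch itself.

function remove_known_duplicates(array $scraped, array $goodDb, array $timeoutDb): array
{
    // Flip to a hash map for O(1) membership checks.
    $known = array_flip(array_merge($goodDb, $timeoutDb));
    $fresh = [];
    foreach (array_unique($scraped) as $url) {
        if (!isset($known[$url])) {
            $fresh[] = $url;
        }
    }
    return $fresh;
}
```

Only the URLs this returns would be connected to in step 6 and then sorted into the good/timeout tables in step 7.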
In addition, it should collect the following information and place it into the DB.
1. Google PR of the actual full URL (not just the domain name).
2. Age of the domain name.
3. Number of back links that URL has going to it
4. Meta-tag keywords
a) Create an entry in the db for every meta-tag keyword associated with the URL.
Example: for the keyword "dog", if a URL's meta-tag keywords are "dogs, dog toys, dog food, dog treats", the scraper would make 4 entries in the SQL database, recording that URL 4 times. That way, when the database is queried for 'dog food' or 'dog treats', the same URL is returned, along with any other URL that also has dog food or dog treats in its meta-tag keywords. If a URL has no meta-tag keywords, there is nothing to record for that field, but all other available data must still be saved into the database.
5. Date URL was scraped
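The row-per-keyword expansion from item 4a could be sketched as follows. Column names (`url`, `meta_keyword`, and whatever carries PR, age, backlinks, and scrape date in `$fields`) are assumptions.

```php
<?php
// Sketch: expand one scraped URL into one database row per meta-tag
// keyword, copying the other collected fields onto every row. A URL
// with no meta keywords still gets a single row (empty keyword) so
// that all of its other data is saved.

function rows_for_url(string $url, array $fields, string $metaKeywords): array
{
    $keywords = array_filter(array_map('trim', explode(',', $metaKeywords)));
    if (empty($keywords)) {
        $keywords = [''];   // no meta tags: still record everything else
    }
    $rows = [];
    foreach ($keywords as $kw) {
        $rows[] = array_merge($fields, ['url' => $url, 'meta_keyword' => $kw]);
    }
    return $rows;
}
```

With the "dog" example above, this yields four rows for the same URL, one per meta-tag keyword.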
All 5 variables must be searchable in the browser.
Example: search for URLs with a Google PR of 3 or higher, on domains that are 1 year old or older, with 1,000 back links or more, with the keyword 'dog toys' in the meta-tag keywords, scraped within the past 90 days.
The interface should let the user apply any or none of these filters and export the URLs returned. It should also be able to pull URLs from the database by count, by keyword, or all at once.
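The example search above could map onto a query like the one below. Table and column names are assumptions, and since every criterion is optional in the interface, the real WHERE clause would be assembled from only the filters the user selected.

```php
<?php
// Sketch: the five searchable fields as a parameterized query
// (to be run via a PDO prepared statement against MySQL).

$sql = "SELECT DISTINCT url
        FROM scraped_urls
        WHERE pagerank >= :pr
          AND domain_age_days >= :age
          AND backlinks >= :links
          AND meta_keyword = :kw
          AND scraped_at >= DATE_SUB(NOW(), INTERVAL :days DAY)";

// Usage (assuming a $pdo connection):
// $stmt = $pdo->prepare($sql);
// $stmt->execute([':pr' => 3, ':age' => 365, ':links' => 1000,
//                 ':kw' => 'dog toys', ':days' => 90]);
```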
The duplicates db should contain all applicable URLs scraped, including the timed-out URLs. While the scraped-URL db keeps the entire URL, the duplicates db should only keep the portion between "http://" and the first occurrence of a forward slash "/".
Scraped URL = [url removed, login to view]
Saved to duplicates db = [url removed, login to view]
Saved to scraped-URL db = [url removed, login to view]
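The duplicates-db rule above is just host extraction, which could be sketched as:

```php
<?php
// Sketch: reduce a URL to the part between "http://" and the first "/",
// i.e. the host name, for storage in the duplicates db.

function duplicates_key(string $url): string
{
    $host = parse_url($url, PHP_URL_HOST);
    if (!is_string($host) || $host === '') {
        // Fallback for bare values without a scheme, e.g. "example.com/x".
        $host = explode('/', $url, 2)[0];
    }
    return $host;
}
```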
If C/C++ is also used, I will receive the source code for it as well. You will not be able to sell this script in any format, nor parts of it, to any other party. I am not sure what this will cost, so I am keeping the price open, but I do have a budget. I want this scraper to be started on as soon as possible, and completed in a timely manner. If you have any questions, comments, or suggestions, please feel free to PM me. Thank you.