1. I must be able to set the starting URL from which the spider will initiate on the [url removed, login to view] or [url removed, login to view] websites. The [url removed, login to view] website is powered by [url removed, login to view] and is in the same format, but I find navigating to the various categories easier with 411.com.
For example I might paste the following URL into the spider utility: [url removed, login to view];C=jewelers&R=N&STYPE=S&MC=1&OO=1&F=1&CP=Clothing+%26+Accessories%5EJewelry%5EJewelers%5E
2. Once the starting URL has been entered, the spider must parse the HTML and extract the business name, city, state, ZIP code, telephone number, fax number (if applicable), email address (if applicable), and website (if applicable) into a CSV-formatted text file.
3. The spider must crawl through each of the pages until the final page for that category is completed. However, at the very beginning of most categories, there are businesses listed under the "Yellow Pages - Advertisers" heading. These are businesses that are not from the area I have chosen (for example, I chose Alaska and they are from California, etc.) but are advertising in that area. I do not want these entries included; I only want the ones that start under the "Yellow Pages - Listings" heading. The spider does not necessarily need to know how my list was created, only to avoid entries under the "Advertisers" section.
4. When the crawl is completed, an update function must let me either name a new file to save the data to or choose an existing .CSV file to append the new data to.
5. A search-and-purge function that can be run at any time on any of the .CSV files that have been created, to ensure no two entries in a specific .CSV file have the same telephone number. If duplicate telephone numbers are found, the records with the least information are automatically deleted. For example, given 2 records with the same telephone number where one lists a fax number and the other doesn't, delete the one without the fax number.
6. A merge function that can be run at any time and lets me pick 2 or more of the created .CSV files and merge them into one new file. If more than 2 files is a problem, I can live with 2 and merge a few times to create one file.
7. Finally, I will provide you with 2 URLs representing 2 different yellow-page categories on the [url removed, login to view] website. You will run the completed program and email me (or make available for download) 2 .CSV files with the completed, duplicate-purged data.
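The extraction step in requirement 2 glosses over CSV escaping: business names can themselves contain commas or quotes. A minimal helper sketch, assuming standard CSV quoting rules and kept compatible with the Java 1.5 runtime mentioned below (the class, method names, field order, and sample data are my own assumptions, not part of the spec):

```java
// Hypothetical sketch for requirement 2: writing one extracted business
// record as a properly escaped CSV line. Java 1.5-compatible syntax.
public class CsvWriterSketch {

    // Quote a field only when it contains a comma, quote, or newline,
    // doubling any embedded quotes (standard CSV escaping).
    static String csvField(String value) {
        if (value == null) return "";
        if (value.indexOf(',') >= 0 || value.indexOf('"') >= 0
                || value.indexOf('\n') >= 0) {
            return "\"" + value.replace("\"", "\"\"") + "\"";
        }
        return value;
    }

    // Join the eight requested fields (name, city, state, ZIP, phone,
    // fax, email, website) into one CSV line.
    static String csvLine(String[] fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(csvField(fields[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] record = {"Acme Gems, Inc.", "Juneau", "AK", "99801",
                "907-555-0101", "", "", ""};
        System.out.println(csvLine(record));
        // The name is quoted because it contains a comma.
    }
}
```

Empty optional fields simply produce empty CSV columns, which keeps the column positions stable for the later purge and merge steps.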
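The merge in requirement 6 is a straightforward concatenation if each file carries one header row that should appear only once in the output. A sketch under that assumption, using only Java 1.5-era I/O (all class, method, and file names are mine):

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch of requirement 6: merging two or more CSV files
// into one new file, keeping a single header row.
public class CsvMergeSketch {

    static List<String> readLines(File file) throws IOException {
        List<String> lines = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(new FileReader(file));
        try {
            String line;
            while ((line = reader.readLine()) != null) lines.add(line);
        } finally {
            reader.close();
        }
        return lines;
    }

    // Concatenate the data rows of every input; only the first file's
    // header is carried over.
    static List<String> merge(List<File> inputs) throws IOException {
        List<String> merged = new ArrayList<String>();
        for (File in : inputs) {
            List<String> lines = readLines(in);
            if (lines.isEmpty()) continue;
            if (merged.isEmpty()) merged.add(lines.get(0)); // header once
            merged.addAll(lines.subList(1, lines.size()));
        }
        return merged;
    }

    static void write(File f, String text) throws IOException {
        FileWriter w = new FileWriter(f);
        try { w.write(text); } finally { w.close(); }
    }

    public static void main(String[] args) throws IOException {
        File a = File.createTempFile("list-a", ".csv");
        File b = File.createTempFile("list-b", ".csv");
        write(a, "name,phone\nAcme,907-555-0101\n");
        write(b, "name,phone\nBering Co,907-555-0202\n");
        for (String line : merge(Arrays.asList(a, b))) {
            System.out.println(line);
        }
    }
}
```

Running the purge from requirement 5 after a merge would also handle any duplicates introduced by merging overlapping category files.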
1. You must be easy to contact, either by phone or by answering any e-mail I send you within 10 hours.
2. Must speak and write English well.
3. Code must be well commented in English.
4. All source code must be given to me.
5. I would prefer if this was written in Java.
6. I would like this done by no later than March 10th, 2006.
7. Must be able to run on my Pentium III with Windows XP. I am in a very rural area and only have dial-up. I am running Java 2 Platform Standard Edition Version 1.5.0 (build 1.5.0_04-b05).
8. Delivery of files will be via email for sure and possibly by FTP.
9 freelancers are bidding an average of $220 for this job
I have worked with this kind of parsing many times. An open-source HTML parser will be used. Everything is negotiable. If you have questions, email me at tdminh81[at][url removed, login to view]
Hello, we are experienced in Java and crawler development. We have already developed crawlers in Java for [url removed, login to view], [url removed, login to view], [url removed, login to view], and [url removed, login to view]. We are interested in doing this for you as it matches our expe…