Automated EC2 Python daily web crawl & scrape. Starts & stops EC2 programmatically. Aurora DB.

Cancelled · Posted 4 years ago · Paid on delivery

Please read this in detail. Answer the 11 questions below fully and clearly, and do not say anything else. We've been getting a lot of spam and need your response to be as concise as possible.

Responses that violate this request will be ignored.

We need a service written in Python to crawl URLs on two web domains, extracting data found in JSON objects in the default page source.

It needs to crawl thousands of instances of around 8 unique web pages.

Data will be saved in a database with about 5 tables of about 10 columns each.

This needs to run each day: a scheduler should start an EC2 instance, and the code should begin executing. When the crawl is finished for the day, the EC2 instance should be terminated.
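One way the daily start/terminate plumbing could look, sketched below under stated assumptions: boto3 is available, a scheduled Lambda is triggered by an EventBridge cron rule, and the instance ID and event shape are placeholders invented for illustration.

```python
def start_crawler(ec2, instance_id):
    """Start a stopped EC2 instance; returns its new state name (e.g. "pending")."""
    resp = ec2.start_instances(InstanceIds=[instance_id])
    return resp["StartingInstances"][0]["CurrentState"]["Name"]

def terminate_crawler(ec2, instance_id):
    """Terminate the instance once the day's crawl has finished."""
    resp = ec2.terminate_instances(InstanceIds=[instance_id])
    return resp["TerminatingInstances"][0]["CurrentState"]["Name"]

def lambda_handler(event, context):
    """Daily entry point; an EventBridge rule such as cron(0 6 * * ? *)
    could invoke this with e.g. {"action": "start"} (hypothetical shape)."""
    import boto3  # imported here so the helpers above stay testable offline
    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder instance ID
    if event.get("action") == "start":
        return start_crawler(ec2, instance_id)
    return terminate_crawler(ec2, instance_id)
```

In practice the crawler itself would likely trigger termination on completion (or the instance could be launched with an instance-initiated shutdown behaviour of `terminate`), so the Lambda might only need to handle the morning start.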

Also, if the IP address ever gets blocked by the website being crawled, that EC2 instance should be shut down and a new one started (with a unique IP address).
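The block-and-rotate step above could be sketched as follows, assuming boto3, a launch template for the crawler AMI, and a simple heuristic that treats repeated 403/429 responses as an IP ban; the template and instance IDs are placeholders.

```python
BLOCK_STATUSES = {403, 429}  # typical "banned / rate limited" HTTP codes

def looks_blocked(status_code, consecutive_failures, threshold=3):
    """Heuristic: several 403/429 responses in a row means the IP is burned."""
    return status_code in BLOCK_STATUSES and consecutive_failures >= threshold

def rotate_instance(ec2, launch_template_id, blocked_instance_id):
    """Launch a replacement (it gets a fresh auto-assigned public IP),
    then terminate the blocked instance; returns the new instance ID."""
    new_id = ec2.run_instances(
        LaunchTemplate={"LaunchTemplateId": launch_template_id},
        MinCount=1,
        MaxCount=1,
    )["Instances"][0]["InstanceId"]
    ec2.terminate_instances(InstanceIds=[blocked_instance_id])
    return new_id
```

Note that an Elastic IP would pin the address across launches, so the crawler instance should rely on the default auto-assigned public IP, which changes with every new instance.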

All required data is held in the page source, accessible with a simple curl or GET of a URL. No clicking is necessary for this web-scraping project.
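Since the data sits in the raw page source, extraction can be a plain GET plus a regex over script tags. A minimal sketch, assuming the objects are embedded as JSON-LD (`application/ld+json`) and distinguishable by their `@type` field; the exact embedding on the real pages may differ.

```python
import json
import re

# Matches the body of each <script type="application/ld+json"> tag.
LDJSON_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_json_objects(html):
    """Parse every embedded JSON-LD object found in the page source."""
    return [json.loads(m.group(1)) for m in LDJSON_RE.finditer(html)]

def extract_by_type(html, wanted_type):
    """When several similar objects exist on one page, keep only those
    whose @type matches, e.g. "Event"."""
    return [obj for obj in extract_json_objects(html)
            if obj.get("@type") == wanted_type]
```

Fetching the page itself is a one-liner with the requests library, e.g. `html = requests.get(url, timeout=30).text`, matching the "simple curl or GET" requirement above.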

QUESTIONS - YOU MUST ANSWER ALL. Please number your answers for clarity.

1. We need to use the AWS serverless SQL-based database. What is it called?

2. How would you start the EC2 instances automatically each day?

3. How would you terminate the EC2 instances when the crawl is completed?

4. Visit [login to view URL] -- the name, location, date, price, and age limit for this event can all be found in a single JSON object in the HTML returned by this URL. What is this JSON object? Copy and paste the entire JSON object in your response.

5. How would you programmatically extract this JSON from the URL?

6. How would you programmatically extract this JSON from the URL if there were multiple similar JSON objects on the page?

7. When would you use DynamoDB instead of Aurora?

8. What is clean code?

9. What is DRY code?

10. How long would this project take you?

11. How much $ would you require for this?

Python Amazon Web Services Linux Software Engineering

Project ID: #20853546

About the project

10 proposals · Remote project · Active 4 years ago

10 freelancers are bidding an average of $607 for this job

dreamci

Hi there. My team and I can deliver your tasks with great quality. We are focused on web development and have created many beautiful sites, mostly in Python. We like to use Laravel as a REST API and Vue.js as an SPA for new app… More

$500 USD in 5 days
(95 Reviews)
8.6
DevStar925

Hi, how are you? I am very interested in your project and have read your description carefully. I can answer your questions. As you can see from my profile, I have enough experience with Linux, scraping, crawling, etc., but I wa… More

$500 USD in 7 days
(73 Reviews)
7.5
pixelonline

Hi there, a. We can develop the Python program you want us to code for you. b. Please check our replies to the questions you have asked. 1. Aurora DB 2. Using Lambda 3. Lambda 4., 5., 6. Using Python with Django 7. D… More

$750 USD in 7 days
(9 Reviews)
6.4
karthikbalu7

1. We need to use the AWS serverless SQL-based DB. What is it called? You want to use the AWS Lambda & RDS services with Node.js/Python; we can use the Serverless Framework for this and have great experience. 2. How would you start… More

$700 USD in 7 days
(8 Reviews)
6.1
trulsnyberg

Nice to meet you. I am an Amazon cloud architect for web infrastructure serving 90 million page impressions and 12 TB of Internet traffic per month. The AWS services I use are EC2, ELB, MySQL RDS, VPC, CloudFront, Elas… More

$637 USD in 9 days
(2 Reviews)
2.4
BrancoSoft

Hi there, I am writing in response to your post for "automated ec2 python daily web crawl & scrape. starts & stops ec2 programatically. aurora db." After carefully reviewing the description, I feel that I am a suitable… More

$500 USD in 35 days
(0 Reviews)
0.0