Essentially I am paying a subscription to [login to view URL] just to access a "repair manual" for my vehicle.
What I need to do is find a way to scrape and save the pages offline by following links. Sounds easy, right? Well, it's not quite as simple!
I tried to scrape this with httrack and wget, but unfortunately a few things on the site prevent it:
2. The website uses frames (eugh), so there are two frames on screen: left (navigation) | right (content). This also trips up httrack and similar tools.
So essentially what I want is to create a fully offline copy of just the content for my individual vehicle.
I've included a screenshot of the webpage I want scraped and saved for offline use. The scraper should follow every link in the left panel (actually a <frame>) and save all the content from the right panel (also a <frame>).
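For whoever picks this up: the frameset issue can be handled by first parsing the outer page for the <frame> URLs and then crawling each one separately. Here is a minimal sketch using only Python's standard library; the frame names and file names are illustrative, not the real ones from the site:

```python
from html.parser import HTMLParser

class FrameExtractor(HTMLParser):
    """Collects the src URLs of <frame> tags from a frameset page."""

    def __init__(self):
        super().__init__()
        self.frame_srcs = []

    def handle_starttag(self, tag, attrs):
        # <frame> is a void element; html.parser reports it as a start tag
        if tag == "frame":
            src = dict(attrs).get("src")
            if src:
                self.frame_srcs.append(src)

# Hypothetical frameset markup, mimicking the nav/content split described above
sample = """
<frameset cols="25%,75%">
  <frame name="nav" src="nav.html">
  <frame name="content" src="content.html">
</frameset>
"""

parser = FrameExtractor()
parser.feed(sample)
print(parser.frame_srcs)  # → ['nav.html', 'content.html']
```

Once the two frame URLs are known, a crawler can fetch the navigation frame, follow its links, and save each content page, instead of choking on the frameset the way httrack does.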
30 freelancers are bidding an average of $172 for this job
Hello. How would you like the scraped content saved? As HTML files, or in some other format? I can do this using Selenium. Feel free to contact me to discuss the details. Thanks, best regards, Denis