Your guide to getting data entry done for your business
Data entry is an important task, but choosing the wrong solution can seriously harm your company's productivity.
Data Extraction is the process of extracting data from a variety of sources for further analysis. A Data Extractor is someone who helps businesses and organizations gain insight from their data and create descriptive and predictive models. They specialize in finding patterns and relationships that guide decisions and uncover meaningful information. Through carefully crafted queries and processes, our Data Extractors can transform raw data into a useful format that can be used for reporting, analytics, machine learning and more.
Here are some projects that our expert Data Extractors made real:
When you partner with an experienced team of Freelancer's Data Extractors you can access valuable insights from your data that can guide decisions, uncover opportunities and create predictive models with new data sources. Our experts can help you unlock deeper insights with advanced filtering methods and complex coding. Explore the full range of possibilities with our talented community of professionals, capable of delivering comprehensive solutions tailored to your needs.
Ready to launch your very own project on Freelancer.com? We invite you to try us out and hire our experienced Data Extractors to make your design goals a reality. Let their creativity, skill, and proficiency bring something special to your project!
Across 128,047 reviews, clients rate our Data Extractors 4.9 out of 5 stars.
About the Project (Only Mumbai-based Freelancer, no Agency) We are building an end-to-end Purchase Order (PO) to Invoice Automation System. Customer POs are received via email in PDF format and currently processed manually into Excel and ERP. We are implementing two synchronized Robotic Process Automation (RPA) bots to: Extract data from POs Validate and structure data Enter data into ERP Generate invoices Email invoices to customers Maintain audit logs The system must operate 24/7 with minimal human intervention. Role Overview We are seeking an experienced RPA Developer to design, develop, test, and deploy automation bots that streamline the order processing lifecycle. The ideal candidate has hands-on experience in PDF data extraction, ERP automation, exception handling, and produ...
I want to build an in-house AI system that lets my team drag-and-drop both KYC and Income documents and, in one click, receive a neatly formatted PDF report. The PDF must always include: • Eligibility check (based on the rules I will supply) • Salary details in a clear monthly breakdown table • Current obligations in a similar table • Pending documents list • Pending form details • Probable queries for the credit team All eligibility logic, standard document lists and a library of past queries will be provided so the model can be fine-tuned to our exact policies. Accuracy and consistency matter more than fancy UI; I simply need a reliable back-end that ingests scans or PDFs, extracts the data, applies the rules and returns a single consolidated...
Job Description We need a disciplined SQL/data engineer to produce a deterministic, re-runnable script-based build (lean MVP, not enterprise infra). Goal Create a single canonical, one-row-per-company dataset anchored on UEI, then enrich it using exact UEI matches (no name guessing). Inputs (provided) USAspending extract / source tables (L24M window) DSBS export (CSV) with UEI + owner contact fields SAM Active Entity extract (CSV) with UEI + phone + officer names Written Segment B & C rules (explicit) Deliverables 1. USAspending segmentation + canonical vendor table >Filter to last 24 months >Vendor-level aggregation → one row per UEI >Apply Segment B & C logic exactly as written >Deterministic CSV output >Basic QA checks: a. no duplicate UEIs in outp...
Subject: Expert in Data Extraction (Python/Excel) – Ready for Quick Turnaround ​Hi there, ​I am a Computer Science Engineer with strong expertise in Python and Data Processing, making me an ideal fit for your PDF data extraction project. I understand you need financial data accurately extracted from various invoices into a clean Excel format. ​Why choose me? ​Automation for Accuracy: I can use Python libraries (like Pandas and PyPDF2/Camelot) to ensure 100% accuracy and handle multi-page invoices efficiently. ​Filtering Logic: I will implement a custom script to automatically filter out non-financial pages, ensuring only relevant data reaches the final sheet. ​Quick Turnaround: I understand you need this done fast. I can start immediately and deliver a standardized, error-free Excel ...
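The page-filtering logic described in that proposal can be sketched with nothing but the standard library. This is a minimal illustration, not the bidder's actual script: it assumes page text has already been extracted (e.g. with pdfplumber), and the currency regex and two-hit threshold are assumptions chosen for the example.

```python
import re

# Hypothetical sketch: decide whether an extracted PDF page contains
# financial line items worth keeping. The regex and `min_hits`
# threshold are assumptions, not the bidder's real rules.
AMOUNT_RE = re.compile(r"(?:USD|\$|EUR|€)\s?\d{1,3}(?:,\d{3})*(?:\.\d{2})?")

def is_financial_page(page_text: str, min_hits: int = 2) -> bool:
    """Keep a page only if it shows at least `min_hits` currency amounts."""
    return len(AMOUNT_RE.findall(page_text)) >= min_hits

def extract_amounts(page_text: str) -> list[str]:
    """Return every currency amount found on a kept page."""
    return AMOUNT_RE.findall(page_text)

pages = [
    "Terms and conditions apply to all orders.",     # no amounts: dropped
    "Invoice 1042\nSubtotal $1,200.00\nTax $96.00",  # financial: kept
]
kept = [p for p in pages if is_financial_page(p)]
```

In a full solution the amounts would then be written out with pandas or openpyxl; the filter above is only the "non-financial pages" step.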
I need thousands of mixed-format scanned documents turned into searchable Excel format. High accuracy is essential. Requirements: - All documents have consistent formatting - Must maintain high accuracy in data entry - Must connect a photo to each document line (as a column in the excel sheet) Ideal skills and experience: - OCR and Excel - Some coding experience will be necessary - Attention to detail and commitment to accuracy Looking forward to your bids!
I have a Gmail account where many incoming messages show only my own address in the “To” field, yet I know the sender included additional recipients. I want a reliable way to uncover every hidden address those messages actually reached—across my whole mailbox, not just one or two samples—and export them for later use. Here’s exactly what I need: • A repeat-able solution that scans all selected emails in my Gmail inbox and programmatically pulls every address found in the full header (including any Bcc or undisclosed recipients Google preserves). • Output as a clean, deduplicated list—CSV or plain text is fine. • The process must work on Windows-based machines; I use Windows 10. If you prefer a small cloud utility or a local Python/P...
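A local Python approach like the one the poster mentions could lean entirely on the standard library's `email` module. The sketch below assumes the raw RFC 822 message text has already been fetched (for instance via IMAP or Gmail's "Show original"); the particular header names checked are assumptions about where hidden recipients may survive.

```python
from email import message_from_string
from email.utils import getaddresses

# Headers that may preserve recipients beyond the visible "To" field.
# This list is an assumption; real messages vary by mail server.
HEADERS = ("To", "Cc", "Bcc", "Delivered-To", "X-Original-To")

def recipients(raw_message: str) -> list[str]:
    """Deduplicated, lowercased recipient addresses from one raw message."""
    msg = message_from_string(raw_message)
    pairs = getaddresses([v for h in HEADERS for v in msg.get_all(h, [])])
    seen, out = set(), []
    for _, addr in pairs:
        addr = addr.lower()
        if addr and addr not in seen:
            seen.add(addr)
            out.append(addr)
    return out

raw = (
    "To: me@example.com\r\n"
    "Delivered-To: hidden1@example.com\r\n"
    "Delivered-To: hidden2@example.com\r\n"
    "Subject: test\r\n\r\nbody"
)
```

Run over every fetched message, the accumulated list could then be written to CSV for the deduplicated export the poster asks for.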
I’m ready to retire our old GoldMine setup and have every piece of customer information living smoothly inside Zoho CRM. The move must include all contacts along with their names, phone numbers, email addresses, the notes and history attached to each record, plus every custom field we have built over the years. I’ll rely on you to: • Extract the selected data from GoldMine, map it accurately to Zoho’s structure (creating the same custom fields where needed), and import it without duplicates or data loss. • Preserve the relationships so that notes and historical interactions stay linked to the right contact. • Provide a brief mapping sheet and run a post-migration check with me to confirm everything landed in the right place. If you have a proven...
I have a collection of paper-based reports filled with numerical figures that I need captured accurately in a clean, well-structured Excel workbook. Your task is straightforward: read each sheet, key every number into the correct columns and rows I will provide, double-check totals as you go, and flag any values that are unclear or illegible so I can review them later. The final Excel file must mirror the original layout, preserve any subtotals or grand totals, and be free of transcription errors. I will scan and share the documents as high-resolution PDFs; you simply return the completed spreadsheet plus a brief note of any issues you encountered. Attention to detail and solid Excel skills are essential because these figures feed directly into our monthly reporting. If you have experien...
I need someone to set up my web app and my iOS and Android apps with Apple OAuth.
Hey! I’m looking to hire an experienced developer to build a universal product-detail scraping pipeline that takes a product URL (any website) and returns a complete structured product record. This is not a “simple HTML parse.” Many target sites are React/Next/Vue, load content via XHR/GraphQL, hide details behind tabs/accordions/modals, and lazy-load images/PDFs. The solution needs to reliably extract everything a human can see on the page, plus the underlying data used to render it. What the scraper must do (high level) Given a product URL, the pipeline should: Load the page like a real user (handle cookies/overlays). Capture all content from multiple sources (DOM + network + interactions). Use GPT API strategically to increase accuracy (field mapping, variant ext...
Hello, We are looking for a "hacker-level" Python scraper specialist with cutting-edge data extraction techniques. We are specifically looking for an expert who can scrape a wide range of data, including emails, from , while fetching data efficiently and avoiding credit costs. {If you read this completely, start with mj.} If you have the technical skills to overcome these limitations and deliver high-volume results, then you are the expert we are looking for. Let's discuss the project in detail.
I need help collecting a clean, well-structured list of Twitter accounts that consistently post about AI, optionally tagged by AI category (open source, ML, general AI). Instead of handing you a fixed list, I'll define the selection rules (for example: minimum follower count, specific AI-related keywords, recent activity, etc.) - minimum follower count of 5,000 and at least several posts with 100+ likes/retweets. Once those criteria are agreed on, you'll locate the matching profiles and extract two data points per account: • the public profile bio • the direct profile link (around 1M+ profiles) Please return everything in a single CSV file, one row per influencer. Feel free to use Python, Tweepy, Twitter API v2, ScraperAPI, or another reliable method—as long...
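The selection rules in that brief translate naturally into a filter function. This is a sketch under assumptions: the profile dict shape is invented for illustration (real data would come from the Twitter API v2 or a scraper), and "multiple posts" is read here as at least two.

```python
# Thresholds taken from the brief; MIN_QUALIFYING_POSTS is an
# assumption about what "multiple posts" means.
MIN_FOLLOWERS = 5000
MIN_ENGAGEMENT = 100      # likes + retweets on a single post
MIN_QUALIFYING_POSTS = 2

def qualifies(profile: dict) -> bool:
    hot = [p for p in profile["recent_posts"]
           if p["likes"] + p["retweets"] >= MIN_ENGAGEMENT]
    return (profile["followers"] >= MIN_FOLLOWERS
            and len(hot) >= MIN_QUALIFYING_POSTS)

def to_csv_row(profile: dict) -> list[str]:
    """One row per influencer: bio and direct profile link."""
    return [profile["bio"], f"https://twitter.com/{profile['handle']}"]

sample = {
    "handle": "ai_example",       # hypothetical account
    "followers": 12000,
    "bio": "Open-source ML tools",
    "recent_posts": [
        {"likes": 150, "retweets": 20},
        {"likes": 90, "retweets": 30},
        {"likes": 10, "retweets": 5},
    ],
}
```

Agreeing on such rules as code before scraping makes the 1M-profile run reproducible if the criteria later change.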
I have a single Instagram Reel that was publicly available for roughly a year before being removed or placed in archive. I saved every trace I could—direct links, full-length screen recordings, and the search-engine cache hits that still reference the post. What I now need is a technical reconstruction of its viewership data. Your objective is to extract and corroborate: • Number of views over time (ideally plotted or tabled) • Any available demographic clues about who watched it • Engagement rates the Reel achieved while live Because the original URL now returns a 404, I expect most of the intel will come from open-source techniques: exploring Web archives (Wayback Machine snapshots, Google cache, ), digging into any residual JSON, and cross-referencing with Ins...
I need to automate queries to the following SRI page: When a RUC or national ID (cédula) is entered, I must obtain and save these exact fields to a JSON file: • Taxpayer status • Company name (razón social) • "Ghost taxpayer" indicator • Main economic activity Technical requirements: – The script must run on demand, i.e., I can execute it manually whenever I need to, with a list of RUCs as input. – The output must be one JSON file per queried ID. – Include a brief README with installation and usage instructions. Acceptance criteria: 2. The average time per query must not exceed a reasonable limit, to avoid bl...
I have two source spreadsheets that I need merged and enriched through automated scraping: • “File 1” – 170 k Spanish local businesses with emails • “File 2” – 65 k additional businesses with websites only Phase 1 – Email extraction Using a Python script and well-known libraries (requests, BeautifulSoup, Scrapy or similar), scan every site listed in File 2, capture all working email addresses you can locate, then append them to the corresponding rows so I can produce a unified “File 3”. Phase 2 – Offer harvesting Next, visit each live site in File 3. Where an offer, deal or promotion is publicly displayed, record the details in a fresh Excel sheet with these exact columns: Business ID | Business Name | Offer...
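Phase 1's per-site email capture can be sketched with a simple pattern scan over HTML that a fetcher (requests, Scrapy, etc.) has already downloaded. The regex below is a deliberately simple assumption, not a full RFC 5322 validator, and the sample HTML is invented for illustration.

```python
import re

# Simple email pattern; an assumption good enough for harvesting,
# not strict validation.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_emails(html: str) -> list[str]:
    """Deduplicated, lowercased emails in first-seen order."""
    seen, out = set(), []
    for match in EMAIL_RE.findall(html):
        email = match.lower()
        if email not in seen:
            seen.add(email)
            out.append(email)
    return out

html = '<a href="mailto:Info@Tienda.es">Info@Tienda.es</a> ventas@tienda.es'
```

Each site's harvested list would then be appended to the matching File 2 row to build the unified "File 3".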
I have a stack of paper documents that contain both text and numerical information. I need every line carefully transcribed and organised into a clean, well-structured Excel spreadsheet. Please keep the original order of the records, label the columns clearly, and double-check totals or calculations so the numbers match exactly what appears on the page. Accuracy and legibility are more important to me than speed, but I would like regular updates so I can spot-check your work as we go. Once complete, I expect a single .xlsx file that is ready for analysis with no stray characters or formatting issues.
I need assistance entering debtor financial records into my Online Bankruptcy portal. This includes information from both paperwork and a thumbdrive. Requirements: - Accurate data entry - Organizing and categorizing financial records - Experience with online portals and financial documents Ideal Skills: - Attention to detail - Experience with financial data and legal documents - Proficiency in handling multiple file formats Please provide a clear and organized upload of all required information.
I need a data scraping expert to help generate leads from a list of websites. Requirements: - Scrape contact information, product listings, or user reviews (to be specified). - Work from a provided list of URLs. Ideal Skills: - Experience with data scraping tools and techniques. - Ability to handle multiple URLs and extract data accurately. - Attention to detail and reliability. Please share your portfolio and relevant experience.
Industrial Automation Product Data Extraction, Deduplication & Structured Image Collection Project Overview We are an industrial automation parts distributor building a structured product database to support inbound enquiries and SEO growth. We require an experienced data extraction specialist to: Extract structured product data from major industrial / electronic component distributor websites Identify duplicate manufacturer part numbers across multiple sources Merge all unique information into a single consolidated dataset Extract and organise all available product images per part number Deliver a clean, deduplicated, production-ready dataset This project includes: Data extraction Normalization Deduplication Intelligent merging Structured image collection and organisation...
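The dedupe-and-merge step that posting describes can be sketched as a merge keyed on manufacturer part number (MPN): rows scraped from different distributor sites collapse into one record, keeping the first non-empty value per field and pooling image URLs. Field names here are assumptions for illustration.

```python
def merge_by_mpn(rows: list[dict]) -> dict[str, dict]:
    """Collapse scraped rows into one record per normalized part number."""
    merged: dict[str, dict] = {}
    for row in rows:
        key = row["mpn"].strip().upper()   # normalize before matching
        rec = merged.setdefault(key, {"mpn": key, "images": []})
        for field, value in row.items():
            if field in ("mpn", "images"):
                continue
            if value and not rec.get(field):   # first non-empty value wins
                rec[field] = value
        rec["images"].extend(u for u in row.get("images", [])
                             if u not in rec["images"])
    return merged

rows = [
    {"mpn": "6es7-214", "brand": "Siemens", "images": ["a.jpg"]},
    {"mpn": "6ES7-214", "brand": "", "description": "CPU module",
     "images": ["a.jpg", "b.jpg"]},
]
```

Real data would need per-source field mapping before this step, but the merge rule itself stays this simple.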
Every event that I have, I take an average of 30 event orders, pull them apart and chronologically construct a production plan for moving in and setting up an event and moving it out at the end of the event. I would like to drop a text-based Event Order PDF into a dedicated local folder. The moment that file lands, I need an agent to wake up, read the document, and do three things automatically: 1. Pull out the essentials—event dates and times, the title of each order, and the Event Order Number. 2. Duplicate each order so it appears "chronologically" twice in the output: once for the install date and once again for the strike / move-out date. 3. Feed all rows into a single, combined Excel sheet that becomes my Production Plan (this is already formatted and I can send...
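Step 2's duplication rule is the interesting part: each extracted order yields two chronologically sorted rows, one for the install date and one for the strike date. A minimal sketch, with field names standing in for the real Event Order layout:

```python
def plan_rows(orders: list[dict]) -> list[dict]:
    """Duplicate each order into Install and Strike rows, sorted by date."""
    rows = []
    for o in orders:
        rows.append({"date": o["install_date"], "phase": "Install",
                     "title": o["title"], "eo_number": o["eo_number"]})
        rows.append({"date": o["strike_date"], "phase": "Strike",
                     "title": o["title"], "eo_number": o["eo_number"]})
    return sorted(rows, key=lambda r: r["date"])  # ISO dates sort correctly

orders = [
    {"title": "Gala Dinner", "eo_number": "EO-101",
     "install_date": "2025-03-01", "strike_date": "2025-03-04"},
    {"title": "Expo Booth", "eo_number": "EO-102",
     "install_date": "2025-03-02", "strike_date": "2025-03-03"},
]
```

The folder-watching and PDF extraction around this would come from a watcher plus a PDF text library; the row logic above is the part that builds the combined Production Plan sheet.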
Project Description: Find school districts and charter schools who use a specific vendor for a large list of domains. I am seeking an experienced web scraping specialist to improve our Python script to analyze a large list of school district websites (approximately 4000+ URLs) and identify the ones who show a specific link on any page found in their sitemap. The primary method of identification must be to scan the website's for specific, known vendor links. Deliverables Required 1. A Production-Ready Python Script (.py file): The script must be commented, easily configurable, and capable of reading the provided CSV list, performing the scan, and generating the output CSV. It should handle timeouts and basic error handling gracefully. 2. The Final Results (CSV/Excel File): A c...
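The per-district check that script needs can be sketched in two steps: read the sitemap, then look for a known vendor link in each page's HTML. The fetcher is stubbed out below so the logic is testable offline; the function names and sample vendor URL are assumptions.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_xml: str) -> list[str]:
    """Every <loc> URL listed in a sitemap document."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text for loc in root.iter(f"{SITEMAP_NS}loc")]

def district_uses_vendor(sitemap_xml, fetch, vendor_link) -> bool:
    """True if any sitemap page's HTML contains the vendor link."""
    return any(vendor_link in fetch(u) for u in sitemap_urls(sitemap_xml))

SITEMAP = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://district.example/home</loc></url>
  <url><loc>https://district.example/lunch</loc></url>
</urlset>"""

# Offline stand-in for an HTTP fetcher:
PAGES = {
    "https://district.example/home": "<html>Welcome</html>",
    "https://district.example/lunch":
        '<a href="https://vendor.example/menu">Menus</a>',
}
```

In production, `fetch` would wrap requests with the timeout and error handling the brief asks for, and a result row per district would go to the output CSV.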
Hello, I am looking for people willing to help me complete my project on time. I will provide all the information needed to understand and carry out the work. Beginners preferred. Thank you.
I need to confirm whether my wife's iPhone placed a voice call immediately before a video I shot in Hawaii just after midnight on September 7, 2025. The most likely destination is, but I want definitive proof—exact dialed number, start-to-end timestamps, and total duration. You may work with the raw data in whichever way is easiest: I can grant temporary, read-only access to my T-Mobile account, or pull the usage log myself and send you the file. Because I’m unsure of the precise moment the video begins, I’ll also need help extracting or confirming the video’s internal timestamp so we can align it with the carrier records. (A forensic review of the video itself was done by Md Nazmul H. in October 2025; that report is available if it helps.) Deliverables: &bu...
I need a small utility that takes the Daily Racing Form in PDF format for a chosen date, scans every race at every track, and pulls four pieces of information for every horse: • horse name • number of starters in that race • second-call position • final time With those values the script must run a straightforward (A + B) / C calculation that I will define exactly once the job begins. When the data for all horses is processed, the program should compile everything into a clean, easy-to-read PDF report. Please build the solution so I can drop in new Racing Form PDFs and generate fresh reports without extra tweaking. A Python workflow that uses libraries such as pdfplumber, Camelot or similar is fine, provided it copes with the occasional formatting quirk that R...
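Once the four fields are parsed out of the Racing Form (a PDF library such as pdfplumber would supply them), the per-horse computation is trivial. The mapping of A, B, C onto the extracted fields is an assumption here, since the client defines the formula only after the job begins:

```python
def horse_metric(a: float, b: float, c: float) -> float:
    """The (A + B) / C figure requested for each horse."""
    if c == 0:
        raise ValueError("C must be non-zero")
    return (a + b) / c

def report_rows(horses: list[dict]) -> list[tuple[str, float]]:
    """(horse name, metric) pairs ready for the PDF report."""
    return [(h["name"], horse_metric(h["A"], h["B"], h["C"]))
            for h in horses]

# Hypothetical parsed records; real values come from the DRF PDF.
horses = [
    {"name": "Fast Lane", "A": 8, "B": 4, "C": 2},
    {"name": "Quiet Storm", "A": 3, "B": 3, "C": 3},
]
```

Keeping the formula in one small function means a later redefinition of A, B, and C touches only that spot, which fits the "drop in new PDFs without extra tweaking" requirement.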
I need an .xlsm workbook whose VBA macro fetches product data from both and lowes.com. When I type a valid item or model number into a row, the code should automatically pull back: product name, full description, regular price, sale price (if available), brand, product type/category, and the main image (inserted into the sheet or stored in an Image column). I work comfortably with VBA, so a concise, well-commented routine is all I need—no step-by-step user guide. The workbook must stay self-contained, relying only on standard references such as Microsoft XML, HTML, or WinHTTP libraries; please avoid external add-ins or Python bridges. Deliverables: • Finished macro-enabled Excel file (.xlsm) ready to test with my own SKU list • Clearly commented VBA code so I can...
We are building a full internal marketplace analytics web system, not just a reporting script. The system is designed to combine competitive intelligence with internal sales and stock analytics in a single interface. Functional Requirements The system must provide the following capabilities: 1. Product and SKU structure - Each product must be split into individual SKUs based on flavor and volume. - All analytics and reports are built at the SKU level. 2. Our product analytics (primary focus) - Current stock levels (total and per SKU). - Sales volume for selected periods (daily / weekly / monthly). - Reorder recommendations based on stock thresholds and sales dynamics. - Revenue calculations per product and per SKU with period filtering. 3. Competitive analytics - Automated collection o...
Project Background We are building a financial data aggregation and risk control verification system. We need to retrieve account balances, UPI IDs, and transaction history from major Indian UPI wallet accounts for multi-account fund monitoring and transaction status verification. ⸻ Core Requirements All data retrieval must be done using API-level HTTP requests: • For wallets with a Web interface: analyze and call their Web API • For App-only wallets: capture and replicate API calls via traffic analysis API-level implementations only. ⸻ Responsibilities 1. Implement mobile number + OTP login flow (manual input allowed) with session persistence (to reduce repeated logins) 2. Analyze the API call flow of the target wallet (Web or App) 3. Extract account balance, UPI ID, and ...
I have an SD card full of irreplaceable shots that suddenly refuse to open—no thumbnails, no previews, nothing. A professional shop already ran their standard recovery tools but came back empty-handed, so I’m looking for deeper-level expertise. The card mounts and can be cloned; the issue is strictly file-level corruption, not physical damage. I can supply a raw image (dd) or, if you prefer, ship the card itself. What I need from you is straightforward: use whatever combination of low-level hex work, PhotoRec/TestDisk, R-Studio, or your preferred forensics workflow to extract as many intact, viewable photos as possible and return them to me in their original resolution and format. Deliverables • Recovered image files, clearly organised • A short report ou...
I'm trying to run the attached Jupyter notebook (.ipynb) script to get info from a website, but I can't work out why it doesn't work. I need this script fixed, plus pagination added so it fetches around 2,400 records from YellowPages. I only use Jupyter.
Project title: RPA automation in DATAX: invoicing/accrual posting from Excel with a scalable architecture (future OCR + per-vendor "memory") Context: We are an accounting firm in Colombia. We use DATAX to record/post and/or invoice. We want to eliminate manual data entry with an RPA bot. We will start with Excel → RPA → DATAX, but the design must be ready to scale to: a folder of invoices (PDF/XML) → data extraction → per-vendor accounting rules → execution in DATAX. Objective, Phase 1 (MVP in 3–5 weeks): Implement an RPA bot that, starting from a master Excel file, creates/records (as applicable) invoices/accrual entries in DATAX and generates execution evidence. A...
I have three specific school-website links that list all current teachers and administrators. From each page I need a clean scrape of every staff member's name, role, and email address, plus the city/town and the school name, compiled into a single Excel workbook. Alongside that, I already hold an Excel sheet that contains a roster of tow and roadside drivers. The sheet has their names and the URLs of the companies they work for, but no contact details. Please crawl those company sites, locate each driver's email address, and append the results to the same workbook, using matching columns so everything stays consistent. Key points to keep in mind: • Final deliverable: one Excel file ready for copy-and-paste outreach. • Source material: my three school websites and...
We are looking for a person with extensive and up-to-date understanding of the Autodesk Corporation's proprietary DXF format, so we can extract a set of a given type of objects from a .DXF text file and translate it into a set of coordinates reflecting the locations of those objects, based upon the Adobe PDF standard, where (0,0) is to be found in the lower left corner. All DXF files translated would use only 2D (two dimensional) coordinates. The resultant dataset would be in a simple .CSV file format.
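An ASCII DXF is alternating group-code/value lines, where codes 10 and 20 carry the X and Y of an entity's primary point. A minimal sketch of pulling one entity type's points is below; note that DXF's Y axis already points up, matching the PDF convention of (0,0) at lower left, so this sketch passes coordinates through unchanged, while a real job may still need an offset against the drawing's extents. The tiny DXF fragment is invented for illustration.

```python
import csv, io

def entity_points(dxf_text: str, entity: str) -> list[tuple[float, float]]:
    """(x, y) primary points for every entity of the given type."""
    lines = [ln.strip() for ln in dxf_text.splitlines()]
    pairs = list(zip(lines[0::2], lines[1::2]))  # (group code, value)
    points, current, x = [], None, None
    for code, value in pairs:
        if code == "0":                 # group 0 starts a new entity
            current, x = value, None
        elif current == entity and code == "10":
            x = float(value)
        elif current == entity and code == "20" and x is not None:
            points.append((x, float(value)))
    return points

DXF = "0\nCIRCLE\n10\n4.5\n20\n7.25\n0\nLINE\n10\n1\n20\n2\n0\nCIRCLE\n10\n3\n20\n4\n"

def to_csv(points) -> str:
    """Serialize the coordinate set to the requested simple CSV."""
    buf = io.StringIO()
    csv.writer(buf).writerows(points)
    return buf.getvalue()
```

A production version would restrict parsing to the ENTITIES section and handle the additional point codes (11/21, etc.) some entity types use; the pairing loop above is the core of the translation.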
I am looking for a Python developer to create a simple and focused scraper script for Facebook Marketplace. Project Idea: The script will open a single Facebook Marketplace seller page and: • Extract all product links belonging to that seller only • Ignore any other data (no names, no prices, no images) • The final output should be a list of links only • Each product link on a separate line (link under link) Exact Requirements: • Input: Facebook Marketplace seller page URL • Output: • A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, ...
I have a set of voter-list PDFs released by the election commission. The layout across all files is identical, so positional parsing is reliable. Right now I simply need the current batch converted, but long-term I want a reusable Python utility that pulls the following columns straight into Excel: • Name • FathersName • Age • Gender • VoterID • SerialNumber • Section Name • Polling Station Name • etc. Scope of work 1. Run the first extraction and hand me the .xlsx file so I can verify accuracy. 2. Package the underlying code (Python 3.x) with clear instructions so I can repeat the conversion on future lists without further help. Technical notes – Consistent layout means you can lean on libraries like pdfplumber, camelo...
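Since the layout is identical across files, a positional parse is a reasonable sketch: slice each extracted text line at fixed character columns. The column boundaries and sample record below are invented for illustration; the real offsets would come from inspecting one PDF's text (e.g. via pdfplumber).

```python
# (start, end) character offsets per column -- hypothetical values.
FIELDS = {
    "SerialNumber": (0, 6),
    "Name": (6, 26),
    "FathersName": (26, 46),
    "Age": (46, 50),
    "Gender": (50, 57),
    "VoterID": (57, 67),
}

def parse_line(line: str) -> dict:
    """Slice one fixed-width text line into its named columns."""
    return {k: line[a:b].strip() for k, (a, b) in FIELDS.items()}

# Build a sample line that matches the widths above exactly:
sample = "".join(
    v.ljust(b - a)
    for v, (a, b) in zip(
        ["101", "Asha Kumari", "Ram Prasad", "34", "Female", "ABC1234567"],
        FIELDS.values(),
    )
)
```

The resulting dicts map directly onto one Excel row per voter via openpyxl or pandas, and re-running on a future list means only re-checking the offsets.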
I need every public phone number that appears on gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • n...
I want an .xlsx file containing all 12,728 rows of the public table that appears on the INDECOPI (Peru) website. The site only displays 10 records per page, so I need you to automate the extraction and consolidate everything into a single Excel workbook. Required fields • Company name • Person's name • Registration number All fields that appear published. You will deliver them in standard-format columns, with no filters, pivot tables, or other added features. I will provide the exact URL and the navigation steps so you can locate the paginated view. Once finished, I will check that the total number of rows matches...
I am assembling a scholarly reference list on co-therapy within group psychotherapy and need someone who can run targeted searches in both Medline and PsycINFO, extract the most relevant peer-reviewed papers, and format every citation in flawless APA 7 style. My primary interests are therapeutic techniques, patient outcomes, and group dynamics, and I would like the literature to reflect how these play out across different settings— inpatient, outpatient, as well as homogeneous and heterogeneous groups. To keep the work clear, here is what I expect you to hand over: • A clean, alphabetised list of the selected references, fully formatted in APA 7. • A brief search log showing the main keywords, limits, and databases used so I can replicate the strategy if needed. &bull...
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce scrapi...
Please Register or Log in to view details.
PDF to Excel Data Scraper Needed Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios) Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund Monthly Factsheets (PDFs). I will provide the URLs/Files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file. The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% data works. Scope of Work: Input: ~24 PDF Files (Monthly Factsheets). Target Funds: For each month, extract data for the Top 10 Equity Funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc. - list wi...
I need an expert who can read and understand CS Khatiyan online land records written in Kaithi/old handwriting You must extract and explain details of my plot number(s) that I will provide from online bihar bhumi website Work Includes: Identify plot/khata details Owner name + father name Caste/community mentioned Land area + land type/class Any remarks or tenancy/mutation notes Explain in simple Hindi/English Any other land in the same owner name Deliverable: Clear written summary/report with all details. Skills Required: CS Khatiyan reading, Kaithi script understanding, Bihar land record knowledge.
I want a clean, well-commented Python 3 script that I can run locally whenever I need fresh information. The program should visit the target site (I’ll share the URL once we start) and pull Product Details exactly as they appear online. That means every time I point the script at a category or search page it should work through all pagination, capture the data, and save it to CSV or Excel so I can sort and analyse it later. Key points to cover • Use reliable, open-source libraries such as requests, BeautifulSoup, or Selenium—whichever gives the most stable results for the site once you see it. • Build in simple settings (URL, output file name, optional delay between requests) near the top of the file so I can tweak them without touching the core logic. • H...
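A skeleton matching that brief might put the simple settings near the top of the file with the pagination loop underneath. Networking is abstracted behind a `fetch` callable below so the loop is demonstrable offline; in the real script `fetch` would wrap requests or Selenium, and the `?page=N` numbering scheme, names, and placeholder URL are all assumptions.

```python
import time

# --- simple settings, tweakable without touching the core logic ---
START_URL = "https://shop.example/catalog?page={page}"  # placeholder URL
OUTPUT_FILE = "products.csv"
DELAY_SECONDS = 0.0  # optional politeness delay between requests

def scrape_all(fetch, parse_page, max_pages: int = 100) -> list[dict]:
    """Walk pages 1..max_pages, stopping at the first empty page."""
    items = []
    for page in range(1, max_pages + 1):
        batch = parse_page(fetch(START_URL.format(page=page)))
        if not batch:
            break
        items.extend(batch)
        time.sleep(DELAY_SECONDS)
    return items

# Offline stand-ins demonstrating the loop:
fake_site = {1: "itemA,itemB", 2: "itemC", 3: ""}
fetch = lambda url: fake_site[int(url.rsplit("=", 1)[1])]
parse = lambda body: [{"name": n} for n in body.split(",") if n]
```

Writing the accumulated items to CSV or Excel at the end gives the sortable output the poster wants, and swapping `fetch` for a Selenium-backed version needs no change to the loop.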
I have a single PDF that holds roughly 50 scanned business cards, all in English. I need every card transcribed into a clean Excel sheet so I can import the contacts into my CRM without manual re-typing. Here’s what has to come across from each card: • Name • Job title • Company name • Company logo (please paste the image into its own cell or include a link to the extracted file) • Direct phone number(s) • Email address • Physical address Lay everything out in a standard column format—one row per card, one clearly labeled column per data point. Accuracy is key, so I’ll spot-check against the PDF; inconsistencies or missing fields will need correcting before final hand-off. I’m fine with whatever method you prefer&m...
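Sorting one card's OCR text into columns can be sketched with simple patterns, though as the brief says, the output would still need manual spot-checks against the PDF. The regexes and sample card below are assumptions for illustration, not production-grade contact parsing.

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def card_to_row(card_text: str) -> dict:
    """Map one card's OCR text onto spreadsheet columns.

    Assumes the name is the first non-empty line, which many business
    cards follow but not all -- hence the manual review step.
    """
    lines = [ln.strip() for ln in card_text.splitlines() if ln.strip()]
    email = EMAIL_RE.search(card_text)
    phone = PHONE_RE.search(card_text)
    return {
        "Name": lines[0] if lines else "",
        "Email": email.group() if email else "",
        "Phone": phone.group() if phone else "",
    }

card = "Jane Doe\nHead of Sales\nAcme Ltd\n+1 (555) 010-2345\njane.doe@acme.example"
```

Job title, company, and address lines are harder to classify automatically, which is why a 50-card job like this often ends up semi-manual with pattern extraction only for phone and email.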
I need a web scraping expert to scrape data from Indiegogo and export it to Excel. Details I need for the projects are: Title: Project title. Category: The category of the project based on Indiegogo's categorization system. Sub-category: The sub-category of the project based on Indiegogo's categorization system. Close Date: Close date of the campaign. Open Date: Open date of the campaign. Currency: Currency used for collected funds. Funds Raised: The amount of funds raised. Funds Raised Percent: The percent of funds raised relative to the funding target. Funding Target: The amount of funds the campaign initiator aims to collect. Country: Country in which the project is based. Publisher: The name of the campaign initiator. Backers: The number of people who decided to fund the campaign. Updates: ...
I’m looking for a well-structured Python solution, built around BeautifulSoup (BS4) and any supporting libraries you deem essential, that reliably pulls both product details and customer reviews from Lazada on a daily schedule. The data will fuel ongoing competitor research, so consistency and clarity of the output are critical. I’m looking specifically to get the data using BS4 by bypassing the captcha. Here’s how I picture the flow:
• Input: category URL(s) or a product list I supply in CSV/JSON.
• Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores.
• Output: clean CSV or JSON dropped into a dated folder after each run.
Make the script easy to tweak if Lazada changes its markup. Acceptance criteria:
1. S...
PDF to Excel Data Scraper Needed
Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios)
Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund monthly factsheets (PDFs). I will provide the URLs/files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file.
The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% coverage works.
Scope of Work:
Input: ~24 PDF files (monthly factsheets).
Target Funds: For each month, extract data for the Top 10 equity funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc.; list wi...
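Since the posting stresses full-portfolio coverage rather than just the Top 10, a useful deliverable convention is a long-format table plus a per-fund coverage check, so partial extractions are immediately visible. The sketch below assumes the holdings tables have already been pulled out of each PDF with a table-extraction tool of the freelancer's choice; the input shape (`fund -> {stock: weight_pct}`) is an illustrative assumption.

```python
# Hedged sketch of the consolidation step only, after PDF table extraction.
# Input shape (fund -> {stock: weight_pct}) is an assumed intermediate format.

def consolidate(month: str, holdings_by_fund: dict):
    """Flatten per-fund holdings into long-format rows and report coverage.

    Coverage (sum of weights per fund) should be near 100%; the posting
    accepts ~85-90%, so this makes shortfalls easy to spot-check.
    """
    rows, coverage = [], {}
    for fund, holdings in holdings_by_fund.items():
        coverage[fund] = round(sum(holdings.values()), 2)
        for stock, weight in holdings.items():
            rows.append({"month": month, "fund": fund,
                         "stock": stock, "weight_pct": weight})
    return rows, coverage
```

One long-format CSV (month, fund, stock, weight) across all ~24 files is usually the easiest shape to feed into backtesting tools, and the coverage dict doubles as an acceptance-check report.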
I need a web scraping specialist to collect specific information by querying CPF (Brazilian taxpayer ID) numbers on a website. Required fields:
- Full name
- Date of birth
- Address
- Email addresses
- Phone numbers
- Vehicle (make/model)
- Year of manufacture
- Occupation
- Salary range
- Likely employer
Ideal Skills and Experience:
- Proven experience in web scraping
- Proficiency in tools such as Python, Beautiful Soup, Scrapy, or similar
- Ability to work with complex data structures
- Attention to detail and accuracy in data extraction
- Familiarity with the legal and ethical issues of scraping ...
I have a data-analysis pipeline that relies on a steady flow of fresh product images from a well-known e-commerce site. What I need is a robust scraper that can navigate the catalog, collect every product’s main and variant images, and deliver them to me neatly organized. Key points you should know:
• Target: a single e-commerce platform (URL supplied after award).
• Payload: high-resolution image files plus a CSV/JSON map linking each file to the product ID, title, price, and category text that you extract during the same run.
• Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart.
• Frequency: I’ll trigger the crawl weekly, so reusable code is a must.
I’m happy with Pytho...
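The resumability requirement above has a standard, simple shape worth sketching: append each completed product ID to a small state file, and skip those IDs on the next run, so a crash mid-crawl costs only the in-flight item. The sketch covers only that mechanism; `fetch` stands in for the real image-download logic, which depends on the undisclosed site.

```python
# Hedged sketch of crawl resumability only; fetch() is a placeholder for
# the real per-product image download against the (undisclosed) site.
import os


def load_done(state_path: str) -> set:
    """IDs already completed in previous runs (empty set on first run)."""
    if not os.path.exists(state_path):
        return set()
    with open(state_path) as fh:
        return {line.strip() for line in fh if line.strip()}


def mark_done(state_path: str, product_id: str) -> None:
    """Append one finished ID; appending keeps the checkpoint crash-safe."""
    with open(state_path, "a") as fh:
        fh.write(product_id + "\n")


def crawl(product_ids, state_path, fetch):
    """Run fetch(product_id) for each ID not already checkpointed."""
    done = load_done(state_path)
    for pid in product_ids:
        if pid in done:
            continue
        fetch(pid)
        mark_done(state_path, pid)
```

Because the state file survives between weekly runs, re-triggering the crawl after a partial failure resumes where it left off instead of restarting from product one.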
I need to obtain hard-to-reach details—specifically the IP address, associated phone number, and any location-related information—linked to one particular Telegram account. Standard OSINT searches have already been exhausted, so I’m explicitly open to advanced, purely technical hacking techniques that dig directly into Telegram traffic or MTProto behaviour. If this is within your skill set, tell me how you would approach the task, which tools or exploits you prefer to leverage, and what minimal input you require from my side (e.g., username, recent message, session file). Deliverables • Verified current or last-seen IP address for the target account • Recovered phone number (or clear statement if technically impossible) • Any additional address or geo...
I need OpenClaw on my dedicated Mac with three core capabilities:
• Chrome automation: open websites, click elements, fill forms, extract structured snippets, and return results in WhatsApp.
• Coding/app workflows: generate code locally and optionally interact with web dev platforms when commanded.
• Deep research workflows: run multi-step web research, compare sources, and return concise findings with references.
Security and reliability are mandatory: least privilege, approved-user-only WhatsApp commands, startup on boot, restart on crash, logs, and health checks.
Learn how to hire and collaborate with a freelance Typeform Specialist to create impactful forms for your business.
A complete guide to finding, hiring, and working with a skilled freelance typist for your typing projects.