
  • How to scrape freelancer profiles from Fiverr.com using JavaScript?

    Posted by Heli Burhan on 12/20/2024 at 7:05 am

    Scraping freelancer profiles from Fiverr.com with JavaScript can provide valuable insight into freelancer skills, pricing, and reviews. Using Node.js with Puppeteer, you can automate a real browser so that all of Fiverr’s JavaScript-rendered content is fully loaded before you extract structured data. The process involves navigating to a category page, locating the elements that hold freelancer details, and pulling out information such as names, gig titles, and prices. Puppeteer is especially helpful for handling dynamic content and simulating user-like behavior. Below is an example script for scraping Fiverr freelancer profiles; note that the CSS class names in the selectors are illustrative and should be checked against Fiverr’s current markup, which changes frequently.

    // Scrape seller name, gig title, and price from a Fiverr category page
    const puppeteer = require('puppeteer');

    (async () => {
        // Launch a headless Chromium instance and open a new tab
        const browser = await puppeteer.launch({ headless: true });
        const page = await browser.newPage();

        // Wait until network activity settles so JavaScript-rendered gig cards are present
        const url = 'https://www.fiverr.com/categories/graphics-design';
        await page.goto(url, { waitUntil: 'networkidle2' });

        // Run the extraction inside the browser context
        const freelancers = await page.evaluate(() => {
            const gigs = [];
            // Each gig card holds one freelancer's listing (selectors are illustrative)
            const items = document.querySelectorAll('.gig-card-layout');
            items.forEach(item => {
                // Optional chaining guards against fields missing from a card
                const name = item.querySelector('.seller-name')?.textContent.trim() || 'Name not available';
                const gigTitle = item.querySelector('.gig-title')?.textContent.trim() || 'Gig title not available';
                const price = item.querySelector('.price')?.textContent.trim() || 'Price not available';
                gigs.push({ name, gigTitle, price });
            });
            return gigs;
        });

        console.log(freelancers);
        await browser.close();
    })();
    

    This script navigates to Fiverr’s Graphics & Design category, waits for the content to load, and extracts freelancer names, gig titles, and prices. Adding functionality for pagination allows scraping data from all pages within a category. Delays between requests and user-agent rotation help avoid detection by Fiverr’s anti-scraping mechanisms. Storing the extracted data in a structured format such as JSON or a database allows for efficient analysis and long-term use.
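
    For the storage step, a minimal sketch using Node’s built-in fs module could be appended after the extraction in the script above; the filename is an arbitrary choice.

    const fs = require('fs');

    // `freelancers` is the array returned by page.evaluate() in the script above
    fs.writeFileSync('fiverr_freelancers.json', JSON.stringify(freelancers, null, 2));
    console.log(`Saved ${freelancers.length} gigs to fiverr_freelancers.json`);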

  • 3 Replies
  • Hadriana Misaki

    Member
    12/24/2024 at 6:44 am

    Adding pagination support is crucial for scraping all freelancer profiles from Fiverr.com. The site displays a limited number of profiles per page, so navigating through all pages ensures that you collect a complete dataset. This can be done by identifying the “Next” button or pagination links and automating the process of clicking and scraping each page. Introducing random delays between requests mimics human behavior, reducing the risk of detection. With proper pagination handling, you can capture a broader range of freelancer data.
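
    As a rough sketch of that approach, the helper below follows a “Next” link until it disappears; the .pagination-next selector is a placeholder rather than Fiverr’s actual markup, and the random delay mimics human browsing.

    // Sketch: collect gigs from every page of a category by following a "Next" link
    async function scrapeAllPages(page) {
        const allGigs = [];
        while (true) {
            // Same extraction logic as the original script
            const gigs = await page.evaluate(() =>
                Array.from(document.querySelectorAll('.gig-card-layout')).map(item => ({
                    name: item.querySelector('.seller-name')?.textContent.trim() || 'Name not available',
                    gigTitle: item.querySelector('.gig-title')?.textContent.trim() || 'Gig title not available',
                    price: item.querySelector('.price')?.textContent.trim() || 'Price not available',
                }))
            );
            allGigs.push(...gigs);

            // '.pagination-next' is a placeholder selector for the "Next" link
            const nextButton = await page.$('.pagination-next');
            if (!nextButton) break;

            // Random 2-5 second pause between pages to mimic human behavior
            await new Promise(resolve => setTimeout(resolve, 2000 + Math.random() * 3000));
            await Promise.all([
                page.waitForNavigation({ waitUntil: 'networkidle2' }),
                nextButton.click(),
            ]);
        }
        return allGigs;
    }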

  • Taliesin Clisthenes

    Member
    01/03/2025 at 7:29 am

    Error handling ensures the scraper remains functional even if Fiverr’s site structure changes. For example, if a freelancer doesn’t have a price or review count displayed, the scraper should skip that profile gracefully without crashing. Adding try-catch blocks or conditional checks for null values can help maintain the scraper’s reliability. Logging skipped profiles or issues also provides insights into potential areas for improvement. Regularly testing and updating the scraper ensures it adapts to Fiverr’s changes over time.
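
    A sketch of that pattern, replacing the page.evaluate() call in the original script (the selectors remain the same illustrative placeholders):

    // Skip incomplete gig cards instead of crashing, and report what was skipped
    const { gigs, skipped } = await page.evaluate(() => {
        const gigs = [];
        const skipped = [];
        document.querySelectorAll('.gig-card-layout').forEach((item, index) => {
            try {
                const name = item.querySelector('.seller-name')?.textContent.trim();
                const gigTitle = item.querySelector('.gig-title')?.textContent.trim();
                const price = item.querySelector('.price')?.textContent.trim();
                // Conditional null checks: keep only cards with the essential fields
                if (!name || !price) {
                    skipped.push(index);
                    return;
                }
                gigs.push({ name, gigTitle: gigTitle || 'Gig title not available', price });
            } catch (err) {
                // A change in the card markup lands here instead of killing the whole run
                skipped.push(index);
            }
        });
        return { gigs, skipped };
    });
    // Logging skipped cards points to selectors that may need updating
    if (skipped.length) console.warn(`Skipped ${skipped.length} cards with missing fields:`, skipped);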

  • Sultan Miela

    Member
    01/20/2025 at 1:51 pm

    Using proxies and rotating user-agent headers is vital for avoiding detection by Fiverr’s anti-bot systems. Sending multiple requests from the same IP address can lead to blocks, so using rotating proxies ensures that traffic appears distributed. Randomizing user-agent headers further mimics real users by simulating different browsers and devices. These techniques, combined with random request intervals, reduce the likelihood of being flagged as a bot. Such precautions are especially important for large-scale scraping projects.
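
    A minimal sketch of both ideas with Puppeteer is below; the proxy addresses and user-agent strings are placeholders to be replaced with your own pool.

    const puppeteer = require('puppeteer');

    // Placeholder pools; substitute your own proxies and user-agent strings
    const proxies = ['http://proxy1.example.com:8000', 'http://proxy2.example.com:8000'];
    const userAgents = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15',
    ];
    const pick = arr => arr[Math.floor(Math.random() * arr.length)];

    (async () => {
        // Route the browser session through a randomly chosen proxy
        const browser = await puppeteer.launch({
            headless: true,
            args: [`--proxy-server=${pick(proxies)}`],
        });
        const page = await browser.newPage();

        // Rotate the user-agent so requests appear to come from different browsers and devices
        await page.setUserAgent(pick(userAgents));

        // Random pause before the request to vary timing
        await new Promise(resolve => setTimeout(resolve, 2000 + Math.random() * 4000));
        await page.goto('https://www.fiverr.com/categories/graphics-design', { waitUntil: 'networkidle2' });

        // ...extraction logic from the main script goes here...
        await browser.close();
    })();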
