How to scrape job listings from a recruitment website?
Scraping job listings is a valuable way to gather data about job trends and opportunities. Recruitment websites often have structured data for job titles, descriptions, locations, and salaries, making them suitable for scraping. Begin by inspecting the site’s HTML to identify patterns in the job postings. For static sites, libraries like BeautifulSoup are effective. However, for sites with dynamic content or infinite scrolling, Selenium or Puppeteer may be needed to load and extract all job postings.
Here’s an example of scraping job listings using requests and BeautifulSoup:

import requests
from bs4 import BeautifulSoup

url = "https://example.com/jobs"
headers = {"User-Agent": "Mozilla/5.0"}  # some sites block requests with default user agents

response = requests.get(url, headers=headers)
if response.status_code == 200:
    soup = BeautifulSoup(response.content, "html.parser")
    # Each posting is assumed to sit in a <div class="job-listing"> container
    jobs = soup.find_all("div", class_="job-listing")
    for job in jobs:
        title = job.find("h3", class_="job-title").text.strip()
        location = job.find("span", class_="job-location").text.strip()
        print(f"Title: {title}, Location: {location}")
else:
    print("Failed to fetch job listings.")
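For pages that render postings with JavaScript or use infinite scrolling, the same extraction can run in a real browser session. Below is a rough Selenium sketch; the selectors (job-listing, job-title, job-location) and the scroll-until-stable loop are assumptions you would adapt to the actual site:

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome()  # assumes a matching chromedriver is installed and on PATH
driver.get("https://example.com/jobs")

# Scroll repeatedly so infinite-scroll listings get a chance to load
last_height = 0
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # crude fixed wait; explicit waits are more robust
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

# Extract the rendered listings
for job in driver.find_elements(By.CSS_SELECTOR, "div.job-listing"):
    title = job.find_element(By.CSS_SELECTOR, "h3.job-title").text.strip()
    location = job.find_element(By.CSS_SELECTOR, "span.job-location").text.strip()
    print(f"Title: {title}, Location: {location}")

driver.quit()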
For sites with advanced features like filters or search options, browser automation tools are helpful. It’s also important to include proper error handling and respect the website’s terms of use. How do you manage scraping when job listings are spread across multiple pages?
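On the pagination question, one common starting point is to loop over a page query parameter and stop when a page returns no listings. This is only a sketch and assumes the site paginates via a ?page=N parameter, which won’t hold everywhere:

import requests
from bs4 import BeautifulSoup
import time

base_url = "https://example.com/jobs"
headers = {"User-Agent": "Mozilla/5.0"}

page = 1
while True:
    response = requests.get(base_url, params={"page": page}, headers=headers)
    if response.status_code != 200:
        break
    soup = BeautifulSoup(response.content, "html.parser")
    jobs = soup.find_all("div", class_="job-listing")
    if not jobs:
        break  # no listings returned; assume the last page was reached
    for job in jobs:
        title = job.find("h3", class_="job-title").text.strip()
        print(f"Page {page}: {title}")
    page += 1
    time.sleep(1)  # pause between requests to avoid hammering the server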