Extracting Stock Photos from Pexels.com Using Rust & SQLite: Scraping High-Resolution Images, Download Counts, and Photographer Information

In the digital age, the demand for high-quality stock photos is ever-increasing. Websites like Pexels.com provide a vast repository of free stock images that can be used for various purposes. However, manually downloading these images and their associated metadata can be time-consuming. This article explores how to automate the process using Rust for web scraping and SQLite for data storage, focusing on extracting high-resolution images, download counts, and photographer information.

Understanding the Basics of Web Scraping

Web scraping is the process of extracting data from websites: a program fetches a web page and pulls the useful information out of it. In this context, we aim to collect images and metadata from Pexels.com. Rust, a systems programming language known for its performance and safety, is an excellent choice for this task thanks to its concurrency capabilities and memory-safety guarantees.

Before diving into the code, it’s essential to understand the legal and ethical considerations of web scraping. Always ensure that you comply with the website’s terms of service and robots.txt file. Pexels.com provides an API for developers, which is a more reliable and ethical way to access their data, and it is the approach this article takes.
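
If you do scrape pages directly, checking the site’s robots.txt first is straightforward. Here is a minimal sketch using `reqwest` (introduced below); the URL is just the standard robots.txt location:

// Fetch and print the site's robots.txt so you can review its crawl rules.
async fn print_robots_txt() -> Result<(), reqwest::Error> {
    let body = reqwest::get("https://www.pexels.com/robots.txt")
        .await?
        .text()
        .await?;
    println!("{}", body);
    Ok(())
}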

Setting Up the Rust Environment

To begin, you need to set up your Rust environment. Rust can be installed using the Rustup tool, which manages Rust versions and associated tools. Once installed, you can create a new Rust project using Cargo, Rust’s package manager and build system.
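
If Rust isn’t installed yet, the official installer can be fetched with a single command on Linux and macOS (Windows users can download rustup-init.exe from rustup.rs):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh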

Here’s how you can set up a new Rust project:

cargo new pexels_scraper
cd pexels_scraper

Next, you’ll need to add dependencies to your Cargo.toml file. We’ll use the `reqwest` library for HTTP requests, `serde_json` to work with the JSON the API returns, and `tokio` for asynchronous programming. (The `scraper` crate would only be needed for parsing raw HTML; since we go through the Pexels API, JSON handling is enough.)

[dependencies]
reqwest = { version = "0.11", features = ["json"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }

Scraping High-Resolution Images and Metadata

With the environment set up, we can start writing the Rust code to scrape data from Pexels.com. The goal is to extract high-resolution images, download counts, and photographer information. We’ll use the Pexels API for this purpose, which requires an API key.
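
A quick note on handling the key: rather than hard-coding it in source, a common pattern is to read it from an environment variable. Here is a minimal sketch, assuming you export a variable named PEXELS_API_KEY (the name is our convention, not something Pexels requires):

use std::env;

// Read the API key from the environment and fail fast with a clear message.
fn api_key_from_env() -> String {
    env::var("PEXELS_API_KEY").expect("set PEXELS_API_KEY to your Pexels API key")
}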

First, create a function to fetch data from the Pexels API. This function will make an HTTP GET request to the API endpoint and parse the JSON response.

use reqwest::Client;
use serde_json::Value;

// Fetch one page of search results from the Pexels API as untyped JSON.
async fn fetch_images(api_key: &str) -> Result<Value, reqwest::Error> {
    let client = Client::new();
    let res = client
        .get("https://api.pexels.com/v1/search?query=nature&per_page=10")
        .header("Authorization", api_key)
        .send()
        .await?;

    let json: Value = res.json().await?;
    Ok(json)
}
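
The search endpoint also accepts `page` and `per_page` parameters, so results can be walked one page at a time. Below is a sketch of a parameterized variant; `fetch_page` is a hypothetical helper, and it uses reqwest’s query builder instead of a hand-assembled URL:

// Hypothetical variant of fetch_images: parameterized query and page number.
async fn fetch_page(api_key: &str, query: &str, page: u32) -> Result<Value, reqwest::Error> {
    let client = Client::new();
    let page = page.to_string();
    client
        .get("https://api.pexels.com/v1/search")
        .query(&[("query", query), ("per_page", "10"), ("page", page.as_str())])
        .header("Authorization", api_key)
        .send()
        .await?
        .json()
        .await
}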

Next, parse the JSON response to extract the desired information. We’ll focus on the image URL, photographer name, and download count. Note that the standard Pexels API photo object does not document a `downloads` field, so the code below falls back to 0 whenever it is absent.

// Print the fields we care about for each photo in the response.
fn parse_images(json: &Value) {
    if let Some(photos) = json["photos"].as_array() {
        for photo in photos {
            let url = photo["src"]["original"].as_str().unwrap_or("");
            let photographer = photo["photographer"].as_str().unwrap_or("");
            // Fall back to 0 if the API does not return a download count.
            let download_count = photo["downloads"].as_u64().unwrap_or(0);

            println!("URL: {}, Photographer: {}, Downloads: {}", url, photographer, download_count);
        }
    }
}

Storing Data in SQLite

Once we have the data, the next step is to store it in a database for easy retrieval and analysis. SQLite is a lightweight, file-based database that is perfect for this task. We’ll use the `rusqlite` crate to interact with SQLite in Rust.

First, add `rusqlite` to your Cargo.toml file. The optional `bundled` feature compiles SQLite into the binary, so no system library is required:

[dependencies]
rusqlite = { version = "0.26", features = ["bundled"] }

Create a function to set up the SQLite database and insert the scraped data.

use rusqlite::{params, Connection, Result};

// Open (or create) the database file and ensure the images table exists.
fn setup_database() -> Result<Connection> {
    let conn = Connection::open("pexels.db")?;
    conn.execute(
        "CREATE TABLE IF NOT EXISTS images (
            id INTEGER PRIMARY KEY,
            url TEXT NOT NULL,
            photographer TEXT NOT NULL,
            download_count INTEGER
        )",
        [],
    )?;
    Ok(conn)
}

// Insert one scraped record into the images table.
fn insert_image(conn: &Connection, url: &str, photographer: &str, download_count: u64) -> Result<()> {
    conn.execute(
        "INSERT INTO images (url, photographer, download_count) VALUES (?1, ?2, ?3)",
        // SQLite stores integers as signed 64-bit values, so convert explicitly.
        params![url, photographer, download_count as i64],
    )?;
    Ok(())
}
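
To read the data back out, say to verify inserts or feed an analysis step, a query helper might look like the sketch below; `list_images` is a hypothetical addition. Note that re-running the scraper will accumulate duplicate rows; declaring `url` as UNIQUE and switching to INSERT OR IGNORE is one way to make inserts idempotent.

// Hypothetical helper: print every record stored in the images table.
fn list_images(conn: &Connection) -> Result<()> {
    let mut stmt = conn.prepare("SELECT url, photographer, download_count FROM images")?;
    let rows = stmt.query_map([], |row| {
        Ok((
            row.get::<_, String>(0)?,
            row.get::<_, String>(1)?,
            row.get::<_, i64>(2)?,
        ))
    })?;
    for row in rows {
        let (url, photographer, downloads) = row?;
        println!("{} by {} ({} downloads)", url, photographer, downloads);
    }
    Ok(())
}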

Finally, integrate the database functions with the scraping logic to store the extracted data.

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "YOUR_PEXELS_API_KEY";
    let json = fetch_images(api_key).await?;
    let conn = setup_database()?;

    if let Some(photos) = json["photos"].as_array() {
        for photo in photos {
            let url = photo["src"]["original"].as_str().unwrap_or("");
            let photographer = photo["photographer"].as_str().unwrap_or("");
            let download_count = photo["downloads"].as_u64().unwrap_or(0);

            insert_image(&conn, url, photographer, download_count)?;
        }
    }

    Ok(())
}
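
The table stores only image URLs; to save the high-resolution files themselves, each URL can be fetched and written to disk. A minimal sketch, with `download_image` as a hypothetical helper and rate limiting left out:

use tokio::fs;

// Hypothetical helper: fetch the image bytes and write them to a local file.
async fn download_image(
    client: &reqwest::Client,
    url: &str,
    path: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    let bytes = client.get(url).send().await?.bytes().await?;
    fs::write(path, &bytes).await?;
    Ok(())
}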

Conclusion

In this article, we explored how to automate the extraction of high-resolution images and metadata from Pexels.com using Rust and SQLite. By leveraging Rust’s performance and safety features, we efficiently retrieved data through the Pexels API and stored it in a SQLite database for easy access. This approach not only saves time but also gives you a structured way to manage and analyze the data. As you continue to explore web scraping, always remember to adhere to ethical guidelines and respect the terms of service of the websites you interact with.
