How To Make a cURL GET Request in PHP

Have you been curious about how to make a cURL GET request in PHP? If so, you’re in the right place. This guide will explain everything you need to know.

Read on for a breakdown of these requests, with tips on troubleshooting to ensure a successful request with every attempt.

Try Our Residential Proxies Today!

What Is a GET Request?


Let’s start by breaking down the different components involved in a cURL GET request, beginning with the GET request itself.

A GET request is one of the HTTP (Hypertext Transfer Protocol) methods used for requesting data from a specified resource on a server. It is one of the fundamental methods in HTTP and is primarily used to retrieve data from a server.

When you enter a URL in your web browser and press Enter, the browser typically sends a GET request to the server associated with that URL.
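In a GET request, any parameters travel as part of the URL itself (the query string). As a quick PHP sketch, the built-in http_build_query() function assembles and URL-encodes those parameters; example.com is a placeholder domain:

```php
<?php
// Build a GET URL with query parameters (example.com is a placeholder domain)
$base = 'https://example.com/search';
$params = ['q' => 'proxies', 'page' => 2];

// http_build_query() URL-encodes the parameters and joins them with '&'
$url = $base . '?' . http_build_query($params);

echo $url; // https://example.com/search?q=proxies&page=2
```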

What Is cURL?

‘cURL’ is a command-line tool and library for transferring data with URLs. It supports various protocols and offers a variety of features. Here are some of the most important ones:

  • URL Support: cURL supports many different protocols, including HTTP, HTTPS, SCP, SFTP, LDAP, FTP, FTPS, and more.
  • Data Transfer: It allows for data transfer in both directions, i.e., uploading and downloading files.
  • HTTP Methods: cURL supports various HTTP methods, including GET, POST, PUT, DELETE, HEAD, etc.
  • Custom Headers: Users can add custom headers to their HTTP requests using the -H option.
  • Authentication: cURL supports different authentication methods, such as Basic, Digest, NTLM, and more.
  • Follow Redirects: It can automatically follow HTTP redirects with the -L option.
  • Resume Downloads: The -C option allows the resumption of interrupted downloads.
  • Proxy Support: cURL can work through proxy servers and supports various proxy protocols.
  • Cookies: Supports handling cookies, both sending and receiving them.
  • User-Agent: Users can use the -A option to set a custom User-Agent string for HTTP requests.

These features make cURL a versatile and powerful tool for performing a wide range of network-related tasks from the command line or in scripts.

What Is a cURL GET Request?


When you use cURL to perform a GET request, you are using it to retrieve data from a specified URL.

This request sends an HTTP request to a server and then gets the response body, typically containing a page’s content or other requested data. You can also retrieve response headers, which provide more details, including content type, server details, content length, etc.
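When headers are included with the body, the raw response arrives as a header block, a blank line, and then the body. Here is a minimal PHP sketch of splitting such a response apart; the sample response string is made up for illustration (with a live request, you would get it by setting the CURLOPT_HEADER option to true):

```php
<?php
// A sample raw HTTP response (headers + blank line + body), as returned
// when CURLOPT_HEADER is set to true on a cURL handle
$raw = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: 25\r\n\r\n<html><p>Hello</p></html>";

// Headers and body are separated by the first blank line (\r\n\r\n)
list($rawHeaders, $body) = explode("\r\n\r\n", $raw, 2);

// Turn the header block into an associative array (the status line is skipped)
$headers = [];
foreach (array_slice(explode("\r\n", $rawHeaders), 1) as $line) {
    list($name, $value) = explode(': ', $line, 2);
    $headers[$name] = $value;
}

echo $headers['Content-Type']; // text/html
```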

What Is PHP?

PHP (Hypertext Preprocessor) is a widely used server-side scripting language that is particularly well-suited for web development. It is an open-source language, meaning its source code is freely available and can be modified and distributed.

PHP is embedded in HTML and executed on the server, generating dynamic content that is then sent to the client’s browser.

How to Make a cURL GET Request in PHP


Now that you understand the basics, let’s dive in and discuss making a PHP cURL GET request. Here’s a PHP cURL GET request example, broken down step by step:

Step 1: Ensure cURL support

To confirm whether cURL support is enabled in your PHP installation, you can use the function_exists function to check if the curl_init function is available. The curl_init function is a part of the cURL extension in PHP.

Here’s a simple script to check for cURL support:


<?php
// Check if the cURL extension is enabled
if (function_exists('curl_init')) {
    echo 'cURL support is enabled';
} else {
    echo 'cURL support is not enabled';
}



Save this script as a .php file and run it on your web server. If cURL support is enabled, you will see the message “cURL support is enabled.” You’ll see the message “cURL support is not enabled” if it’s not enabled.

How to enable the PHP cURL extension

If you find that cURL support is not enabled, you must enable the PHP cURL extension.

To do this in Apache, you’ll need to take the following steps:

  • Locate the php.ini file (it is typically found in the server’s root folder or public_html directory), then open php.ini in a text editor
  • Search for the ;extension=php_curl.dll line with Ctrl+F and delete the leading semicolon (;) to activate it
  • Use Ctrl+S to save php.ini, then close the file and restart Apache

If you use WAMP, you should follow these steps:

  • Left-click on the WAMP server icon
  • Select PHP -> PHP Extensions -> curl

To enable cURL in Ubuntu, install the cURL extension for your PHP version (on older systems still running PHP 5, the package is named php5-curl):

sudo apt-get install php-curl

Then, run this one:

sudo service apache2 restart

Step 2: Initiate a cURL session

After confirming that cURL is enabled, create a new file named curl.php in your project root folder. Then, add the code featured below to your new file:


$ch = curl_init();

Once you do this, you’ll initiate a cURL session using the curl_init() function.

Step 3: Make a PHP cURL GET Request

In PHP, you can make a cURL GET request using the curl_init() function to initialize a cURL session, set various options with curl_setopt(), execute the request with curl_exec(), and finally close the cURL session with curl_close().

Here’s a basic example:


<?php
// URL to make a GET request to (example.com is a placeholder)
$url = "https://example.com";

// Initialize cURL session
$ch = curl_init($url);

// Set cURL options
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // Return the response instead of outputting it

// Execute cURL session and store the response in $response
$response = curl_exec($ch);

// Check for cURL errors
if (curl_errno($ch)) {
    // Handle errors if any
    echo 'Curl error: ' . curl_error($ch);
}

// Close cURL session
curl_close($ch);

// Display the response
echo $response;


This example demonstrates a simple cURL GET request. The CURLOPT_RETURNTRANSFER option is set to true to ensure the response is returned as a string rather than output directly. You should also check for cURL errors using curl_errno().

You can customize the request by adding more options using curl_setopt() as needed. For example, you might want to set headers, handle redirects, or specify additional cURL settings.
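As a sketch of that kind of customization (the URL and header values below are placeholders), curl_setopt_array() lets you apply several options in one call:

```php
<?php
// A sketch of a more customized GET request using curl_setopt_array()
// (example.com and the header values are placeholders)
$ch = curl_init();

$options = [
    CURLOPT_URL            => 'https://example.com',
    CURLOPT_RETURNTRANSFER => true,  // return the response as a string
    CURLOPT_FOLLOWLOCATION => true,  // follow HTTP redirects
    CURLOPT_MAXREDIRS      => 5,     // but no more than 5 of them
    CURLOPT_TIMEOUT        => 10,    // give up after 10 seconds
    CURLOPT_HTTPHEADER     => [
        'Accept: text/html',
        'User-Agent: MyScraper/1.0',
    ],
];

$ok = curl_setopt_array($ch, $options);

// $response = curl_exec($ch);  // uncomment to actually send the request
curl_close($ch);
```

curl_setopt_array() returns false as soon as one option fails to apply, so it is worth checking its return value before executing the request.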

GET cURL Request Troubleshooting


What should you do if your GET cURL request isn’t working?

Troubleshooting a cURL request involves checking various aspects to identify potential issues. Here are some steps you can take to troubleshoot a GET cURL request:

Check cURL installation

Ensure that cURL is installed on your system. You can check this by running the following command in your terminal:

curl --version

You may need to install it based on your operating system if it’s not already set up.

Try a basic cURL request

Try a basic cURL request to a known URL to check if cURL is working.

For example:

curl https://example.com
If this doesn’t work, it could indicate network issues or problems with the cURL installation.

Verbose mode

Run your cURL request in verbose mode (-v or --verbose). Doing so will provide detailed information about the request and response, helping you identify where the issue might be:

curl -v https://example.com

SSL/TLS issues

If you are making requests to an HTTPS endpoint, SSL/TLS issues might be the cause. Ensure that the server’s SSL certificate is valid.

You can use the --insecure option to bypass SSL verification (only do this for debugging, since it disables a security check):

curl --insecure https://example.com

Check the URL and parameters

Double-check the URL and any parameters you are passing. Ensure that they are correctly formatted and that there are no typos.

Firewall and network issues

Verify that your server or network allows outgoing connections on the required port. Firewalls or network restrictions could prevent cURL requests.

User-agent and headers

Some servers may require a specific User-Agent header. Include headers if needed using the -H option:

curl -H "User-Agent: Your-User-Agent" https://example.com

HTTP status code

Pay attention to the HTTP status code returned by the server. A status code in the 4xx or 5xx range indicates an issue on the server or with the request.
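A small helper function (not part of cURL itself) can bucket a status code into those ranges; with a live request, the code itself would come from curl_getinfo($ch, CURLINFO_HTTP_CODE):

```php
<?php
// Bucket an HTTP status code into the broad categories discussed above
function statusCategory(int $code): string
{
    if ($code >= 200 && $code < 300) return 'success';
    if ($code >= 300 && $code < 400) return 'redirect';
    if ($code >= 400 && $code < 500) return 'client error';
    if ($code >= 500 && $code < 600) return 'server error';
    return 'other';
}

// With a live cURL handle, you would call:
//   $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
echo statusCategory(404); // client error
```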

Response content

Inspect the response content for any error messages or clues about what might be going wrong. You can use the -i option to include headers in the output:

curl -i https://example.com

Proxy configuration

If you are behind a proxy, ensure that cURL is configured to use it. Set the proxy details with the -x (or --proxy) option:

curl -x http://your-proxy:port https://example.com


Timeouts

Set a reasonable timeout value with the --max-time option to avoid long waits for unresponsive servers:

curl --max-time 10 https://example.com

How to Use cURL for Web Scraping


Now that you know how to cURL in PHP, you can also use cURL for other tasks, including web scraping. The good news is that you already know the most fundamental cURL command — sending an HTTP GET request.

Here are some other commands you’ll need to keep in mind:

Saving the web page content to a file

You can use cURL to download files from a web server. To save content instead of displaying it, use the -o or --output flag followed by a filename:

curl -o output.html https://example.com

This command saves web page content in a file named output.html.

Following redirects

Some websites send users to a different URL using HTTP redirects. For cURL to automatically follow redirects, use the -L or --location flag:

curl -L https://example.com

Customizing user-agent

Some websites may block or serve different content based on the requesting client’s user agent. To bypass such restrictions, you can use the -A or --user-agent flag:

curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" https://example.com

Scrape specific data

Remember that cURL doesn’t parse HTML. You’ll need to use it in combination with other command-line tools like grep to extract specific data.

For example, you might use the following command to fetch the content between the h1 tags:

curl https://example.com | grep -o '<h1.*</h1>'
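The same idea can be expressed in PHP with preg_match() instead of grep. In this sketch, $html stands in for a page you have already fetched with cURL:

```php
<?php
// $html stands in for a page fetched with cURL; the markup is made up
$html = '<html><body><h1 class="title">Example Heading</h1></body></html>';

// Non-greedy match for the first <h1>...</h1> element, capturing its text
if (preg_match('/<h1[^>]*>(.*?)<\/h1>/', $html, $m)) {
    echo $m[1]; // Example Heading
}
```

Regular expressions work for quick one-off extractions like this, but for anything more involved a real HTML parser (such as PHP’s DOMDocument) is more robust.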

Send POST requests

A POST request is used to submit data to be processed to a specified resource. If the data you need is behind a form, you might need to send a POST request like this:

curl -d "param1=value1&param2=value2" -X POST https://example.com

You can also use this method to register or log into an account when the form is using the POST request method.

Handle multiple pages

Your data will likely be spread across multiple pages. In this case, you must loop over page numbers and substitute them into the URL.

Say you want to request three different pages with a URL structure like https://example.com/catalogue/page-1.html, https://example.com/catalogue/page-2.html, and so on (example.com is a placeholder).

This is what the bash code would look like:

#!/bin/bash

# Base URL for the book catalog
base_url="https://example.com/catalogue"

# Loop through the first three pages
for i in {1..3}; do

# Construct the full URL
url="${base_url}/page-${i}.html"

# Use curl to fetch the content and save it to a file
curl -o "page-${i}.html" "$url"

# Wait for a second to be polite and not overload the server
sleep 1

done
The sleep command prevents you from sending too many requests at once.
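If you are scripting in PHP rather than bash, the same pagination pattern looks like this. The actual fetch is commented out so the sketch runs without network access, and example.com is again a placeholder:

```php
<?php
// Build and (optionally) fetch a numbered sequence of pages
// (example.com stands in for the real site)
$baseUrl = 'https://example.com/catalogue';
$urls = [];

for ($i = 1; $i <= 3; $i++) {
    // Construct the full URL for this page
    $url = sprintf('%s/page-%d.html', $baseUrl, $i);
    $urls[] = $url;

    // Fetch the page (commented out so the sketch runs offline):
    // $ch = curl_init($url);
    // curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // file_put_contents("page-{$i}.html", curl_exec($ch));
    // curl_close($ch);

    // Wait a second between requests to avoid overloading the server
    // sleep(1);
}

print_r($urls);
```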

Challenges with Using cURL for Web Scraping


While you can technically use cURL for web scraping, there are some downsides. Here are some potential obstacles you might run into:

No JavaScript execution

cURL cannot execute JavaScript. If a website relies on JavaScript to load content dynamically, then cURL will not have access to that content. If that happens, you’ll have to use a headless browser like Puppeteer or Selenium.

No parsing tool

cURL is not designed to parse HTML or extract specific data. Instead, it simply retrieves the raw data.

Because of this, you must use cURL in conjunction with other tools or languages that can parse HTML, which can be frustrating for some.

Limited debugging features

cURL does not have extensive debugging capabilities. Therefore, understanding errors requires a solid grasp of HTTP.

No web page interaction

cURL cannot interact with web pages, fill out forms, or simulate clicks. These limitations hinder its scraping capabilities when you’re working with more dynamic websites. You’d have to use a headless browser to circumvent the issue.

Using cURL with Proxies

A proxy is an intermediary server or software that acts as a bridge between a user’s device and the destination server or resource they are trying to access.

When you use a proxy, your requests first get sent to the proxy server, which then forwards those requests to the target server. The target server will then respond to the proxy, and the proxy, in turn, will forward the response back to your device.

There are several reasons to consider using proxies, including the following:

  • Privacy and Anonymity: Proxies can mask your IP address, providing anonymity when accessing websites. This anonymity is often used to protect user identity and location information.
  • Access Control: A proxy can block access to certain websites or content for security reasons or to ensure compliance with organizational policies.
  • Content Filtering: Proxies can filter content, blocking access to specific websites or content categories based on predefined rules.
  • Load Balancing: Proxies can distribute network or application traffic across multiple servers to balance the load and improve performance.
  • Caching: Proxies can cache frequently accessed resources, reducing the load on the destination server and improving response times for users.
  • Bypassing Restrictions: In some cases, proxies can be used to bypass geographical restrictions imposed by websites or services, allowing users to access content that may be restricted in their region.

Proxies play a vital role in web scraping because they allow you to work around rate limits, avoid issues like IP blocking, and keep a sense of anonymity. Incorporating proxies into your cURL commands can improve web scraping efficiency and reliability, helping you get more done in less time.

cURL also makes it easy for you to use proxies in your web scraping tasks.

To use a proxy with cURL, you would simply include the -x or –proxy option followed by the proxy address and port. For example, you could use the following line of code:

curl --proxy "http://proxy_address:port" "https://example.com"
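In PHP, the equivalent is the CURLOPT_PROXY option. This is a sketch only; the proxy address, port, and credentials are placeholders you would replace with values from your provider:

```php
<?php
// Route a PHP cURL request through a proxy
// (proxy_address, port, and the credentials are placeholders)
$ch = curl_init('https://example.com');

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$ok = curl_setopt($ch, CURLOPT_PROXY, 'http://proxy_address:port');

// If the proxy requires authentication:
curl_setopt($ch, CURLOPT_PROXYUSERPWD, 'username:password');

// $response = curl_exec($ch); // uncomment once a real proxy is in place
curl_close($ch);
```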

How to Choose a Proxy Provider

Using proxies will help significantly with your web scraping efforts. How do you choose a proxy provider, though?

Keep the following tips in mind when selecting a proxy provider to find one that can meet all of your web scraping needs:

Type of proxies offered

You can utilize several different types of proxies. The following are some of the most well-known options:

  • Residential Proxies: These use IP addresses assigned by Internet Service Providers (ISPs) to residential users. They offer a greater sense of legitimacy compared to some other proxies.
  • Datacenter Proxies: These are IP addresses from data centers, providing speed but potentially less authenticity.
  • Mobile Proxies: IP addresses assigned to mobile devices by mobile internet service providers (ISPs).
  • ISP Proxies: IP addresses hosted in data centers but registered under Internet Service Providers, combining datacenter speed with residential-level legitimacy.

Consider the type of proxy you need and make sure the provider can deliver them.

Geographical coverage

Find out the locations where the proxy servers are available. Depending on your unique use case, you might need proxies in specific countries or regions.

Reliability and uptime

Look for a provider with a high level of reliability and uptime. Downtime can disrupt your activities and interfere with your productivity, so it’s critical that you choose a provider with a proven track record of stability.

Speed and performance

Test the speed of the proxies for your specific use case. Some providers may offer faster connections than others, which can be crucial for tasks such as web scraping or accessing data in real time.

Security and anonymity

Ensure the proxy provider offers secure connections. Check for features like HTTPS support, encryption, and assurances of anonymity. For some applications, it may be essential to have proxies that do not reveal your actual IP address.

Protocols supported

Different applications and use cases may require specific protocols. Check whether the provider supports protocols like HTTP, HTTPS, SOCKS, etc., before you make a final decision.


Scalability

Consider whether the provider can scale with your needs. For example, if your usage increases, you’ll want a provider that can accommodate growing demands.

Customer support

Look for a provider with responsive customer support. Issues or questions may arise, especially at the beginning of your proxy journey, and having reliable support can be crucial for a smooth experience.

Trial periods or money-back guarantee

Many proxy providers offer trial periods or money-back guarantees. Take advantage of these to test the service and see if it meets your requirements.


Pricing

Compare pricing plans among different providers and look for those that are transparent about their costs and what you get in exchange for a particular price. Consider not only the upfront costs but also any potential overage charges or limitations on data transfer.

Reviews and Reputation

Don’t forget to do your research and read reviews about the proxy provider. Feedback from other users can provide valuable insights into the reliability and performance of the service before you invest your hard-earned money.



Conclusion

Now that you understand the basics of making a cURL GET request in PHP, as well as how you can use cURL for other tasks like web scraping, it’s time to get to work. Follow the tips and guidelines shared above to get started.

If you want to use cURL for web scraping, you definitely need proxies. That’s where Rayobyte comes in.

Rayobyte is the largest U.S.-based proxy provider offering residential, ISP, data center, and mobile proxies. Get in touch today to learn more about our services, or start a free trial.

The information contained within this article, including information posted by official staff, guest-submitted material, message board postings, or other third-party material is presented solely for the purposes of education and furtherance of the knowledge of the reader. All trademarks used in this publication are hereby acknowledged as the property of their respective owners.
