How to Download Files and Web Pages Using Wget in Python

This tutorial explains how to use Wget from Python and why it offers benefits to many. It compares and contrasts the Python Requests library with Wget, providing information you can apply to your next task. Wget is a potent command-line tool for file downloads, and pairing it with Python gives you scriptable, repeatable downloads. Here's what you can expect from the process of adopting Python Wget connectivity.

Try Our Residential Proxies Today!

What is Wget?


Before you can learn about Wget in Python, you need to understand what it is and why it has become a popular choice for so many download tasks.

GNU Wget is a software package designed for retrieving files using HTTP, HTTPS, FTP, and FTPS. These are the most widely used internet protocols, which covers most of the ways you are likely to gather data.

Wget is called a non-interactive command-line tool. As a result, it can be called from various points, including scripts, cron jobs, and terminals without X-Windows support, among many others. It comes preinstalled on most Unix-like operating systems, and you can also use it on Windows.

There are various reasons why some prefer to use Wget. Consider, for example, the wide range of features it offers that make retrieving large files effortless. It also can help with mirroring entire web or FTP sites with ease. Key features that help Wget to stand out in these areas include:

  • It has the ability to resume aborted downloads. This is done using REST and RANGE features.
  • It can use filename wildcards as well as recursively mirror directories with ease.
  • It supports both HTTP cookies and HTTP proxies.
  • It offers NLS-based message files for numerous languages, making it exceptionally easy to use.
  • It converts absolute links in downloaded documents to relative (when desired). This allows downloaded documents to maintain links to each other locally.
  • It runs on virtually all UNIX-like operating systems and Microsoft Windows, providing excellent accessibility for most users.
  • Wget supports persistent HTTP connections.
  • It offers unattended and background operational features.
  • It utilizes local file timestamps. This enables the determination of when documents must be re-downloaded when mirroring websites.

With a feature set this comprehensive, Wget is one of the easiest tools to reach for when you need it. Yet, when it comes to Python Wget connectivity, there are a few additional factors to keep in mind.

Why Should You Consider Wget Instead of a Python Package Like Requests?


Wget is certainly a feature-filled and easy-to-use command-line tool, and there are many benefits to using it. However, the question is: why should you consider it for downloading files instead of what you are already using now? That's especially relevant for those already using popular libraries like Requests. Does Wget offer benefits compared to downloading files with other popular libraries?

If you compare Wget with Requests, you'll see several key benefits that come from making the switch (depending on your needs and goals).

The benefits of using Wget in Python are hard to miss. Some of them include:

  • Support for protocols: Wget supports FTP and FTPS in addition to HTTP and HTTPS, while Requests handles only HTTP and HTTPS.
  • Resumption of tasks: Though not possible with Requests (at least not without extra work), Wget will resume aborted or interrupted downloads. This improves efficiency.
  • Bandwidth limiting: Another concern for many organizations is how much network bandwidth a download consumes. Wget lets you specify a limited download speed, so it does not consume your network's bandwidth as aggressively.
  • Language function: As noted, Wget offers NLS-based message files for far more languages, which improves its overall ability to meet specific objectives.
  • Proxy connections: Another benefit over Requests is that Wget has built-in support for HTTP and HTTPS proxies.
  • Conversion: Wget can convert absolute links in downloaded documents to relative links, which Requests does not do.
  • Wildcards: Wget also supports filenames and network locations with wildcards, which is not readily available through Requests.
  • Unattended: Another key benefit is that it can perform unattended and background downloading operations when it is desired, which is limited in Requests.
  • Timestamps: Also beneficial in some applications is that it can use local file timestamps to determine when a document must be updated and re-downloaded if you are mirroring.
  • Robot exclusion rules: Also notable, Wget respects robot exclusion rules defined in robots.txt automatically, which is not something that occurs readily with Requests.

These are some of the core differences between these applications. In many ways, Wget Python applications are the route to take, but it is critical to understand the framework. Wget offers far more flexibility through numerous features (more than those listed here), and that can enhance productivity and efficiency while streamlining operations overall. For these reasons, Wget is a powerful tool, and the same comparison holds against any Python HTTP client library.

If you have specific concerns and needs, the Wget manual provides additional insight into its functionality and features, as well as documenting the tool's limitations.

Now that you know a bit more about what Wget is, it’s time to see how it can handle the tasks you have in an efficient and streamlined manner.

How Can Wget Help You?


That long list of features can apply to a variety of tasks. However, what is most important for those who are downloading files is how Wget works to do this.

It can follow links in HTML pages and download the files referenced within those pages. This is an excellent benefit, especially when you need to retrieve an entire website.

That means that, when it comes to web crawling, Wget is an excellent tool overall.

Also, note that Wget is an excellent choice for writing scripts that download files or web pages from the web. For all of these reasons, this could be an excellent time to consider using Python and Wget together to achieve your goals.

What Is the Wget Command?

This is a popular question. In short, Wget is a command-line tool that enables you to download files from the internet to your current working directory. To do this, it operates in the background, so it can continue its tasks even when you are not logged in.

Wget is written in portable C and is usable, as noted, on any Unix system. If you need to use another system, including macOS, Microsoft Windows, AmigaOS, or other platforms, you can set it up as well with a few extra steps.

How Do You Install Wget?

The installation and set-up process for Wget is a bit different depending on the operating system you are using. Here is what you need to know:

  • Linux: Most of the time, Wget is preinstalled on your Linux device. If it is not, use your distro's package manager to locate and install it.
  • Windows: Download the Wget binary for Windows and place it in a folder on your system. Then add that folder to your PATH environment variable. An example path would be D:\Program Files (x86)\Wget.
  • macOS: Install it with Homebrew or a similar tool.
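Once installed, you can verify from Python that the binary is actually reachable before shelling out to it. This is a minimal sketch using the standard library's shutil.which; it is a convenience check, not something Wget itself requires:

```python
import shutil

def wget_available():
    # returns True if the wget binary can be found on the PATH,
    # which is a quick sanity check before calling it from a script
    return shutil.which("wget") is not None
```

If this returns False, revisit the installation steps for your operating system above.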

The next step is to ensure you have the right version of Python in place. You should have Python 3 or later installed on your machine (older versions will not work well with the examples here). If you have not done so yet, download the installer, double-click it, and follow the simple onscreen instructions.

Also beneficial is to use a Python IDE. Some of the recommendations for this specific need include Visual Studio Code with the Python extension, or you can use PyCharm Community Edition. Choose whichever you prefer.

How to Set Up a Python Project Using Wget


Now that you have a good foundation to start from, let's work on building out your first project. To create a Wget Python project within a virtual environment, run the following commands:

mkdir wget-python-demo

cd wget-python-demo

python -m venv env

The wget-python-demo directory is your project folder. Your next step is to open it in your Python IDE, create a script file, and initialize it as you normally would. You can do that with this line:

print('Hello, World!')

Of course, you likely know that this is just a simple script, and it will print “Hello, World!” within your terminal.

Ultimately, this file will include the Wget integration logic. Before you add that, though, make sure the script runs properly. Find and press the run button within your IDE, or run the script from your terminal with the python command.
When you do this, you should see the following in the terminal:

Hello, World!

Pretty straightforward. As long as you have that right, you can move on. This ensures your Python project is ready to go. (Yes, you probably know how to do this already, but it is important to see the process from the beginning.)

The next step is to get to use Wget in the process.

Command Line Interface Explained

The next step will involve writing and executing a CLI command. A CLI (Command Line Interface) is a text-based interface that allows you to input commands that interact with your computer's operating system. You interact with it by typing lines of text, each called a command line. The interface is used to run programs, manage the files on the computer, and interact with the computer directly.

How to Write a Function to Execute CLI Commands


The next task is learning how to execute CLI commands. The easiest way to do this is through the subprocess module, which is the most direct method for running CLI commands from a Python script. You may already be familiar with the subprocess module, but if not, you can take a closer look at it now.

This module is part of the Python Standard Library. It lets you do several things, including:

  • Spawn new processes
  • Connect to their input, output, and error pipes
  • Obtain return codes

This module gives you everything necessary to execute commands from Python. To see how it works, look at how the Popen() method is used to execute CLI commands:

You will paste this into your file:

import subprocess

def execute_command(command):
    """
    Execute a CLI command and return the output and error messages.

    Parameters:
    - command (str): The CLI command to execute.

    Returns:
    - output (str): The output generated by the command.
    - error (str): The error message generated by the command, if any.
    """
    try:
        # execute the command and capture the output and error messages
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        output, error = process.communicate()
        output = output.decode("utf-8")
        error = error.decode("utf-8")

        # return the output and error messages
        return output, error
    except Exception as e:
        # if an exception occurs, return the exception message as an error
        return None, str(e)

With Popen(), the command string is executed in a new process at the operating-system level. The shell=True option tells the method to use the default shell configured on your operating system.

Once you do this, you can then invoke the CLI command within Python. To do that, try out this example:

output, error = execute_command("<CLI command string>")

if error:
    print("An error occurred while running the CLI command:", error)
else:
    print("CLI command output:", output)
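If you prefer a more compact runner, the standard library's subprocess.run (a higher-level wrapper over Popen) achieves the same result. This is an alternative sketch, not something the walkthrough requires:

```python
import subprocess

def run_command(command):
    # run a shell command in one call and return decoded output and
    # error text; capture_output=True wires up the stdout/stderr pipes
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout, result.stderr

output, error = run_command("echo Hello, World!")
```

Either approach works for the Wget commands that follow; run() simply saves you the communicate() and decode() steps.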

How to Use Wget with Python


Now, let’s work through some use cases for Wget Python and how you can try them out.

Start with the basic Wget command syntax:

wget [options] [url]

In this command, the [options] refers to the list of options and flags that are supported by the CLI tool. This is what will customize the behavior of the tool.

The [url] refers to the URL of the file you wish to download. It can be a direct link to the file, or the URL of a webpage that contains one or more links to files.
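To make the [options] and [url] structure concrete, here is a small hypothetical helper that assembles a Wget command string in Python; the URL below is a placeholder, not a real download link:

```python
def build_wget_command(url, options=None):
    # assemble "wget [options] [url]" from an optional list of flags,
    # mirroring the command syntax shown above
    parts = ["wget"] + list(options or []) + [url]
    return " ".join(parts)

# e.g. build_wget_command("https://example.com/file.zip", ["--tries=50"])
# produces: wget --tries=50 https://example.com/file.zip
```

A string built this way can then be passed straight to the execute_command() function defined earlier.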

Wget in Python can be quite interesting (and for someone who likes to write code, fun to experiment with). The commands below are the ones you are most likely to need over time.

How to Download a File with Wget

Let’s say that you want to do the most basic of things – you want Wget to download a single file. You want it to then be stored in your current working directory. Let’s say you want to do so with the most up-to-date version of WordPress, for example.

In this process, you will enter wget followed by the file's download URL into the command line. When you do, you will see output like the following:

--2024-05-23 12:00:10--

HTTP request sent, awaiting response… 200 OK

Length: 9332728 (8,9M) [application/zip]

100%[===================>] 8,90M 6,88MB/s in 1,3s

2024-05-23 12:00:14 (6,88 MB/s) - saved [9332728/9332728]

When you do this, you will see the downloaded file in your current working directory. Typically, you will also see other information, including the download speed, progress, time, date, and size. That information can be useful in some applications.

How to Use Wget to Download More Than One File

This is a nice benefit of Wget, and you are sure to use it, especially if you are doing any type of web scraping: you can download multiple files at one time.

In order to facilitate this, you need to create a text document. Within the text document, you will need to place each of the download URLs.

Here is an example using the same premise as above, but we want to get the latest updates for Drupal, Joomla, and WordPress. (You may need to do that at some point, right?)

To do this, we start at the top:

nano example.txt

When you do that, it creates the example.txt file and opens the text editor interface for you. With the editor open, paste in the download URLs you want Wget to fetch, one URL per line. In this example, those would be the links for the latest Drupal, Joomla, and WordPress releases.

It's a bit more involved than the first step, but still quite simple overall, right?

Note that you can use the -i option to download all of the files listed in your text file. To do that, you would type:

wget -i example.txt

It may take a moment, but you will have all of the updated versions of these three programs in place and ready to go within your directory.

Remember that you can follow this same process for numerous other steps you need to take. You just need to modify the command line to match anything you are doing.
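The text-file workflow can be scripted as well. This sketch writes a list of URLs to a file, exactly as you would by hand in nano, and returns the wget -i command you would then run; the file name and URLs are placeholders:

```python
def write_url_list(urls, list_path="example.txt"):
    # write one download URL per line, matching the example.txt format,
    # then return the command that fetches everything in the list
    with open(list_path, "w") as f:
        f.write("\n".join(urls) + "\n")
    return f"wget -i {list_path}"
```

The returned string can be handed to execute_command() to kick off the batch download.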

To help you, let’s break down some of the other frequently used methods to get some more details.

How to Use the Wget Command to Download Files Under a Different Name

One task you may need to do from time to time is download files under different names. To do this, use the -O option.

Let's follow the same goal as before and download the latest WordPress release. We would enter the following:

wget -O

Because of this option, the downloaded file will be saved under the name you supply to -O rather than its original name.

The -O option is a key step: renaming the archive makes unzipping it in Linux efficient, and you do not have to type the long original name every time.

How to Use Wget to Save Files in Specific Directories

This is another common step most people will need from time to time. Let's say you need to place your file into a specific directory. You can do that using the -P option. For example, you would type the following into your prompt:

wget -P documents/archives/

When you do that, you get the files you have downloaded, in this case the WordPress updates, within the documents/archives folder on your system.

How Do You Download a Webpage Using Wget?


Another step that you may wish to take is to download a web page utilizing Wget, which is a very popular process overall.

In this case, you just need to change the URL. Instead of pointing to a file, it will point to a specific web page.

Your input, then, would be the wget command followed by the page's URL. From Python, that looks like:

output, error = execute_command("wget <page URL>")

In this situation, the project's directory will contain an index.html file holding the HTML content of the website you pointed it at.

This is one of the key reasons why these tools are so easy to use. Just knowing a few details allows you to change up just about anything you need.

How Do You Download a File in Wget Only When It Has Changed?


One of the benefits of using Wget is that you can tell it to re-download a file only if it has been updated since the last time you downloaded it. This saves you time and bandwidth overall.

To save disk space (not to mention avoiding slowdowns in your network resources), you can follow a few simple steps.

Wget provides file timestamping capabilities that apply in this situation. It can check a file's last timestamp and update only when there is a need to do so.

To explain that a bit better, consider the following.

The --timestamping option compares the timestamp of a local file with the one on the website's server. It allows Wget to determine whether there has been an update based solely on timestamp information.

If the local file in your directory has the same timestamp (or, in some cases, a newer one), the tool does not spend the time and resources downloading the file again.

If the local file in your directory has an older timestamp than the one on the server, Wget downloads the file again. It is very intuitive and does not take much time.

This is how the process works in practice:

  • You request a file using the --timestamping (or -N) option.
  • Wget goes to work to get the timestamp of the remote file.
  • Wget then checks out the local file’s timestamp, which is the one that is in your directory.
  • It compares the timestamp with the remote file’s timestamp based on what is on the server.
  • If your local file's timestamp is older than the one on the remote server, Wget recognizes this and decides to download the file.
  • If the local timestamp is newer than or the same as the one on the remote server, Wget recognizes that too and does not download another copy of the same file.

It’s pretty useful overall.
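The decision steps above can be sketched in plain Python. This is an illustration of the comparison Wget performs, not Wget's actual implementation; the file path and datetime arguments are stand-ins you would supply:

```python
import os
from datetime import datetime, timezone

def needs_redownload(local_path, remote_last_modified):
    # re-download only when no local copy exists or the remote
    # Last-Modified timestamp is newer than the local file's mtime
    if not os.path.exists(local_path):
        return True
    local_mtime = datetime.fromtimestamp(
        os.path.getmtime(local_path), tz=timezone.utc
    )
    return remote_last_modified > local_mtime
```

In other words, the function answers the same question Wget does: is the server's copy newer than mine?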

To implement this, Wget checks the Last-Modified header the server returns when a HEAD request is made.

Also, note that Wget can look at the Content-Length header. This provides additional insight because it compares file sizes in the same way. If there has been a significant change in the file size, Wget treats the local copy as out of date and downloads the file regardless of what the Last-Modified header returns.

Keep in mind that Last-Modified is an optional response header. If the server does not provide it, Wget will simply download the file again no matter what.

Now, what do you do if you want to use this feature? Pass the --timestamping option through Python. You can try it like this:

output, error = execute_command("wget --timestamping <url>")

Let’s say you do this. And the system recognizes that there’s no update or difference from the last file you downloaded. That’s great. You’re up to date. You will then see a message like the one below showing that:

--2024-03-18 11:55:16--

HTTP request sent, awaiting response… 304 Not Modified

File 'index.html' not modified on server. Omitting download.

How Do You Use Wget to Limit the Download Speed?

Another way to use Wget in Python is to limit the download speed. This can be an important step in situations where you want to reduce the demand on your network resources.

The good news is that it's super easy to do (of course!). If you are downloading one large file or several, limiting the rate prevents Wget from consuming all of your network bandwidth, so you can keep meeting other goals while it runs.

Here is an example. Let's say you want to limit the download speed to 500 KB/s. You would enter the following:

wget --limit-rate=500k

How Do You Use Wget to Download in the Background?

Along with that step, there may be times when you want to use Wget to download, but you do not want it to operate openly. That is, you want it to download in the background while you tackle other tasks.

This tends to be important when you have a very large file. To do this, use the -b option with Wget. When you enter the following command, the download starts and continues in the background.

wget -b

That is going to do the work for you without interrupting whatever else you are doing. What you see next is interesting as well.

A file named wget-log will appear in your working directory. It allows you to check on the download's progress and status over time.
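Because a background download only writes to wget-log, a tiny helper can surface its latest line from Python. The default log path here is the one Wget uses for -b, and the sketch assumes the file already exists:

```python
def latest_log_line(log_path="wget-log"):
    # return the last non-empty line of the log a background (-b)
    # download writes, as a rough progress indicator
    with open(log_path) as f:
        lines = [line.strip() for line in f if line.strip()]
    return lines[-1] if lines else ""
```

Calling this periodically gives you a simple progress readout without bringing the download to the foreground.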

How Do You Get Wget to Set the Number of Retry Attempts?

There are numerous reasons you may want to use this feature (and most of us have needed it at some point). For example, say you are having connectivity problems. When the connection drops in the middle of a download, it can set your progress back considerably, and it can be rather frustrating.

With this specific instruction, though, you can tell Wget what to do. You can increase the number of retry attempts so it keeps trying (up to the number you set), keeping your project on track.

To do this, use the --tries option. Here's an example:

wget --tries=50

In this example, you are instructing Wget to make up to 50 attempts before it gives up and stops trying to download. That will get you through most internet connection problems.

By default, Wget makes up to 20 attempts if you do not specify a number.

How Can You Get Wget to Complete Interrupted Downloads?

Let's say that, for some reason, the process is interrupted. When the tries run out, what happens next? If the connection is lost during the download and the retry limit you set (50 in the example above) is reached, Wget stops. You can, however, manually continue the partially downloaded file using the --continue (or -c) option:

wget --continue

How Do You Get Wget to Download or Retrieve a Whole Website?


Let’s really put Wget to work. What if you want to download the entire website? Going page by page or file by file is simply cumbersome. You do not have to do that, though. With Wget, you can download the content of an entire site in just a few steps. Once you do, you can then see that website in your local directory even when you do not have an internet connection.

There are various options you need to provide, depending on what you want to accomplish. Let's break down the ones you are likely to use in this scenario:

--mirror

This option makes the download recursive.

--convert-links

This option converts all of the links on the website for offline use.

--page-requisites

This option tells the tool to download all of the files needed to display the page. That could include, for example, images, JS, and CSS files.

--no-parent

This option tells the tool not to retrieve directories above the starting point in the hierarchy.

-P documents/websites/

This is the directory you specify within the prompt. It tells Wget where to put the copy of the website so you can use it later.

So, what do you type into the command for this? Try out this command and replace the example with a website you want to copy:

wget --mirror --convert-links --page-requisites --no-parent -P documents/websites/

Once you do that, all of the website's pages are downloaded into your local directory, under documents/websites/ in the folder you specified.
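From Python, you could assemble that same mirroring command before handing it to a runner such as the execute_command() function from earlier; the site URL in the usage comment is a placeholder:

```python
def build_mirror_command(url, dest="documents/websites/"):
    # compose the full site-mirroring invocation: recursive download,
    # offline link conversion, page requisites, no parent directories,
    # saved under the given destination directory
    return (
        "wget --mirror --convert-links --page-requisites "
        f"--no-parent -P {dest} {url}"
    )

# e.g. build_mirror_command("https://example.com")
```

Separating command construction from execution like this also makes the command easy to log or inspect before running it.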

How Can You Get Wget to Download with a Proxy?


There are many situations where you want or need to gather data from a website but find yourself unable to do so because your requests are blocked. This is not uncommon, but there is a way to work around it.

The challenge of using Wget to download an entire site like the one above is that it is likely to trigger a block. When you submit the request, the destination server sees that the action is coming from a bot and will likely block the attempt automatically.

Many websites have restrictions like this in place, or they may limit where you can access content from. For example, a site may not allow direct access to some pages, or it may limit the resources it offers. In other cases, the destination server will not accept requests from specific geographic regions. Sites also employ various other methods, including steps to prevent web scraping.

Regardless of why this happens, there is a way to work around it and that is to use a proxy server.

You can integrate a proxy server into Wget to bypass these restrictions outright. If you use a Python script to download files from a website and get blocked, you can incorporate a proxy that hides your IP address. That means the Python Wget download is far less likely to be blocked.

You will need to know what the restrictions are, such as whether there is a restriction on your location. Then you can set up a proxy to facilitate the process. The proxy works as an intermediate server that sits between your computer and the internet. All of your requests go through this server instead of going directly to the destination server. This protects your IP address and lets you get around nearly all of the restrictions in place.
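One way to route Wget through a proxy is via the standard http_proxy and https_proxy environment variables, which Wget honors. This sketch builds such an environment for a spawned wget process; the proxy address in the test is a placeholder you would replace with your own proxy endpoint:

```python
import os

def proxy_environment(proxy_url):
    # copy the current environment and add the proxy variables that
    # Wget reads; pass the result as env= when spawning the wget process
    env = dict(os.environ)
    env["http_proxy"] = proxy_url
    env["https_proxy"] = proxy_url
    return env
```

You would then pass this dictionary to subprocess (for example, subprocess.run(["wget", url], env=proxy_environment(...))) so only that one download goes through the proxy, leaving the rest of your session untouched.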


Should You Be Using Python Wget Connection?


There are many additional features and steps you can use to get the most out of Wget in Python, including using Wget in PyCharm and other tools. Use the Python Wget examples above to try out any of the actions you wish to take. When you do, know that we can help you get around the restrictions you may face.

To do so, check out how Rayobyte works. As a tool that gives you access to proxies that get around just about any restriction, it is one of the best resources available for getting the job done without restriction.

You can tap into Wget with Python now to learn more. We also encourage you to check out how Rayobyte works and how it can provide you with the support you need. It may be easier than you realize to get the results you want. Check out how to make a web crawler in Python next, or just reach out to our team to learn more about what we can do for you.

The information contained within this article, including information posted by official staff, guest-submitted material, message board postings, or other third-party material is presented solely for the purposes of education and furtherance of the knowledge of the reader. All trademarks used in this publication are hereby acknowledged as the property of their respective owners.
