Integrating a Playwright Proxy (and How To Use It)

What is the Playwright proxy? Playwright is essentially an extended take on Puppeteer: an end-to-end API that lets you automate both headless and headed browsers, including Chromium, Firefox, and WebKit (the engine behind Safari). It enables you to automate web apps, testing workflows, and web scraping.

You can use Playwright in JavaScript, Python, C#, and Java. Several of its features make it resilient, versatile, and highly compatible with proxies. So, how do you set up your Playwright proxy? Let's walk through the setup with Node.js and Python, some advantages of using Playwright, and how to leverage it to make the most of your proxies.

So, whether you're wondering, "What is Playwright in testing?", "What is the Playwright system used for?", or need other information on how best to use this tool, this guide has the answers you need. We'll cover how to handle Playwright proxy authentication and walk through a Playwright proxy example to show you how it works. And if you just need specific information concerning your Playwright proxy, you can use the table of contents to jump to the section that interests you most.

 


What Is Playwright Used For?

An end-to-end test automation tool, Playwright was released by Microsoft in 2020 to make testing complex projects more efficient. Playwright's speed makes it especially useful for testing larger projects, but it can also be used for web scraping with proxies.

Flexible, resilient, and powerful, Playwright has multiple features that make it well-suited for testing automation. Its capabilities can bring a number of benefits to your web scraping project. So, here’s a breakdown of Playwright’s features and benefits.

Flexibility

Playwright runs on any platform and any browser all from a single API. It’s highly compatible with whatever environment it interacts with:

  • Browsers. Whether it’s Google Chrome, WebKit, Mozilla Firefox, or Edge, Playwright works with any headed or headless browser.
  • Platforms. Playwright can test your software on Linux, Windows, or macOS and can work locally or on CI.
  • Languages. Playwright can be used with JavaScript, TypeScript, Python, .NET, and Java. So, it can be tailored to whatever language your software is in.
  • Mobility. Playwright can natively emulate Google Chrome for Android or Mobile Safari, and its rendering engine works both on desktops and in the cloud.

Playwright’s flexibility allows you to run your software testing or web scraping projects under various conditions. Whatever your testing environment looks like, Playwright provides an end-to-end test automation solution for any configuration — and Playwright with proxy usage is an excellent combination for web scraping.
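
For example, here's a minimal Node.js sketch of that device emulation. The 'Pixel 5' descriptor ships with the Playwright library, and the URL is a placeholder:

const { chromium, devices } = require('playwright');

(async () => {
  // Launch Chromium and emulate an Android phone using a built-in descriptor
  const browser = await chromium.launch();
  const context = await browser.newContext({ ...devices['Pixel 5'] });
  const page = await context.newPage();
  await page.goto('https://example.com'); // placeholder URL
  console.log(await page.evaluate(() => navigator.userAgent)); // mobile UA string
  await browser.close();
})();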

Resilience

With the high demands placed on some software applications today, their ability to run under challenging conditions is vital. Playwright has multiple features that support resilient testing, such as:

  • Auto-wait. Flaky tests are often caused by artificial timeouts, and Playwright is designed to keep these to a minimum. It monitors a wide range of introspection events and waits to perform its actions until the necessary elements are actionable. That reduces timeouts and results in more accurate testing.
  • Web-first assertions. Its assertions are designed specifically for the dynamic web, and checks are automatically retried until the necessary conditions are met. That ensures you’ll always get a result for every condition you’re checking.
  • Tracing. Playwright has multiple features that let you identify where your software malfunctions, including configuring a test retry strategy, capturing an execution trace, and recording videos and screenshots of the error as it occurs. These help eliminate flaky tests.

Playwright's transparency and auto-wait functions not only help your tests avoid the most common pitfalls but also help you identify faulty tests when they do occur, keeping them to a minimum. That way, you can build a product that can withstand the most extreme software conditions.
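
As a small illustration, here's a hedged sketch of a web-first assertion using the Playwright Test runner (run with npx playwright test; the URL and selector are placeholders):

const { test, expect } = require('@playwright/test');

test('heading becomes visible', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  // Web-first assertion: Playwright retries the check until it passes
  // or the assertion timeout expires, with no manual sleep needed
  await expect(page.locator('h1')).toBeVisible();
});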

Context

The content a web browser runs may come from different origins and run in different processes. Playwright aligns with the architecture of modern browsers and runs tests out-of-process. This removes many of the in-process test runner limitations that other automation tools have. This structure gives better context and shows up in several features, like:

  • Multiplicity. Playwright’s test scenarios can range across multiple tabs, have multiple origins, and function with multiple users. Testers can design scenarios with different contexts all with different users, and they can be run on your own server, all in a single test.
  • Trusted events. From hovering elements and interacting with dynamic controls to producing trusted events, the browser input pipeline that Playwright uses takes the exact same form as that of an actual user.
  • Browser contexts. With Playwright, you can create a unique browser context for every test you run. It’s the equivalent of a whole new profile. It also lets you fully isolate your tests with no overhead — setting up a new one only takes milliseconds.

Another advantage of Playwright's contextual features is its single log-in requirement. You can save a context's authentication state and reuse it in every test. That is, you can skip the repetitive log-in operations required for each test without sacrificing isolation.
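
Here's a minimal Node.js sketch of that pattern, assuming a hypothetical login page (the URLs, the login steps, and the auth.json filename are placeholders):

const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();

  // Log in once and save the authentication state to disk
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('https://example.com/login'); // placeholder login URL
  // ...perform the login steps here, then:
  await context.storageState({ path: 'auth.json' });

  // Later contexts reuse the saved state instead of logging in again
  const authedContext = await browser.newContext({ storageState: 'auth.json' });
  const authedPage = await authedContext.newPage();
  await authedPage.goto('https://example.com/account'); // placeholder URL

  await browser.close();
})();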

Tooling

Playwright has several different tools that allow you to test your products more efficiently:

  • Codegen, which lets you generate tests by recording your actions — and it works for all supported languages.
  • Playwright Inspector, which lets you step through test execution, view click points, inspect pages, and explore execution logs.
  • Trace Viewer, which lets you take in all the data surrounding your test run. Capabilities include live DOM snapshots, a test execution screencast, an action explorer, and more.

These tools offer the functionality that testers need to ensure their software can perform as intended, but they also lend themselves well to web scraping.

Playwright, Selenium, and Puppeteer: How They Compare

Playwright can be a great tool both for test automation and web scraping. However, it’s not the only one out there. It’s also not always the best tool for the job.

Before you get started on your web scraping project, you need to know the advantages and disadvantages of each tool so you can choose the one that aligns with your environment. Two similar products are Puppeteer and Selenium. While all three have plenty in common and can be used for web scraping, there are enough differences between them that choosing the right one can have a significant impact on your project.

Puppeteer

For example, Puppeteer performs its work the fastest, but its scope is more limited than the other two. Puppeteer only works with Google Chrome and can only be scripted in JavaScript, so if your environment requires other languages or browsers, you'll need to use a different tool.

Selenium

We've written more about configuring a proxy with Selenium here, but this tool also has its pros and cons. Selenium can work with most programming languages (JavaScript, Java, C#, Perl, Kotlin, Dart, Ruby, and R) and most browsers (Chrome, Firefox, Edge, Safari, Opera, and more), but test results have shown that it's slower than Playwright and Puppeteer. It's been around longer, making it well-established, and its large, active community has many resources to offer. However, it's usually better for smaller, less complex scraping projects.

Playwright

Landing somewhere in between the two, Playwright attempts to offer the strengths of both while mitigating their weaknesses.

We've already mentioned the specific languages and browsers it supports, so we won't belabor the point, but its flexibility and speed enable it to handle more advanced scraping work, even if its community is smaller and offers slightly fewer resources. Its asynchronous design and resilience features allow for faster tests and proxy work and remove some of the hurdles that many web scraping projects encounter. Its developer experience is also usually considered more user-friendly.

Ultimately, there’s no one-size-fits-all tool for testing or proxy use. So, it’s best to consider your needs. This table recaps the differences between the three:

|                       | Puppeteer        | Playwright                                 | Selenium                                            |
|-----------------------|------------------|--------------------------------------------|-----------------------------------------------------|
| Speed                 | Fast             | Fast                                       | Medium                                              |
| Programming languages | JavaScript       | JavaScript, TypeScript, .NET, Java, Python | JavaScript, Java, C#, Kotlin, Python, Ruby, R, Dart |
| Browser support       | Chromium         | Chromium, Firefox, WebKit                  | Chrome, Firefox, IE, Edge, Opera, Safari            |
| Sponsors              | Google           | Microsoft                                  | Community of developers                             |
| Documentation         | Excellent        | Excellent                                  | Good                                                |
| Community             | Small but active | Small but active                           | Large and active                                    |

Consider these factors when deciding which tool fits your application best. If you need high speed and will only be using your proxies to scrape Chromium browsers with JavaScript, Puppeteer could work well. If you have a simpler project and plan to scrape across the widest range of browsers and languages, Selenium could work fine. However, if you're working with the primary headless browsers and need speed while scraping multiple pages at once, Playwright may be the better choice.

Configuring Your Playwright Proxy Settings

Once you've decided that Playwright is the tool you need, you can begin to configure your Playwright proxy. All you need to start is a text editor and a Node.js or Python environment. We'll show you how to set up your Playwright proxy with Node.js first and with Python afterwards. Then, we'll show you how to scrape different elements.

Node.js

If you’re using Node.js, you can create a new project and install the Playwright library with these two commands:

npm init -y

npm install playwright

If you were scraping a dynamic webpage for financial analysis on an e-commerce site such as Amazon, a basic script might look like:

const playwright = require('playwright');

(async () => {
  for (const browserType of ['chromium', 'firefox', 'webkit']) {
    const browser = await playwright[browserType].launch();
    const context = await browser.newContext();
    const page = await context.newPage();
    await page.goto('https://amazon.com');
    await page.waitForTimeout(1000);
    await browser.close();
  }
})();

There are several observations to make about this code. The first line imports the Playwright library, and the loop then launches each browser type in turn: Chromium, Firefox, and WebKit. (If you use the Playwright test runner, the playwright.config file lets you configure which browsers tests run on; this script simply iterates over all three.) A new page opens after the context is established, and the page.goto() function navigates to the desired page, in this case, Amazon. After a one-second wait, the browser window closes.
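
For reference, here's a hedged sketch of what that browser selection looks like in a playwright.config.js file for the Playwright test runner (the project names and device descriptors are the library's built-in ones):

// playwright.config.js: configures which browsers the test runner uses
const { defineConfig, devices } = require('@playwright/test');

module.exports = defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});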

If you need to create multiple browser contexts, you can create multiple context objects with multiple pages within each object. The code for this would look like:

const context = await browser.newContext();
const page1 = await context.newPage();
const page2 = await context.newPage();

Python

If you're working in Python, first install the Playwright package with pip, then install the necessary browsers with Playwright's install command:

python -m pip install playwright

playwright install

After that, an example script would be:

from playwright.async_api import async_playwright
import asyncio

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        page = await browser.new_page()
        await page.goto('https://amazon.com')
        await page.wait_for_timeout(1000)
        await browser.close()

if __name__ == '__main__':
    asyncio.run(main())

Although this is similar to the Node.js code, several important differences exist. First, remember that Playwright supports both synchronous and asynchronous functions — which is part of its superior speed and resilience. This Python code uses the asyncio library for its asynchronous framework.

The browser instance is also a headed Chromium, as indicated by headless=False. You can switch to headless mode by changing this to headless=True. And if you need the browser context within your code, the page.context property returns it.

Server IPs, Usernames, and Passwords

After you install the Playwright framework, you'll need to add proxy settings to your code. If you use the Playwright test runner, these can live in the playwright.config.ts file; in a standalone script like the examples here, they're passed as launch options. Either way, how you fill them in depends on the type of proxies you use.

First, you’ll need to edit the server field with the IP address and port that corresponds to the proxy you’ll be using with Playwright. The IP address and port should be separated by a colon.

Next, enter the username and password that the proxy requires. If your proxy is free, you won’t need a username or password, but private proxies will have them. The server field will also vary depending on whether you use residential or data center proxies. Some data center proxies are dedicated and some are not, and this also impacts the content of the server field.
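
Before the full example, here's a hedged sketch of the different forms the server field can take (the addresses are placeholders; note that Playwright also accepts socks5:// servers, though credential support over SOCKS5 is limited):

// HTTP proxy: the scheme defaults to http if omitted
const httpProxy = {
  server: 'http://123.45.67.89:54321',
  username: 'USERNAME',
  password: 'PASSWORD'
};

// SOCKS5 proxy: no username/password fields here
const socksProxy = {
  server: 'socks5://123.45.67.89:1080'
};

You'd pass whichever object matches your proxy as the proxy value in launch(), as the next script shows.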

An example script would be:

const playwright = require('playwright');

(async () => {
  for (const browserType of ['chromium', 'firefox', 'webkit']) {
    const browser = await playwright[browserType].launch({
      headless: false,
      proxy: {
        server: '123.45.67.89:54321',
        username: 'USERNAME',
        password: 'PASSWORD'
      },
    });
    const context = await browser.newContext();
    const page = await context.newPage();
    await page.goto(WEBADDRESS); // replace WEBADDRESS with your target URL
    await page.screenshot({ path: `${browserType}.png` });
    await browser.close();
  }
})();

The Playwright library offers extensive documentation, so there are plenty of sources to consult concerning each browser instance's devices, errors, requests, and selectors. Consult these for information on how to configure Playwright for each browser type if you need to change your code further.

Using Your Playwright Proxy

With your Playwright proxy configured, you can then begin to use it for your scraping project. We’ll continue with the Playwright proxy example of scraping an e-commerce site to show some applications.

Locating Elements

Before you can scrape a web page, you must first locate its elements, which involves XPath or CSS selectors. On a listing page like Amazon's, the repeated products typically sit inside "div" container elements. You select those containers, then run a loop over the "div" elements to reach the individual items you want to scrape.

Once you've identified the elements, Playwright's Node.js API has several functions for operating on selectors — though Python's equivalents work slightly differently. Some of the most common are listed below, with a short sketch after the list:

  • querySelector(selector), which returns the first matching element (a standard DOM method, usable inside evaluation callbacks).
  • querySelectorAll(selector), which returns every matching element.
  • $eval(selector, function), which selects the first matching element, passes it to the function, and returns the function's result.
  • $$eval(selector, function), which selects every matching element, passes them all to the function, and returns the result.
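
Here's that sketch, a hedged example assuming a page object from the setup scripts above (the class names mirror the Amazon selectors used later in this guide and may change at any time):

// Return the text of the first matching price element
const firstPrice = await page.$eval('span.a-price', el => el.innerText);

// Return the text of every matching title element as an array
const titles = await page.$$eval('span.a-size-base-plus', els =>
  els.map(el => el.innerText)
);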

For the Amazon example, some elements may be images of the most popular products, while others could be text like pricing data or ratings. The element's type dictates how you scrape it.

Scraping Text

Once you've located the element you're looking for, you can scrape the text or image accordingly. However, the way you scrape them varies depending on whether you're using Node.js or Python. The Amazon example shows the differences.

Node.js

If you need to extract data from the text of a web page’s products, you could use the $eval or $$eval functions to do so. An example script would look like:

const products = await page.$$eval('.a-spacing-base', all_products => {
  // run a loop here
});

To extract the elements in a loop, an example script is:

all_products.forEach(product => {
  const title = product.querySelector('span.a-size-base-plus').innerText;
});

You can then use the “innerText” attribute to extract your text data from each data point, with the final Node.js code sample being:

const playwright = require('playwright');

(async () => {
  for (const browserType of ['chromium', 'firefox', 'webkit']) {
    const launchOptions = {
      headless: false,
      proxy: {
        server: 'IPADDRESS',
        username: 'USERNAME',
        password: 'PASSWORD'
      }
    };

    const browser = await playwright[browserType].launch(launchOptions);
    const context = await browser.newContext();
    const page = await context.newPage();
    await page.goto('https://www.amazon.com/b?node=17938598011');

    const products = await page.$$eval('.a-spacing-base', all_products => {
      const data = [];
      all_products.forEach(product => {
        const title = product.querySelector('span.a-size-base-plus').innerText;
        const price = product.querySelector('span.a-price').innerText;
        const rating = product.querySelector('span.a-icon-alt').innerText;
        data.push({ title, price, rating });
      });
      return data;
    });

    console.log(products);
    await browser.close();
  }
})();

Python

Instead of Node.js's $eval and $$eval, Python uses query_selector and query_selector_all. They serve a similar purpose, returning a single element handle or a list of element handles, respectively. An example script would be:

import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(
            proxy={
                'server': 'IPADDRESS',
                'username': 'USERNAME',
                'password': 'PASSWORD'
            },
            headless=False
        )
        page = await browser.new_page()
        await page.goto('https://www.amazon.com/b?node=17938598011')
        await page.wait_for_timeout(5000)

        all_products = await page.query_selector_all('.a-spacing-base')
        data = []
        for product in all_products:
            result = dict()
            title_el = await product.query_selector('span.a-size-base-plus')
            result['title'] = await title_el.inner_text()
            price_el = await product.query_selector('span.a-price')
            result['price'] = await price_el.inner_text()
            rating_el = await product.query_selector('span.a-icon-alt')
            result['rating'] = await rating_el.inner_text()
            data.append(result)

        print(data)
        await browser.close()

if __name__ == '__main__':
    asyncio.run(main())

The output for both the Node.js and Python code should be the same.
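
In both cases, the result is a list of records, one per product. Its shape looks something like this (the values are illustrative placeholders, not real scraped data):

[
  { title: 'PRODUCT TITLE', price: '$19.99', rating: '4.5 out of 5 stars' },
  { title: 'ANOTHER PRODUCT', price: '$24.99', rating: '4.2 out of 5 stars' }
]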

Scraping Images

Scraping images also entails using the Playwright wrapper, but whether you're using Node.js or Python, it differs from scraping text in a few ways.

Node.js

Multiple methods exist for extracting images with the Playwright wrapper, but two built-in Node.js modules work particularly well: https and fs. They let you make web requests and write files, so you can download the images and store them in your current directory. A sample script would be:

const playwright = require('playwright');
const https = require('https');
const fs = require('fs');

(async () => {
  const launchOptions = {
    headless: false,
    proxy: {
      server: 'IPADDRESS',
      username: 'USERNAME',
      password: 'PASSWORD'
    }
  };

  const browser = await playwright['chromium'].launch(launchOptions);
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('webpage'); // replace with your target URL

  // Collect the image URLs inside the browser; fs and https only exist in
  // Node, so the downloading happens outside the $$eval callback
  const image_links = await page.$$eval('img', all_images =>
    all_images.map(image => image.src)
  );

  image_links.forEach((url, index) => {
    const path = `image_${index}.svg`;
    const file = fs.createWriteStream(path);
    https.get(url, function (response) {
      response.pipe(file);
    });
  });

  console.log(image_links);
  await browser.close();
})();

The $$eval function runs its callback inside the browser, so it's used here only to collect each image's "src" URL. Back in Node, the "forEach" function loops over those URLs; within the loop, the index lets you construct the image's file name, the "createWriteStream" method from the fs module creates a file object, and the https module sends a "GET" request that downloads the image.

When the code is executed, the script loops through each image URL and downloads the file into your directory. The download loop on its own looks like:

image_links.forEach((url, index) => {
  const path = `image_${index}.svg`;
  const file = fs.createWriteStream(path);
  https.get(url, function (response) {
    response.pipe(file);
  });
});

Python 

Python makes scraping images simpler than Node.js, with a little help from the requests library (installable via pip). You can use the Playwright wrapper just as you would with Node.js, and the "query_selector_all" function works for image elements just as it does for text.

Once it extracts all the image elements, the script sends a "GET" request to each source URL and stores the responses in the current directory. A sample script is:

from playwright.async_api import async_playwright
import asyncio
import requests

async def main():
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(
            proxy={
                'server': 'IPADDRESS',
                'username': 'USERNAME',
                'password': 'PASSWORD'
            },
            headless=False
        )
        page = await browser.new_page()
        await page.goto('webpage')  # replace with your target URL
        await page.wait_for_timeout(5000)

        all_images = await page.query_selector_all('img')
        images = []
        for i, img in enumerate(all_images):
            image_url = await img.get_attribute('src')
            # Note: requests runs outside the browser, so this download does
            # not go through the Playwright proxy unless you also configure
            # proxies for requests
            content = requests.get(image_url).content
            with open('image_{}.svg'.format(i), 'wb') as f:
                f.write(content)
            images.append(image_url)

        print(images)
        await browser.close()

if __name__ == '__main__':
    asyncio.run(main())

 


Final Thoughts

Playwright is a powerful tool for both automated testing and web scraping projects — especially when used with ethically sourced proxies. Once you know how to configure your Playwright proxy settings, you can use your proxies for testing, locating elements, scraping text and images, and more. You can also intercept HTTP requests, which, despite being more advanced than basic web scraping, can be useful for debugging or performance optimization. The key is to have quality proxies.
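
As one last hedged sketch, request interception in Node.js goes through page.route(), assuming a page object like the ones in the scripts above (the pattern and URL are placeholders):

// Abort image requests before navigating, a common way to speed up scraping
await page.route('**/*.{png,jpg,jpeg,svg}', route => route.abort());

// Every other request proceeds as normal
await page.goto('https://example.com'); // placeholder URL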

At Rayobyte, we offer residential and data center proxies that can empower your web scraping projects. They can help you navigate past CAPTCHAs and avoid being banned, making it easy to gain the data your business needs to thrive. We know that proxies and web scraping can be complicated subjects, which is why we offer quality proxies and even better support. Contact us today and let proxies enhance your Playwright scraping projects.

The information contained within this article, including information posted by official staff, guest-submitted material, message board postings, or other third-party material is presented solely for the purposes of education and furtherance of the knowledge of the reader. All trademarks used in this publication are hereby acknowledged as the property of their respective owners.
