How to Use a Proxy in Selenium v4 – Python (Ultimate Guide)
Trying to use a proxy in Selenium v4? You’re in the right place!
I’ve spent hours figuring out how to use a proxy in Selenium with a Chrome driver for one of my web scraping projects.
Google search results were full of outdated tutorials with code that doesn’t work. So, I went into the depths of the latest Selenium documentation, and I finally figured it out. Now, I present to you a Selenium proxy tutorial with Python code that actually works.
Notes
- I am using the Chrome driver in this tutorial.
- If you’re not sure what a proxy is, check out my What is a Web Scraping Proxy? guide.
Selenium is a browser automation tool that is used to build bots, test web applications and scrape data from the internet. Its Python library is a popular choice for web automation and web scraping.

Note: If you are using the Firefox WebDriver, check out my How to Use a Proxy in Selenium v4 with Firefox & Python tutorial.
How to Use a Proxy in Selenium
For experienced developers, here is the quick, short answer for how to use a proxy in Selenium with the Chrome driver:
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
PROXY = "http://207.2.120.19:80"
options = Options()
options.add_argument(f'--proxy-server={PROXY}')
driver = webdriver.Chrome(options=options)
driver.get("http://httpbin.org/ip")
print(driver.find_element(By.TAG_NAME, "body").text)
time.sleep(5)
driver.close()
The output should be as follows, showing the proxy's IP address:
{
  "origin": "207.2.120.19"
}
That’s it, you are now using a proxy with Selenium. If the preceding code is unclear, or you can’t get it to work, carefully read the full detailed guide below.
Step 01 – Getting Selenium to Work
Previously, you had to download a chromedriver web driver for Selenium to work properly. With Selenium version 4, you no longer need to install a web driver. However, if Selenium is not working for you, you may still need to download chromedriver. To do so, follow these steps:
- Go to Chrome’s Downloads Page
- Select your Chrome version.
- If you are using Chrome version 115 or newer, please consult the Chrome for Testing availability dashboard.
- Select the chromedriver that’s suitable for your OS from the table, download it, then add it to your PATH.
Make sure to run the chromedriver binary and your Chrome or Chromium browser at least once before proceeding. This step may not be necessary for some users:
$ chromedriver
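Alternatively, if adding chromedriver to your PATH is inconvenient, Selenium 4 lets you point directly at the downloaded binary through its Service class. Here is a minimal sketch, assuming the driver sits at /path/to/chromedriver (a placeholder path you should adjust for your system):
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Placeholder path -- replace with wherever you saved the chromedriver binary.
service = Service("/path/to/chromedriver")
driver = webdriver.Chrome(service=service)
driver.get("https://www.python.org")
print(driver.title)
driver.quit()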
Now, create a new project folder called selenium-proxy:
mkdir selenium-proxy
Create a new Python virtual environment inside it:
python -m venv env
Activate the virtual environment:
source env/bin/activate
Install selenium:
pip install selenium
Next, add a file called scraper.py and add the following code to it:
# scraper.py
import time
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://www.python.org")
print(driver.title)
print(driver.current_url)
time.sleep(5)
driver.close()
The preceding script opens the Python website using a Chrome WebDriver, prints the title and URL of the page, pauses for 5 seconds, and then closes the browser.
Here is a more detailed overview:
- Import Libraries:
  - time: Used for adding delays in the script.
  - webdriver from the selenium library: Provides a common interface for interacting with web browsers. In this case, it’s the Chrome WebDriver.
- Instantiate WebDriver:
  - driver = webdriver.Chrome(): Creates an instance of the Chrome WebDriver, allowing the script to control and automate the Google Chrome browser.
- Navigate to a URL:
  - driver.get("https://www.python.org"): Instructs the WebDriver to open the specified URL, in this case, the official Python website.
- Print Page Title and URL:
  - print(driver.title): Outputs the title of the current web page opened by the WebDriver (in this case, the Python website).
  - print(driver.current_url): Outputs the URL of the current web page.
- Pause Execution (Sleep):
  - time.sleep(5): Introduces a 5-second delay in the script’s execution. This lets you see the website loaded in Chrome instead of the browser closing right away.
- Close the Browser:
  - driver.close(): Closes the browser window that the WebDriver is controlling.
Now, run scraper.py:
python scraper.py
You should see the following output:
Welcome to Python.org
https://www.python.org/
This means Selenium and chromedriver are working properly.
Step 02 – Using a Proxy with Selenium
Now, let’s move on to how to use a proxy in your Python Selenium code.
Inside your selenium-proxy project folder, create a new file called proxy.py, and add the following code to it:
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
PROXY = "http://207.2.120.19:80"
options = Options()
options.add_argument(f'--proxy-server={PROXY}')
driver = webdriver.Chrome(options=options)
driver.get("http://httpbin.org/ip")
print(driver.find_element(By.TAG_NAME, "body").text)
time.sleep(5)
driver.close()
Let’s break down the code step by step:
1. Import Libraries:
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
Here are the imports:
- time: Provides time-related functions for introducing delays.
- webdriver: From the Selenium library, used for automating web browser interactions.
- Options: A class from selenium.webdriver.chrome.options to configure Chrome WebDriver options.
- By: A class from selenium.webdriver.common.by to specify the mechanism used to locate elements on a webpage.
2. Define Proxy Information:
PROXY = "http://207.2.120.19:80"
Sets a proxy server address (http://207.2.120.19:80). This is used to route the web traffic through a specified proxy server.
3. Configure Chrome WebDriver with Proxy:
options = Options()
options.add_argument(f'--proxy-server={PROXY}')
- Creates an instance of the Options class to configure the Chrome WebDriver.
- Adds a command-line argument to specify the proxy server for the Chrome WebDriver.
4. Initialize WebDriver:
driver = webdriver.Chrome(options=options)
Creates a Chrome WebDriver instance with the configured options.
5. Navigate to a URL:
driver.get("http://httpbin.org/ip")
Directs the WebDriver to open the specified URL (http://httpbin.org/ip).
6. Find and Print Page Content:
print(driver.find_element(By.TAG_NAME, "body").text)
Finds the <body> element of the webpage and prints its text content. By.TAG_NAME is used to locate the element by its HTML tag name.
7. Pause Execution (Sleep):
time.sleep(5)
Introduces a 5-second delay so that the Chrome browser doesn’t close right away.
8. Close the Browser:
driver.close()
Closes the browser window controlled by the WebDriver.
Output
The output should have your proxy’s IP address instead of your local machine’s IP address:
{
"origin": "207.2.120.19"
}
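If you want to confirm that the proxy is really being applied, a quick sanity check is to compare the IP reported with and without the proxy options. The sketch below is not part of the original script and assumes the example proxy above is still reachable:
import json
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

PROXY = "http://207.2.120.19:80"

def get_ip(options):
    # Open httpbin.org/ip in a fresh browser session and return the reported IP.
    driver = webdriver.Chrome(options=options)
    driver.get("http://httpbin.org/ip")
    body = driver.find_element(By.TAG_NAME, "body").text
    driver.quit()
    return json.loads(body)["origin"]

proxy_options = Options()
proxy_options.add_argument(f"--proxy-server={PROXY}")

print("Without proxy:", get_ip(Options()))
print("With proxy:   ", get_ip(proxy_options))
If the two lines print different IPs, your traffic is going through the proxy.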
Step 03 – Selenium Proxy Authentication with Python in Chrome

If you want to use a proxy that requires authentication with a username and password, the simplest way to achieve this is by using the selenium-wire extension.
Selenium Wire extends Selenium’s Python bindings to give you access to the underlying requests made by the browser. You code in the same way as you do with Selenium, but you get extra APIs for inspecting requests and responses. Selenium Wire also makes it really easy to use both unauthenticated and authenticated proxies in Selenium.
To install Selenium Wire:
pip install selenium-wire
Here is how to use an open unauthenticated proxy with Selenium Wire:
import time
from seleniumwire import webdriver
PROXY = "45.77.194.89:38869"
options = {
    'proxy': {
        'http': f'http://{PROXY}',
        'https': f'https://{PROXY}',
        'no_proxy': 'localhost,127.0.0.1'
    }
}
driver = webdriver.Chrome(seleniumwire_options=options)
driver.get('http://httpbin.org/ip')
time.sleep(5)
driver.close()
As you can see, you can just specify the IP address and port, pass the options to the seleniumwire_options parameter, and that’s it!
To use proxy authentication with Selenium Wire, you can just pass the username and password of your proxy server in the options dictionary like so:
import time
from seleniumwire import webdriver
options = {
    'proxy': {
        'http': 'http://USERNAME:PASSWORD@PROXY_SERVER:PORT',
        'https': 'https://USERNAME:PASSWORD@PROXY_SERVER:PORT',
        'no_proxy': 'localhost,127.0.0.1'
    }
}
driver = webdriver.Chrome(seleniumwire_options=options)
driver.get('http://httpbin.org/ip')
time.sleep(5)
driver.close()
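Hard-coding credentials in a script is easy to leak. One option is to read them from environment variables instead; the following sketch assumes you have exported PROXY_USER, PROXY_PASS, PROXY_HOST, and PROXY_PORT before running it (these variable names are just placeholders):
import os
import time
from seleniumwire import webdriver

# Placeholder environment variables -- set these in your shell before running.
USERNAME = os.environ["PROXY_USER"]
PASSWORD = os.environ["PROXY_PASS"]
HOST = os.environ["PROXY_HOST"]
PORT = os.environ["PROXY_PORT"]

options = {
    'proxy': {
        'http': f'http://{USERNAME}:{PASSWORD}@{HOST}:{PORT}',
        'https': f'https://{USERNAME}:{PASSWORD}@{HOST}:{PORT}',
        'no_proxy': 'localhost,127.0.0.1'
    }
}

driver = webdriver.Chrome(seleniumwire_options=options)
driver.get('http://httpbin.org/ip')
time.sleep(5)
driver.close()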
If your proxy uses the Bearer scheme, you can pass your authorization token in a Proxy-Authorization header using the custom_authorization option:
options = {
    'proxy': {
        'https': 'https://192.168.10.100:8888',  # No username or password used
        'custom_authorization': 'Bearer mytoken123'  # Custom Proxy-Authorization header value
    }
}
Conclusion
Congrats, you now know how to use a proxy in Python Selenium v4.0 with a Chrome web driver. You also learned how to use proxy authentication in Selenium for proxy servers protected with a username and password.
FAQs
Why use a proxy with Selenium?
Using a proxy with Selenium offers several benefits:
- IP Anonymity: Proxies allow you to conceal your IP address, providing anonymity during web scraping or automated testing.
- Geographic Targeting: Proxies enable you to access websites as if you were located in a different geographic region, useful for testing or gathering region-specific data.
- Bypass Restrictions: Proxies help bypass IP-based restrictions, allowing access to websites that may block requests from specific IPs.
- Load Management: Distributing requests through different IP addresses using proxies can help manage server loads and reduce the risk of being flagged for excessive traffic.
- Content Verification: Proxies aid in verifying content availability and consistency across different regions or networks, ensuring a more comprehensive testing approach.
Can using a proxy with Selenium help prevent IP blocking during web scraping?
Yes, using a proxy with Selenium can help prevent IP blocking during web scraping.
By rotating IP addresses through proxies, you distribute requests, making it harder for websites to detect and block automated traffic.
This enhances the resilience of your web scraping activities and reduces the risk of being flagged as a bot.
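For illustration, here is a rough sketch of what rotation could look like with plain Selenium: start each browser session with a proxy picked at random from your own pool. The addresses below are placeholders, not working proxies:
import random
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Placeholder proxies -- replace with your own pool of working proxies.
PROXIES = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:3128",
]

def new_driver():
    # Start a fresh Chrome session routed through a randomly chosen proxy.
    options = Options()
    options.add_argument(f"--proxy-server={random.choice(PROXIES)}")
    return webdriver.Chrome(options=options)

for url in ["http://httpbin.org/ip", "http://httpbin.org/ip"]:
    driver = new_driver()
    driver.get(url)
    print(driver.find_element(By.TAG_NAME, "body").text)
    driver.quit()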
Are there free proxies available for use with Selenium, or do I need to purchase them?
Both free and paid proxies are available. Free proxies may have limitations in terms of reliability, speed, and security.
For more demanding tasks or projects, it’s often advisable to consider paid proxy services that offer better performance, dedicated support, and enhanced features.