Making HTTP Requests in Python: Requests and urllib3 Explained

Feb 3, 2024 · 2 min read

When writing Python code that interacts with web APIs or crawls websites, you'll likely need to make HTTP requests to fetch or send data. The two most popular libraries for making HTTP requests in Python are requests and urllib3.

Requests - Simple and Pythonic

The requests library provides a simple, Pythonic way to make HTTP calls. Here's a quick example to fetch a web page:

import requests

response = requests.get('https://www.example.com')
print(response.text)

requests handles many low-level details for you: cookies, redirects, connection pooling (via sessions), and response decoding. This makes it very convenient for most HTTP needs.
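The pooling and cookie handling mentioned above come through requests.Session. A minimal sketch (the User-Agent value is just an illustration):

```python
import requests

# A Session reuses the underlying TCP connection (connection pooling)
# and persists cookies and headers across requests.
with requests.Session() as session:
    session.headers.update({'User-Agent': 'my-app/1.0'})  # sent on every request
    first = session.get('https://www.example.com')
    second = session.get('https://www.example.com')  # reuses the pooled connection
    print(first.status_code, second.status_code)
```

If you make more than a couple of requests to the same host, a Session is usually worth it for the connection reuse alone.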

urllib3 - Lower Level Access

The urllib3 library is a lower-level tool that the requests library itself builds upon. It handles the nitty-gritty details of the HTTP protocol, like managing connections, but doesn't provide the same convenience methods.

Here's how you might fetch a web page with urllib3:

import urllib3

http = urllib3.PoolManager()
response = http.request('GET', 'https://www.example.com')
print(response.data.decode('utf-8'))  # .data is raw bytes, so decode it

The advantage of urllib3 is that it exposes the connection machinery directly, so you can tune things like pool sizes, retry policies, and timeouts yourself.
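For example, a PoolManager accepts retry and timeout configuration directly. A quick sketch (the retry counts and timeout values here are just illustrative):

```python
import urllib3
from urllib3.util import Retry, Timeout

# Configure the pool itself: retry transient failures with backoff,
# and cap how long to wait for a connection and a response.
http = urllib3.PoolManager(
    retries=Retry(total=3, backoff_factor=0.5),
    timeout=Timeout(connect=2.0, read=5.0),
)

response = http.request('GET', 'https://www.example.com')
print(response.status)                    # integer status code
print(response.headers['Content-Type'])   # response headers are available too
```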

When to Use Each

For most purposes, I'd recommend requests for its simplicity. But if you need finer-grained control over connections, retries, and timeouts, or you're tuning for performance, urllib3 can be useful.

The two libraries also work together: requests always delegates the low-level work to urllib3 under the hood, and you can reach down into urllib3's machinery from requests when you need to.
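One common way to reach down is mounting an HTTPAdapter that carries a urllib3 Retry policy, so requests keeps its friendly API while urllib3 handles retries underneath. A sketch (the specific retry counts and status codes are just an example):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

# Hand a urllib3 retry policy to the connection pool that requests uses.
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503])
session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=retries))

response = session.get('https://www.example.com')
print(response.status_code)
```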
