Why Your Python Requests Timeout May Not Be Timing Out As Expected

Feb 3, 2024 · 2 min read

When using the requests library in Python, you can specify a timeout value to prevent your code from hanging indefinitely if a request gets stuck. However, sometimes you may notice that even when setting a timeout, your requests don't actually timeout as expected.

There are a few reasons why this can happen:

The Timeout Is Not a Total-Time Limit

The timeout parameter in requests.get() and the other request methods is not a cap on the total duration of the request. It applies to establishing the connection and to each individual socket read: an exception is raised only if the server sends no bytes at all for timeout seconds. As long as data keeps arriving, the request never times out.

So a very large (or slowly served) response can keep downloading far longer than the timeout value:

import requests

response = requests.get('http://large-file-server.com/large-file', timeout=1)
# As long as the server sends at least one byte per second, the download
# can run for 10s or more even though timeout was set to 1s

Streaming Responses Have No Overall Time Limit

If you stream a response using stream=True and response.iter_content(), the read timeout applies to each individual chunk read, not to the download as a whole. A server that keeps drip-feeding bytes within the timeout can hold the stream open indefinitely, however long the total transfer takes.
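If you need a hard cap on total download time while streaming, you can enforce a wall-clock deadline yourself. A minimal sketch, assuming you wrap the chunk iterator in a helper (iter_with_deadline and its deadline value are my own, not part of requests):

```python
import time

def iter_with_deadline(chunks, deadline_seconds):
    """Yield items from `chunks` (e.g. response.iter_content(8192)),
    aborting once a wall-clock deadline has passed."""
    start = time.monotonic()
    for chunk in chunks:
        # Check elapsed wall-clock time before handing out each chunk
        if time.monotonic() - start > deadline_seconds:
            raise TimeoutError(f"download exceeded {deadline_seconds}s deadline")
        yield chunk

# Hypothetical usage with a streamed response:
# with requests.get(url, stream=True, timeout=(3.05, 10)) as r:
#     for chunk in iter_with_deadline(r.iter_content(8192), 30):
#         handle(chunk)
```

The per-read timeout still guards against a fully silent server; the deadline guards against a slow drip that never triggers it.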

Use Separate Connect and Read Timeouts

To control the connection and read phases separately, pass a (connect, read) tuple as the timeout value. The first number limits how long to wait while establishing the connection; the second limits how long to wait between bytes once connected:

requests.get('https://website.com', timeout=(3.05, 27))

Now the connection attempt times out after 3.05s, and the read times out if the server goes silent for 27s.
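If you want to react differently to a failed connection versus a stalled read, requests raises a distinct exception subclass for each phase. A sketch (the fetch wrapper and its messages are illustrative, not part of requests):

```python
import requests

def fetch(url):
    # (3.05, 27): connect timeout, read timeout -- values are illustrative
    try:
        return requests.get(url, timeout=(3.05, 27))
    except requests.exceptions.ConnectTimeout:
        # The TCP/TLS connection could not be established in time
        print("could not connect within 3.05s")
        raise
    except requests.exceptions.ReadTimeout:
        # Connected, but the server stopped sending bytes
        print("connected, but the server went silent for 27s")
        raise
```

Both subclasses inherit from requests.exceptions.Timeout, so a single except requests.Timeout still catches either.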

Handle Timeout Errors

Make sure to properly catch any Timeout errors that may be raised if a request hits the timeout value:

try:
    response = requests.get('https://website.com', timeout=3)
except requests.Timeout:
    # Handle the timeout case
    print('Request timed out')
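Once timeouts raise cleanly, a common way to handle them is to retry a few times before giving up. A small sketch (get_with_retries and its defaults are illustrative helpers, not part of requests; the session parameter exists so a caller can inject their own):

```python
import requests

def get_with_retries(url, timeout=(3.05, 10), retries=3, session=None):
    """GET a URL, retrying on timeout up to `retries` attempts."""
    sess = session or requests.Session()
    last_exc = None
    for _ in range(retries):
        try:
            return sess.get(url, timeout=timeout)
        except requests.Timeout as exc:
            last_exc = exc  # remember the failure and try again
    # All attempts timed out; surface the last error to the caller
    raise last_exc
```

For production use, urllib3's built-in Retry support (mounted via an HTTPAdapter) covers this more robustly, but the loop above shows the basic idea.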

Setting timeouts in requests can prevent hanging requests, but pay close attention to what the timeout values actually apply to. Use separate connect and read timeouts and proper error handling to make sure requests fail fast as expected.
