Handling Timeouts Gracefully with aiohttp in Python

Feb 22, 2024 · 2 min read

When building asynchronous web applications and APIs in Python with the popular aiohttp library, properly handling timeouts is essential to ensure robustness. Timeouts can occur for various reasons - slow networks, overloaded servers, etc. In this article, we'll explore how to configure and handle timeouts gracefully in aiohttp.

Setting Request Timeouts

We can specify timeouts when making requests with aiohttp using the timeout parameter. This accepts a ClientTimeout object:

import asyncio
import aiohttp

async def main():
    timeout = aiohttp.ClientTimeout(total=60)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get('http://example.com') as response:
            # process the response body
            print(await response.text())

asyncio.run(main())

This sets the total timeout to 60 seconds, covering the entire operation from connection to the end of the response read. The ClientTimeout class also accepts connect, sock_connect, and sock_read arguments for configuring those phases individually.
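As a sketch, the granular fields can be combined like this (the specific values are illustrative, not recommendations):

```python
import aiohttp

# Illustrative limits: fail if the whole request exceeds 60s,
# the TCP connect exceeds 10s, or a single socket read stalls for 15s.
timeout = aiohttp.ClientTimeout(total=60, connect=10, sock_read=15)

print(timeout.total, timeout.connect, timeout.sock_read)
```

Any field left unset falls back to its default, so you can tighten just the phase that matters for your workload.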

Handling Timeouts with try/except

We should wrap our requests in try/except blocks to catch timeouts:

try:
    async with session.get('http://example.com') as response:
        ...  # process response
except asyncio.TimeoutError:
    ...  # handle timeout

When a timeout fires, aiohttp raises asyncio.TimeoutError (note: ClientTimeout is only the configuration object, not an exception). We can catch it and respond appropriately by retrying, returning a fallback response, or displaying an error message to the user.
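The retry option can be sketched as a small helper. fetch_with_retries below is a hypothetical function, not part of aiohttp; it takes any callable that returns a coroutine and retries on timeout with exponential backoff:

```python
import asyncio

async def fetch_with_retries(fetch, retries=3, backoff=0.5):
    # Retry a coroutine-producing callable when it times out,
    # doubling the delay between attempts; re-raise on the last try.
    for attempt in range(retries):
        try:
            return await fetch()
        except asyncio.TimeoutError:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(backoff * 2 ** attempt)
```

You would call it as `await fetch_with_retries(lambda: get_page(session, url))`, keeping the retry policy separate from the request code itself.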

Configuring Server Timeout

If building an aiohttp server, we can control how long idle keep-alive connections stay open by passing keepalive_timeout to run_app (run_app does not accept a generic timeout argument):

web.run_app(app, keepalive_timeout=60)

This will close client connections that remain idle for longer than 60 seconds.
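The same try/except pattern applies inside a server handler: slow downstream work can be bounded with the standard library's asyncio.wait_for, independent of aiohttp. A minimal sketch (slow and the 0.1s limit are illustrative):

```python
import asyncio

async def slow():
    # Stand-in for a slow downstream call (database, upstream API, etc.).
    await asyncio.sleep(1)
    return "done"

async def main():
    try:
        # Abandon the call if it takes longer than 0.1 seconds.
        return await asyncio.wait_for(slow(), timeout=0.1)
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))  # → timed out
```

In a real handler you would return a fallback response (e.g. HTTP 504) from the except branch instead of a string.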

Key Takeaways

  • Use ClientTimeout to configure request timeouts in aiohttp
  • Wrap requests in try/except blocks to catch asyncio.TimeoutError
  • Handle timeouts by retrying, returning a fallback response, or displaying an error
  • Configure idle-connection timeouts on aiohttp servers with keepalive_timeout
  • Careful timeout handling keeps our asynchronous apps performant and robust
