Making Asynchronous HTTP Requests in Python with aiohttp Connectors

Feb 22, 2024 · 2 min read

The aiohttp library is a powerful tool for making asynchronous HTTP requests in Python. One key component is the aiohttp.TCPConnector, which manages the connection pooling for HTTP and HTTPS requests.

Why Use a Connector?

Without connection pooling, every HTTP request opens a new TCP connection. This is inefficient, as establishing a connection (and, for HTTPS, completing a TLS handshake) adds overhead and latency.

A connector allows request calls to reuse open connections from a pool, avoiding new connections where possible. This improves performance.
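To illustrate, here is a minimal sketch (the URLs are placeholders) where many concurrent requests share a single session, and therefore a single connection pool:

```python
import asyncio
import aiohttp

async def fetch_status(session, url):
    # Each request borrows a connection from the session's shared pool
    async with session.get(url) as resp:
        return resp.status

async def fetch_all(urls):
    # One session (and one underlying connector) for all requests, so
    # connections to the same host are reused instead of reopened
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_status(session, u) for u in urls))

# Example usage (placeholder URL):
# statuses = asyncio.run(fetch_all(["https://example.com"] * 5))
```

Because all five requests above target the same host, the pool can serve them over a handful of reused connections rather than opening five from scratch.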

Key Connector Options

Here are some key options to configure on a TCPConnector:

  • limit - The total number of open connections in the pool. Defaults to 100. Increase for higher concurrency, decrease to limit resources.
  • limit_per_host - The number of connections per host. Defaults to 0 (no limit). Set to limit connections per backend server.
  • enable_cleanup_closed - Forcefully clean up transports left open when a peer does not complete the SSL shutdown properly. Defaults to False. Enable it if lingering closed connections accumulate.
  • force_close - Force closed connections to be discarded rather than reused. Defaults to False. Set to True if you encounter broken connections.
For example, here is how to create a connector with customized options (the values are illustrative):

    connector = aiohttp.TCPConnector(
        limit=50,           # at most 50 open connections in the pool
        limit_per_host=10,  # at most 10 connections per backend host
    )

We can then pass it when creating a client session:

    async with aiohttp.ClientSession(connector=connector) as session:
        ... # Make requests using this session

Tuning Timeouts

The connector also manages keepalive_timeout, which controls how long idle connections stay in the pool before being closed. Timeouts for initial connection establishment are configured separately on the session, via the connect field of aiohttp.ClientTimeout.

Tuning these and other options can optimize performance for your specific HTTP workloads.
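As a sketch, the idle-connection timeout lives on the connector while per-request timeouts live on the session (the specific values here are illustrative, not recommendations):

```python
import aiohttp

# Per-request timeouts are configured on the session via ClientTimeout
timeout = aiohttp.ClientTimeout(
    total=30,   # the whole request must finish within 30 seconds
    connect=5,  # initial connection establishment within 5 seconds
)

async def make_session():
    # Idle keepalive connections are dropped from the pool after 15 seconds
    connector = aiohttp.TCPConnector(keepalive_timeout=15)
    return aiohttp.ClientSession(connector=connector, timeout=timeout)
```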


The aiohttp.TCPConnector manages performant connection pooling and reuse in aiohttp. Tuning its options like limits, cleanups, and timeouts is key to optimizing asynchronous IO performance. Careful connector configuration can lead to faster and more robust HTTP clients and services in Python.
