Using aiohttp for Easy and Powerful Reverse Proxying in Python

Mar 3, 2024 · 4 min read

Reverse proxying is an incredibly useful technique for forwarding requests from one server to another in a transparent way. This opens up all sorts of possibilities like load balancing, centralized authentication, caching, and more.

The Python aiohttp library makes setting up a reverse proxy simple and easy, while still providing a powerful and customizable solution. In this article, I'll walk through how to use aiohttp to build a basic reverse proxy, explain the core concepts, and show some more advanced usage examples.

What Exactly is a Reverse Proxy?

Simply put, a reverse proxy is a server that forwards requests to one or more backend servers transparently. When a client sends a request to a reverse proxy, the proxy forwards it to the appropriate backend server, gets the response, and then sends it back to the client.

This allows the backend servers to focus on serving application logic while the proxy handles tasks like security, caching, and compression. The client has no knowledge that it's talking to a proxy rather than the real server.

Some common examples where reverse proxies are used:

  • Load balancing - The proxy can distribute requests across multiple backend servers to spread load.
  • Web acceleration - Caching and compressing responses at the proxy relieves work from the backend.
  • Security - The proxy can add authentication, SSL encryption, rate limiting, etc.

Building a Simple Reverse Proxy with aiohttp

    The aiohttp library includes both an async web server (aiohttp.web) and an async HTTP client (aiohttp.ClientSession), which together make building a proxy straightforward. Here is a simple example:

    import aiohttp
    from aiohttp import web

    async def handle_request(request):
        remote_url = "http://localhost:8080" + request.rel_url.path_qs
        session = request.app["client_session"]
        async with session.request(request.method, remote_url) as resp:
            text = await resp.text()
            return web.Response(text=text, status=resp.status,
                                content_type=resp.content_type)

    async def on_startup(app):
        app["client_session"] = aiohttp.ClientSession()

    async def on_cleanup(app):
        await app["client_session"].close()

    app = web.Application()
    app.router.add_route("*", "/{tail:.*}", handle_request)
    app.on_startup.append(on_startup)
    app.on_cleanup.append(on_cleanup)

    if __name__ == "__main__":
        web.run_app(app, port=8000)

    Let's break down what's happening:

  • A ClientSession is created on startup, stored on the app under the client_session key, and closed again on shutdown.
  • The handle_request handler gets called for every incoming request, whatever its method or path.
  • It reconstructs the full URL to the backend server from the request's path and query string.
  • Using the session, a matching request is made to the backend server.
  • The response body, status, and content type are read and returned to the client.

    And that's it! Any request gets proxied transparently to the backend server.
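
One detail the simple version glosses over: a proxy should not copy hop-by-hop headers (like Connection or Transfer-Encoding) from the backend response to the client verbatim. A minimal filter sketch (the function name is my own):

```python
# Hop-by-hop headers (per RFC 7230) apply to a single connection
# and must not be forwarded onward by a proxy.
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailers", "transfer-encoding", "upgrade",
}

def forwardable_headers(headers):
    """Return only the end-to-end headers from a backend response."""
    return {k: v for k, v in headers.items() if k.lower() not in HOP_BY_HOP}
```

The result can then be passed as the headers argument when building the client-facing response.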

    Handling Multiple Backends

    Right now this proxies everything to one backend URL. To support multiple backends, you can dynamically set the remote_url based on the request path, headers, etc.

    For example, /api routes could proxy to one API server, while /blog routes proxy to another blog application server.
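
As a sketch, with made-up backend addresses, the handler could pick a base URL by the first matching path prefix:

```python
# Hypothetical backend addresses; adjust for your own deployment.
BACKENDS = {
    "/api": "http://localhost:8081",
    "/blog": "http://localhost:8082",
}
DEFAULT_BACKEND = "http://localhost:8080"

def pick_backend(path: str) -> str:
    """Return the backend base URL whose prefix matches the request path."""
    for prefix, base in BACKENDS.items():
        if path.startswith(prefix):
            return base
    return DEFAULT_BACKEND
```

The handler would then build remote_url from pick_backend(path) plus the request path instead of hard-coding a single base URL.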

    Streaming Responses

    Sometimes you may want to stream a response instead of loading it all into memory. This can be done by creating an aiohttp StreamResponse, like:

    # proxy_resp is the backend ClientResponse obtained via the client session
    resp = aiohttp.web.StreamResponse(status=proxy_resp.status)
    if proxy_resp.content_length is not None:
        resp.content_length = proxy_resp.content_length
    resp.content_type = proxy_resp.content_type
    await resp.prepare(request)
    async for chunk in proxy_resp.content.iter_chunked(1024):
        await resp.write(chunk)
    await resp.write_eof()
    return resp

    This streams the content from the proxy response through to the client response.

    Advanced Proxying Techniques

    While building a basic reverse proxy is easy, aiohttp provides ways to construct more advanced proxies too.

    Customizing the Resolver

    The client's TCPConnector uses a resolver under the hood to determine which addresses to connect to for each backend hostname.

    The default resolver simply performs a DNS lookup on the event loop. But you can also plug in a custom resolver to pin hostnames to fixed addresses, query an alternative DNS service, or spread connections across several backend IPs. (For backends listening on UNIX domain sockets, aiohttp ships a separate UnixConnector.)

    For example, here is a resolver that sends every hostname to one fixed backend address:

    import socket
    import aiohttp
    from aiohttp.abc import AbstractResolver

    class StaticResolver(AbstractResolver):
        """Resolve every hostname to one fixed backend address."""
        def __init__(self, host, port):
            self._host, self._port = host, port
        async def resolve(self, host, port=0, family=socket.AF_INET):
            return [{"hostname": host, "host": self._host, "port": self._port,
                     "family": family, "proto": 0, "flags": 0}]
        async def close(self):
            pass

    resolver = StaticResolver("127.0.0.1", 8080)
    session = aiohttp.ClientSession(connector=aiohttp.TCPConnector(resolver=resolver))

    Now every hostname the session looks up resolves to the fixed backend address, while the TCPConnector continues to pool and reuse the underlying connections.
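
A resolver along these lines can also do simple client-side load balancing by rotating through several backend addresses. The address-picking logic is just a round robin (the addresses here are invented):

```python
import itertools

# Hypothetical pool of backend addresses to rotate through.
BACKEND_ADDRS = [("10.0.0.1", 8080), ("10.0.0.2", 8080), ("10.0.0.3", 8080)]
_cycle = itertools.cycle(BACKEND_ADDRS)

def next_backend():
    """Return the next backend address, round-robin."""
    return next(_cycle)
```

A resolve() implementation could then return the address from next_backend() instead of a fixed host and port.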

    Adding Middleware to the Proxy

    For ultimate control over the server side, aiohttp's web middlewares let you intercept every request before it is proxied and every response before it goes back to the client.

    This allows implementing custom caching, authentication logic, request rewriting, and more.

    Here is a simple example that logs every proxied request:

    import logging
    from aiohttp import web
    logging.basicConfig(level=logging.INFO)
    @web.middleware
    async def logging_middleware(request, handler):
        logging.info("Proxying request %s %s", request.method, request.rel_url)
        return await handler(request)
    app = web.Application(middlewares=[logging_middleware])

    The possibilities are endless when combining middlewares!
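
For instance, an authentication check before forwarding can be as small as comparing a bearer token; the token value and function name here are made up:

```python
# Hypothetical shared secret; in practice load it from configuration.
API_KEY = "secret-token"

def is_authorized(auth_header):
    """Check the Authorization header before a request is proxied onward."""
    return auth_header == f"Bearer {API_KEY}"
```

A middleware (or the request handler itself) would call this and return a 401 response to the client when the check fails.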


    Hopefully this gives you a good overview of how to leverage aiohttp for building Python reverse proxy applications, both simple and advanced.

    The aiohttp documentation goes into more depth on all the configuration options and customization possible around proxying.
