Unlocking Async Performance with Asyncio Redis

Mar 25, 2024 · 3 min read

Redis is a popular in-memory data store known for its speed and versatility. By combining Redis with Python's asyncio module, you can build extremely fast and scalable applications. In this guide, we'll explore how to use Redis in an async way to maximize performance.

Why Asyncio?

Python's asyncio module allows you to write code in an asynchronous, non-blocking style. Instead of blocking while an IO-bound operation like a network request completes, the event loop suspends the waiting coroutine and runs other coroutines in the meantime.

Here's a quick example:

import asyncio

async def fetch_data():
    print('fetching...')
    await asyncio.sleep(2) # pretend waiting for IO
    print('done!')

async def print_numbers():
    for i in range(10):
        print(i)
        await asyncio.sleep(0.2)

async def main():
    task1 = asyncio.create_task(fetch_data())
    task2 = asyncio.create_task(print_numbers())

    await task1
    await task2

asyncio.run(main())

This runs fetch_data and print_numbers concurrently without blocking the event loop. By avoiding unnecessary waiting, asyncio lets you achieve very high throughput.
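If you don't need the task handles, asyncio.gather is a common, more compact alternative; this variant of main reuses the two coroutines defined above:

async def main():
    # gather schedules both coroutines concurrently and waits for both to finish
    await asyncio.gather(fetch_data(), print_numbers())

asyncio.run(main())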

Async Usage with aioredis

To use Redis in an async way, we'll use the handy aioredis library. Here's how we might implement an async cache:

import asyncio
import aioredis

async def get_cache_data(key):
    # from_url returns a pool-backed client; no await is needed in aioredis 2.x
    redis = aioredis.from_url("redis://localhost")
    value = await redis.get(key)
    await redis.close()
    return value

async def main():
    data = await get_cache_data("my-key")
    print(f"Cached data is {data}")

asyncio.run(main())

The key lines are:

  • aioredis.from_url() creates a client connected to Redis
  • await redis.get() sends a command and waits asynchronously for the result
  • await redis.close() releases the connection when done (a more defensive variant is sketched below)
  • All Redis commands are available as async methods like get, and groups of commands can be pipelined to cut round-trips
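Since the connection should be released even when a command raises, a slightly more defensive sketch of the same lookup wraps it in try/finally:

async def get_cache_data(key):
    redis = aioredis.from_url("redis://localhost")
    try:
        return await redis.get(key)
    finally:
        # always release the connection, even if the GET fails
        await redis.close()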

Pipelining Commands

In addition to running single commands asynchronously, aioredis lets you pipeline groups of commands for better performance:

async def write_cache_data(key, value):
    redis = aioredis.from_url("redis://localhost")

    # queue both commands on a pipeline, then send them in one round-trip
    pipe = redis.pipeline()
    pipe.set(key, value)
    pipe.expire(key, 60)
    await pipe.execute()

    await redis.close()

asyncio.run(write_cache_data("my-key", "cached-value"))

Here we pipeline the SET and EXPIRE commands together. Queuing them on the pipeline and awaiting pipe.execute() batches them into a single request instead of two separate round-trips.

Pipelining avoids extra network overhead and significantly speeds up sequences of commands. Remember to close the client with await redis.close() when you are done with it.
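If the batch should also apply atomically, a pipeline can be wrapped in MULTI/EXEC. A minimal sketch, assuming aioredis 2.x where pipeline(transaction=True) provides this (write_cache_atomically is just an illustrative name):

async def write_cache_atomically(key, value):
    redis = aioredis.from_url("redis://localhost")

    # transaction=True wraps the queued commands in MULTI/EXEC
    pipe = redis.pipeline(transaction=True)
    pipe.set(key, value)
    pipe.expire(key, 60)
    await pipe.execute()

    await redis.close()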

Pub/Sub with Async Iteration

Redis pub/sub is a great way to distribute messages. aioredis makes it easy to subscribe asynchronously:

async def subscribe(channel_name):
    redis = aioredis.from_url("redis://localhost")

    # create a PubSub object and subscribe to the channel
    pubsub = redis.pubsub()
    await pubsub.subscribe(channel_name)

    # listen() yields subscription events and messages as they arrive
    async for message in pubsub.listen():
        if message["type"] == "message":
            # process message
            print(message["data"])

asyncio.run(subscribe("my-channel"))

We use the async iterator pubsub.listen() to asynchronously wait for and receive messages as they arrive. This keeps our event loop running instead of blocking.
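For completeness, here is a minimal publisher sketch to pair with the subscriber above; the channel and message are just examples:

async def publish(channel_name, message):
    redis = aioredis.from_url("redis://localhost")

    # publish returns the number of subscribers that received the message
    await redis.publish(channel_name, message)

    await redis.close()

asyncio.run(publish("my-channel", "hello"))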

Best Practices

Here are some tips for smooth sailing with asyncio Redis:

  • Use connection pooling - Opening a separate connection for every operation adds overhead. Clients created with aioredis.from_url() are backed by a connection pool, and you can cap its size with max_connections (see the sketch after this list).
  • Watch out for blocking code - Any blocking call like time.sleep() will pause the entire event loop. Use await asyncio.sleep() instead.
  • Scale with multiple event loops - For CPU-bound processing, run separate event loops across multiple threads or processes.
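As referenced in the first tip, a minimal pooling sketch, assuming aioredis 2.x where from_url() forwards max_connections to the underlying pool (get_user and the key format are illustrative):

import asyncio
import aioredis

# one shared, pool-backed client for the whole application
redis = aioredis.from_url("redis://localhost", max_connections=10)

async def get_user(user_id):
    # each concurrent task borrows a connection from the shared pool
    return await redis.get(f"user:{user_id}")

async def main():
    # 100 concurrent lookups share at most 10 connections
    users = await asyncio.gather(*(get_user(i) for i in range(100)))
    print(len(users))
    await redis.close()

asyncio.run(main())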
Asyncio allows you to build extremely fast, concurrent programs by properly delegating work. By applying its techniques to Redis, we can maximize throughput and take full advantage of Redis' speed. Give it a try on your next Python project!
