Fetching Images Asynchronously with aiohttp in Python

Mar 3, 2024 · 2 min read

When building web applications in Python, you'll often need to download images or files from another server. The aiohttp library makes this easy with asynchronous requests.

Why Asynchronous Requests?

Synchronous requests block the execution of your code until the response is received. With aiohttp, the requests don't block - your code continues executing while the response downloads in the background.

This lets you run many requests concurrently and efficiently, which is perfect for fetching several images at once!
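To make the concurrency concrete, here is a minimal sketch of fetching several URLs at once with asyncio.gather. The function names (fetch_one, fetch_all) and the URLs in the usage comment are illustrative, not part of any library API:

```python
import asyncio

import aiohttp

async def fetch_one(session, url):
    # Fetch a single URL and return the raw bytes
    async with session.get(url) as response:
        return await response.read()

async def fetch_all(urls):
    # Reuse one session for all requests, then run them concurrently
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_one(session, url) for url in urls]
        return await asyncio.gather(*tasks)

# Usage (placeholder URLs):
# images = asyncio.run(fetch_all([
#     'https://example.com/a.png',
#     'https://example.com/b.png',
# ]))
```

asyncio.gather returns the results in the same order as the input list, so each downloaded image lines up with its URL.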

Basic Usage

First install aiohttp:

pip install aiohttp

Then we can fetch an image like so:

import aiohttp
import asyncio

async def get_image(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            image_bytes = await response.read()
            return image_bytes
            
image_data = asyncio.run(get_image('https://example.com/image.png'))

The key things to note:

  • We use async with to create a ClientSession, which is closed automatically when the block exits
  • We await response.read() to get the image bytes
  • The outer function is marked async, and we run it from synchronous code with asyncio.run (the modern replacement for loop.run_until_complete)
  • While a download is in flight, the event loop is free to run other tasks, so nothing else is blocked

Streaming Responses

For very large images, we may want to stream the response to avoid loading the entire file into memory.

We can do this by iterating through the response content instead of calling response.read(). Here f is a file opened for binary writing:

with open('image.png', 'wb') as f:
    async with session.get(url) as response:
        async for chunk in response.content.iter_chunked(1024):
            f.write(chunk)
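Putting the pieces together, a self-contained streaming download might look like this. The function name download_image and the URL/path in the usage comment are placeholders:

```python
import asyncio

import aiohttp

async def download_image(url, path, chunk_size=1024):
    # Stream the response body to disk in fixed-size chunks so the
    # whole file never sits in memory at once
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            with open(path, 'wb') as f:
                async for chunk in response.content.iter_chunked(chunk_size):
                    f.write(chunk)

# Usage (placeholder URL and filename):
# asyncio.run(download_image('https://example.com/image.png', 'image.png'))
```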

Handling Errors

We can catch exceptions from the request using normal try/except blocks:

try:
    async with session.get(url) as response:
        image = await response.read()
except aiohttp.ClientConnectorError:
    print("Connection error")

This covers the basics of fetching images asynchronously with aiohttp!
