Streaming Downloads with Python Requests

Feb 3, 2024 · 2 min read

When downloading files in Python using the requests library, you may want to stream the response body instead of loading the entire file contents into memory at once. Streaming the response allows you to handle large downloads without running out of memory, and start processing the data before the download completes.

Why Stream Downloads?

Normally when you call response.content, requests will load the entire response body into memory. This is fine for small downloads, but can cause issues with larger files.

With stream=True set, response.iter_content() yields the body in chunks, so you can handle large downloads without buffering everything in memory.
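To make the difference concrete, here is a minimal sketch contrasting the two approaches; the URL is just a placeholder:

    import requests

    url = 'http://example.com/large_file.zip'  # placeholder URL

    # Non-streaming: the whole body is read into memory at once
    data = requests.get(url).content

    # Streaming: the body is fetched lazily and consumed chunk by chunk
    with requests.get(url, stream=True) as r:
        for chunk in r.iter_content(chunk_size=8192):
            pass  # process each chunk as it arrives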

Benefits include:

  • Lower memory usage for large downloads
  • Start processing data sooner
  • Chunked downloads are more resilient to failures

Streaming Example

Here's how to stream a download to a file while printing progress to the console:

    import requests

    url = 'http://example.com/large_file.zip'

    r = requests.get(url, stream=True)
    r.raise_for_status()

    # Total size from the Content-Length header, if the server sends one
    total = int(r.headers.get('content-length', 0))
    downloaded = 0

    with open('large_file.zip', 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024 * 8):
            if chunk:  # filter out keep-alive chunks
                f.write(chunk)
                downloaded += len(chunk)
                print(f"Downloaded {downloaded} of {total} bytes...")

The key points are:

  • Set stream=True to avoid loading the content into memory
  • Use iter_content() to iterate over response chunks
  • Write each chunk to a file object

This streams the download while printing progress. We handle the response in chunks instead of one large buffer for lower memory usage.
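If you want a nicer progress display than raw print statements, the same loop works with a progress bar. Here is a sketch that assumes the third-party tqdm package is installed (pip install tqdm) and reuses the same placeholder URL:

    import requests
    from tqdm import tqdm  # assumption: tqdm is installed

    url = 'http://example.com/large_file.zip'  # placeholder URL

    r = requests.get(url, stream=True)
    r.raise_for_status()
    total = int(r.headers.get('content-length', 0))

    # unit='B' and unit_scale=True make tqdm display the count as KB/MB
    with open('large_file.zip', 'wb') as f, tqdm(total=total, unit='B', unit_scale=True) as bar:
        for chunk in r.iter_content(chunk_size=1024 * 8):
            if chunk:
                f.write(chunk)
                bar.update(len(chunk))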

Handling Compressed Streams

For responses compressed with gzip or deflate, iter_content() transparently decompresses each chunk based on the Content-Encoding header, so the bytes you write are already decompressed. If you are streaming text rather than binary data, iter_lines() iterates over the response line by line, and passing decode_unicode=True decodes the bytes to str using the response's character encoding.
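As a sketch, here is how you might stream a large line-oriented text response; the URL is a placeholder and the response is assumed to be plain text with one record per line:

    import requests

    url = 'http://example.com/large_log.txt'  # placeholder: a large text file

    r = requests.get(url, stream=True)
    r.raise_for_status()

    # decode_unicode=True yields str lines decoded with the response's encoding;
    # any gzip/deflate Content-Encoding has already been decompressed for us
    for line in r.iter_lines(decode_unicode=True):
        if line:  # skip blank keep-alive lines
            print(line)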

This provides a simple way to stream large downloads in Python while avoiding buffering the entire file contents in memory!
