Handling Errors Gracefully When URLs Fail in Python Requests

Feb 3, 2024 · 2 min read

One of the handy things about the Python Requests module is its built-in error handling. Requests does a lot of work under the hood to catch errors and raise exceptions when things go wrong. This saves developers time and effort when making HTTP requests.

However, you still need to write code to handle those errors gracefully to make your application resilient. In this article, we'll cover some common errors you may encounter when a URL fails and how to catch and handle them properly.

Common Errors

Some errors you may see when making a request to a bad URL:

  • ConnectionError - Raised when Requests cannot connect to the server at all. The host could be down, unreachable, or not listening on the port you requested.
  • Timeout - The server took too long to respond. Can happen when overloaded servers queue requests.
  • HTTPError - Raised by response.raise_for_status() when the response carries a 4xx (client error) or 5xx (server error) status code.
  • RequestException - Base exception class that catches all of the above.
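Timeout in particular only fires if you ask for it: by default Requests will wait indefinitely for a response, and it raises Timeout only when you pass the timeout argument. A minimal sketch (the URL is a placeholder):

```python
import requests

try:
    # timeout=(connect, read) in seconds; without it, Requests can hang forever
    response = requests.get("https://example.com", timeout=(3, 10))
except requests.Timeout as te:
    print("Request timed out:", te)
except requests.ConnectionError as ce:
    print("Failed to connect:", ce)
```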

Handling Errors

We can use try/except blocks to catch Requests errors:

import requests

try:
    response = requests.get("http://badurl")
except requests.ConnectionError as ce:
    print("Failed to connect:", ce)
except requests.Timeout as te:
    print("Request timed out:", te)
except requests.RequestException as re:
    print("There was an error:", re)

The specific handlers must come before RequestException, since ConnectionError and Timeout are subclasses of it. This allows your program to continue executing despite the failure.
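Note that Requests does not raise HTTPError on its own for 4xx/5xx responses; you opt in by calling response.raise_for_status(). A sketch of a small wrapper (the fetch function and its URL handling are illustrative, not part of the Requests API):

```python
import requests

def fetch(url):
    """Return the response body, or raise for connection and HTTP errors."""
    response = requests.get(url, timeout=5)
    response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx
    return response.text
```

Because HTTPError is itself a RequestException subclass, the except blocks above will catch it too.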

Retrying Failed Requests

For transient errors, you may want to retry the request before giving up. The requests.Session class lets you reuse the same connection settings across attempts:

session = requests.Session()
tries = 3
for attempt in range(tries):
    try:
        response = session.get("http://flakyurl")
        break  # success, stop retrying
    except requests.RequestException as re:
        if attempt == tries - 1:
            raise

This pattern retries up to the tries limit before allowing the exception to bubble up.
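Instead of hand-rolling the loop, Requests can also delegate retries to urllib3's Retry machinery via HTTPAdapter, which adds exponential backoff between attempts; the limits and status list below are illustrative values, not defaults:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 3 times with exponential backoff (backoff_factor=0.5),
# and also retry when the server answers with one of these status codes
retry = Retry(total=3, backoff_factor=0.5,
              status_forcelist=[500, 502, 503, 504])
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount("http://", adapter)
session.mount("https://", adapter)
# session.get(...) now retries transparently before raising
```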

Handling errors gracefully ensures your application remains available despite upstream failures. Requests' exception hierarchy combined with try/except gives you the tools to build resilient applications.
