Troubleshooting 403 Errors: cURL Works but Python Requests Gets Forbidden

Apr 2, 2024 · 3 min read

Have you ever encountered a situation where making a request to an API works fine using cURL on the command line, but fails with a 403 Forbidden status when using the Python Requests library? This frustrating discrepancy can be caused by several factors. In this article, we'll explore the root causes and solutions.

Requests and cURL Handle Sessions Differently

A key difference between cURL and Requests is how they handle cookies and sessions.

cURL can persist cookies across invocations when you point it at a cookie jar file with `-c` (save) and `-b` (send), maintaining session state across requests. For example:

curl -c cookies.txt https://api.example.com/login # logs in, saves cookie to the jar
curl -b cookies.txt https://api.example.com/user # sends saved cookie, returns user data

Meanwhile, standalone requests.get() calls don't share cookies with each other. To maintain session state, use a Requests Session object, which stores and replays cookies automatically:

import requests

s = requests.Session()
s.get("https://api.example.com/login") # logs in, stores cookie
s.get("https://api.example.com/user") # sends stored cookie

So if an API requires session cookies, a cURL command that replays them (via a cookie jar or a copied Cookie header) may work while standalone Requests calls fail with 403 Forbidden or 401 Unauthorized.
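You can verify offline what a Session will actually send by preparing a request without sending it. This is a minimal sketch: the cookie name, value, and domain are hypothetical placeholders, not a real API.

```python
import requests

# Simulate a cookie the server would have set during login
# ("sessionid", "abc123", and the domain are placeholders)
s = requests.Session()
s.cookies.set("sessionid", "abc123", domain="api.example.com")

# Prepare (but don't send) a follow-up request so we can inspect its headers
req = requests.Request("GET", "https://api.example.com/user")
prepared = s.prepare_request(req)

print(prepared.headers.get("Cookie"))  # sessionid=abc123
```

If the Cookie header comes out empty here, the 403 is almost certainly a session problem rather than an authorization one.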

Check for CSRF Protection Middleware

Many web frameworks like Django and Ruby on Rails combat cross-site request forgery (CSRF) by requiring state-changing requests (typically POSTs) to include a CSRF token as a cookie plus a matching header or form field.

If your working cURL command was copied from the browser's dev tools, it likely carries the CSRF token and session cookie along with it, while a fresh Requests call sends neither, so the server rejects it with 403.

Check your API framework's docs to see if it has CSRF middleware enabled. If so, you'll need to make sure your Requests calls include the proper CSRF token.
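As a concrete sketch, here is the Django-style pattern: read the CSRF cookie the server sets, then echo it back in a header on the POST. The cookie and header names ("csrftoken", "X-CSRFToken") are Django's defaults; the URL and values are placeholders, and the server response is simulated so the example runs offline.

```python
import requests

s = requests.Session()

# A real flow would first GET a page so the server sets the CSRF cookie:
#   s.get("https://api.example.com/form")
# Here we simulate that the server set it:
s.cookies.set("csrftoken", "token123", domain="api.example.com")

# Echo the token back in the header Django checks on unsafe methods
token = s.cookies.get("csrftoken")
req = requests.Request(
    "POST",
    "https://api.example.com/submit",
    headers={"X-CSRFToken": token},
    data={"name": "test"},
)
prepared = s.prepare_request(req)

print(prepared.headers["X-CSRFToken"])  # token123
```

Other frameworks use different names (e.g. Rails expects an X-CSRF-Token header), so check your framework's docs for the exact cookie and header to use.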

Double Check Authorization Headers

APIs often use request headers like Authorization to pass API keys, OAuth tokens, or basic auth credentials.

Make sure your Python code is actually sending the expected authorization header(s).

For example, you may have forgotten to pass the headers along in the Requests call:

# Oops, forgot to pass headers!

import requests

url = "https://api.acme.com/users"
headers = {"Authorization": "Bearer foo"} 

r = requests.get(url) # 403 Forbidden

Or maybe the header names or values don't exactly match what the API expects. Double-check that everything matches between the working cURL request and the failing Python code.
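A quick way to compare is to build a prepared request and print the headers Requests would actually send, then diff them against the `-v` output of the working cURL command. The URL and bearer token below are the placeholders from the snippet above:

```python
import requests

url = "https://api.acme.com/users"
headers = {"Authorization": "Bearer foo"}

# Passing headers= this time; prepare() lets us inspect without sending
prepared = requests.Request("GET", url, headers=headers).prepare()

for name, value in prepared.headers.items():
    print(f"{name}: {value}")
```

Any header present in `curl -v` but missing from this output is a prime suspect for the 403.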

Summary

  • Requests handles sessions and state differently than cURL - make sure to use Session objects.
  • Check for CSRF middleware that may require tokens.
  • Verify Python code passes through expected authorization headers.
Getting 403 errors in Python when cURL works can be frustrating, but methodically comparing the two requests typically reveals the source of the discrepancy. Carefully inspecting authorization headers, cookies, and body data solves most cases.
