Bypassing Cloudflare Error 1015 in Rust

Apr 15, 2024 · 4 min read

If you're into web scraping, you've probably encountered the dreaded Cloudflare Error 1015. It's like hitting a brick wall when you're just trying to gather some data.

Cloudflare is a popular service that many websites use for protection and optimization. While it's great for website owners, it can be a real pain for web scrapers.

What is Cloudflare Error 1015?

Cloudflare Error 1015 is Cloudflare's rate-limiting error: the response page carries the message "You are being rate limited," typically delivered with an HTTP 429 (Too Many Requests) status code. In other words, you're making too many requests too quickly, and Cloudflare is putting the brakes on your scraping.

This error is triggered by Cloudflare's bot protection mechanisms. They're designed to prevent malicious bots from overwhelming websites with requests.

How to Identify Cloudflare Error 1015

When you encounter Cloudflare Error 1015, you'll usually see a message like this in your scraper's output:

Cloudflare Error 1015 - You are being rate limited.

You might also see a more detailed error page if you visit the URL in your browser. It will likely mention rate limiting and ask you to complete a CAPTCHA to prove you're human.
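In code, a rate-limited response typically arrives with an HTTP 429 status, often accompanied by a Retry-After header telling you how long to wait. Here is a minimal std-only sketch of parsing the seconds form of that header (the `parse_retry_after` helper name is mine; Retry-After can also carry an HTTP date, which this sketch deliberately ignores):

```rust
use std::time::Duration;

// Illustrative helper: parse a Retry-After header given in seconds.
// (The header may also be an HTTP date; that form is not handled here.)
fn parse_retry_after(value: &str) -> Option<Duration> {
    value.trim().parse::<u64>().ok().map(Duration::from_secs)
}

fn main() {
    // e.g. a 429 response with the header "Retry-After: 30"
    println!("{:?}", parse_retry_after("30"));
}
```

When the header parses, sleeping for that duration before retrying is the politest way to recover.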

Why Does Cloudflare Error 1015 Happen?

Cloudflare Error 1015 happens because your scraper is making too many requests too quickly. This triggers Cloudflare's bot protection, which thinks you're a malicious bot trying to overload the website.

There are a few reasons why your scraper might be making too many requests:

  • You're not adding any delays between requests
  • You're using a high number of concurrent requests
  • You're not rotating your IP address or user agent

How to Avoid Cloudflare Error 1015

    To avoid triggering Cloudflare's bot protection and getting hit with Error 1015, you need to make your scraper look more human-like. Here are some tips:

    1. Add Delays Between Requests

    One of the easiest ways to avoid Error 1015 is to add delays between your scraper's requests. This makes your scraper look more like a human browsing the site.

    You can use Rust's std::thread::sleep function to add random delays:

    use std::time::Duration;
    use rand::Rng;

    // Make a request
    let response = reqwest::get(url).await.unwrap().text().await.unwrap();

    // Add a random delay between 1 and 5 seconds
    let delay_secs = rand::thread_rng().gen_range(1..=5);
    std::thread::sleep(Duration::from_secs(delay_secs));
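If fixed random delays aren't enough and you still get rate limited, backing off exponentially between retries usually is. A minimal sketch of the delay calculation (the `backoff_delay` helper and its parameters are illustrative, not from any library):

```rust
use std::time::Duration;

// Illustrative helper: exponential backoff for the nth retry, capped at max_ms.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    let ms = base_ms
        .saturating_mul(2u64.saturating_pow(attempt))
        .min(max_ms);
    Duration::from_millis(ms)
}

fn main() {
    // Delays double on each retry: 500ms, 1s, 2s, 4s, then capped at 8s.
    for attempt in 0..5 {
        println!("retry {} -> {:?}", attempt, backoff_delay(attempt, 500, 8_000));
    }
}
```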

    2. Limit Concurrent Requests

    Another way to avoid Error 1015 is to limit the number of concurrent requests your scraper makes. Instead of bombarding the site with multiple requests at once, make them one at a time.

    You can use a simple for loop to make sequential requests:

    let urls = vec![url1, url2, url3];
    for url in urls {
        let response = reqwest::get(url).await.unwrap().text().await.unwrap();
        // Process the response
    }
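If you do want some parallelism, cap it rather than firing every request at once — for example, split the URLs into small batches and process one batch at a time, pausing in between. A std-only sketch of the batching step (the `batches` helper is illustrative):

```rust
// Illustrative helper: split a slice into batches of at most `size` items,
// so a scraper can process one small batch at a time.
fn batches<T: Clone>(items: &[T], size: usize) -> Vec<Vec<T>> {
    items.chunks(size.max(1)).map(|chunk| chunk.to_vec()).collect()
}

fn main() {
    let urls = vec!["u1", "u2", "u3", "u4", "u5"];
    for batch in batches(&urls, 2) {
        // In a real scraper you would fetch this batch, then pause
        // before moving on to the next one.
        println!("{:?}", batch);
    }
}
```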

    3. Rotate IP Addresses and User Agents

    Cloudflare can also identify your scraper by your IP address and user agent string. To avoid this, you can rotate them for each request.

    You can use a proxy service to rotate your IP address. Here's an example using the reqwest library and a proxy:

    let proxy = reqwest::Proxy::http(format!("http://{}:{}", proxy_ip, proxy_port)).unwrap();
    let client = reqwest::Client::builder()
        .proxy(proxy)
        .build()
        .unwrap();
    let response = client.get(url).send().await.unwrap().text().await.unwrap();

    To rotate user agents, you can use an array of user agent strings and select one randomly for each request:

    let user_agents = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36",
        "Mozilla/5.0 (iPhone; CPU iPhone OS 14_4 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Mobile/15E148 Safari/604.1",
        "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    ];
    let user_agent = user_agents[rand::thread_rng().gen_range(0..user_agents.len())];
    let client = reqwest::Client::builder()
        .user_agent(user_agent)
        .build()
        .unwrap();
    let response = client.get(url).send().await.unwrap().text().await.unwrap();
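Random selection works, but a simple round-robin rotator guarantees every user agent in your pool gets used evenly. A std-only sketch (the `UaRotator` type is illustrative, not from any library):

```rust
// Illustrative round-robin user-agent rotator: cycles through the list
// so each request gets the next agent in turn.
struct UaRotator {
    agents: Vec<&'static str>,
    next: usize,
}

impl UaRotator {
    fn new(agents: Vec<&'static str>) -> Self {
        Self { agents, next: 0 }
    }

    fn next_agent(&mut self) -> &'static str {
        let agent = self.agents[self.next % self.agents.len()];
        self.next += 1;
        agent
    }
}

fn main() {
    let mut rotator = UaRotator::new(vec!["agent-a", "agent-b"]);
    // Cycles: agent-a, agent-b, agent-a, ...
    for _ in 0..3 {
        println!("{}", rotator.next_agent());
    }
}
```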

    4. Use Cloudflare Bypassing Techniques

    There are also some more advanced techniques for bypassing Cloudflare's bot protection. These include:

  • Solving CAPTCHAs automatically using services like 2captcha or Anti-Captcha
  • Using a headless browser like Puppeteer or Selenium to simulate human behavior
  • Leveraging Cloudflare's own APIs to get the data you need

    These techniques are more complex and beyond the scope of this article, but they're worth exploring if you're serious about web scraping.


    Cloudflare Error 1015 is a common obstacle for web scrapers, but it's not insurmountable. By making your scraper look more human-like, you can avoid triggering Cloudflare's bot protection and get the data you need.

    Remember to add delays between requests, limit concurrent requests, and rotate your IP address and user agent. If you're still hitting Error 1015, consider exploring more advanced bypassing techniques.
