Solving Cloudflare Redirect Loops with HtmlUnit in Java

Apr 2, 2024 · 3 min read

When scraping or testing websites protected by Cloudflare, you may encounter redirect loops that prevent accessing the final HTML page. This occurs because Cloudflare checks for bots and blocks automated requests to protect sites from abuse. However, there are ways to properly configure HtmlUnit to bypass these protections.

The Cloudflare Challenge

Many sites use Cloudflare to protect against DDoS attacks, spam bots, and other threats. Cloudflare acts as a reverse proxy, sitting in front of the origin web server and applying rules to filter requests.

One of the techniques Cloudflare employs is checking for browser characteristics like cookies, headers, and JavaScript execution. Requests lacking these human-like qualities may be flagged as bots and blocked or endlessly redirected.

This causes problems for tools like HtmlUnit that programmatically request pages. Out of the box, HtmlUnit connects directly without mimicking a real browser closely enough to get past Cloudflare.

WebClient webClient = new WebClient();

HtmlPage page = webClient.getPage(""); 
// Endless redirect loops or access denied

Configuring the WebClient

To properly imitate a browser, our WebClient needs tweaking. Here are key areas to address:

User Agent

We must spoof a real desktop or mobile browser agent string:
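A minimal sketch, assuming HtmlUnit 3.x (`org.htmlunit` packages); passing a `BrowserVersion` gives the client a realistic desktop profile, and `addRequestHeader` can pin an explicit agent string on top of that (the Chrome string below is just an example):

```java
import org.htmlunit.BrowserVersion;
import org.htmlunit.WebClient;

// Instantiate with a real browser profile so the default
// User-Agent and related headers match desktop Chrome.
WebClient webClient = new WebClient(BrowserVersion.CHROME);

// Optionally override the agent string on every request.
webClient.addRequestHeader("User-Agent",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    + "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36");
```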



JavaScript

Enable JavaScript execution in the client:
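Assuming the `webClient` from above, a sketch using HtmlUnit's option flags; being lenient about script errors matters because third-party pages often ship scripts that fail under emulation:

```java
// Turn on the JavaScript engine and tolerate errors
// in scripts we do not control.
webClient.getOptions().setJavaScriptEnabled(true);
webClient.getOptions().setThrowExceptionOnScriptError(false);
```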



Cookies

Allow cookies and maintain them across page requests:

CookieManager cookieManager = new CookieManager();
webClient.setCookieManager(cookieManager); // persist cookies across requests


Caching

Set up cache storage to mimic browser resource caching:

webClient.setCache(new Cache()); // org.htmlunit.Cache
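Putting the pieces together, a hypothetical helper (the class and method names are mine, not HtmlUnit's) that bundles the tweaks above, assuming HtmlUnit 3.x:

```java
import org.htmlunit.BrowserVersion;
import org.htmlunit.Cache;
import org.htmlunit.CookieManager;
import org.htmlunit.WebClient;

public final class BrowserLikeClient {

    // Hypothetical factory: returns a WebClient configured
    // with the browser-emulation tweaks from this article.
    public static WebClient create() {
        WebClient webClient = new WebClient(BrowserVersion.CHROME);
        webClient.getOptions().setJavaScriptEnabled(true);       // run page scripts
        webClient.getOptions().setThrowExceptionOnScriptError(false);
        webClient.getOptions().setRedirectEnabled(true);         // follow 3xx hops
        webClient.setCookieManager(new CookieManager());         // keep cookies between requests
        webClient.setCache(new Cache());                         // cache static resources
        return webClient;
    }
}
```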

Getting Past the Redirects

With a tuned WebClient, we can now access Cloudflare sites properly. For example:

WebClient webClient = new WebClient();

// Apply configurations listed above

HtmlPage page = webClient.getPage("");

// Access granted, extract page content 

However, on some sites an initial redirect to an intermediate URL occurs before you land on the true destination page.

We need to follow these hops programmatically:


HtmlPage page1 = webClient.getPage(""); 

HtmlPage page2 = (HtmlPage) page1.getEnclosingWindow().getTopWindow().getEnclosedPage();

// Extract content from final page2  

Now page2 contains the true protected page content past Cloudflare.
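HtmlUnit normally chases 3xx responses itself, but when following hops by hand it is worth guarding against the very loop Cloudflare can send you into. A minimal, library-free sketch; the `nextHop` function is a stand-in you would back with something like `page.getWebResponse().getResponseHeaderValue("Location")` on a client configured not to auto-redirect:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.UnaryOperator;

public final class RedirectFollower {

    /**
     * Follows redirect hops until the URL stops changing (final page)
     * or repeats (a loop). nextHop maps a URL to its redirect target,
     * returning the URL itself when there is no further redirect.
     */
    public static String follow(String startUrl, UnaryOperator<String> nextHop, int maxHops) {
        Set<String> seen = new LinkedHashSet<>();
        String url = startUrl;
        for (int i = 0; i < maxHops; i++) {
            if (!seen.add(url)) {
                throw new IllegalStateException("Redirect loop at " + url);
            }
            String next = nextHop.apply(url);
            if (next.equals(url)) {
                return url; // no further redirect: final destination
            }
            url = next;
        }
        throw new IllegalStateException("Too many redirects");
    }
}
```

Detecting the loop explicitly gives you a clear error to log instead of a client that silently spins until a timeout.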

Dealing with Bot Detection

Some sites also run custom JavaScript designed to catch automation tools. For example, a timing check:

// Site JS code
var start = new Date().getTime();
while (new Date().getTime() < start + 1000); // busy-wait delay
var took = new Date().getTime() - start;
if (took < 1000) {
  // Client skipped the wait: flag as bot
}

HtmlUnit's JavaScript engine actually executes scripts like this, which is part of what makes it look more browser-like than a bare HTTP client. What we can control is how tolerant the client is of hostile or long-running scripts:

webClient.setJavaScriptTimeout(5000); // abort scripts that run too long
webClient.getOptions().setThrowExceptionOnScriptError(false);


Key Takeaways

  • Cloudflare blocking can cause scraping and testing tools like HtmlUnit to be endlessly redirected or denied access.
  • Properly configuring the WebClient (browser emulation, cookies, caching, etc.) allows bypassing these protections.
  • Additional tweaks to follow redirects and override JS bot detection may be needed on some sites.
  • With the right setup, HtmlUnit can programmatically access sites shielded by Cloudflare.