Building a Simple Proxy Rotator with Rust and reqwest

Oct 2, 2023 · 4 min read

In the early stages of a web crawling project, or when you only need to scale to a few hundred requests, you might want a simple proxy rotator that populates itself now and then from the free proxy pools available on the internet.

We can use a website like https://sslproxies.org/ to fetch public proxies every few minutes and use them in our Rust projects.


If you check the HTML using the inspect tool, you will see that the full list is contained in a table with the id proxylisttable.

The IP and port are the first and second elements in each row.
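The relevant markup looks roughly like this (a simplified sketch from inspecting the page, not the site's exact HTML):

```html
<table id="proxylisttable">
  <thead>
    <tr><th>IP Address</th><th>Port</th><th>Code</th><!-- ... --></tr>
  </thead>
  <tbody>
    <tr><td>203.0.113.1</td><td>8080</td><td>US</td><!-- ... --></tr>
    <tr><td>198.51.100.7</td><td>3128</td><td>DE</td><!-- ... --></tr>
  </tbody>
</table>
```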

We can select that table, iterate over its rows, and pull out the first and second cells of each row.

Let's start by adding the dependencies we'll need:

use reqwest::blocking::Client;
use select::document::Document;
use select::predicate::{Attr, Name};

This gives us reqwest for making HTTP requests (via its blocking client, so we don't need an async runtime) and select for parsing HTML.
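In Cargo.toml, that corresponds to something like the following (version numbers are indicative, not pinned requirements; note reqwest's `blocking` feature, which the synchronous client needs):

```toml
[dependencies]
reqwest = { version = "0.11", features = ["blocking"] }
select = "0.6"
rand = "0.8"
```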

Our basic fetch code looks like this:

let url = "https://sslproxies.org/";
let client = Client::new();

let response = client.get(url).send()?;

This uses reqwest to make a simple GET request to the proxy list URL.

Next we need to parse the HTML to extract the proxies. We can use select for this:

let body = response.text()?;
let document = Document::from(body.as_str());

let mut proxies = Vec::new();

for table in document.find(Attr("id", "proxylisttable")) {
    for row in table.find(Name("tr")) {
        let mut cells = row.find(Name("td"));
        if let (Some(ip), Some(port)) = (cells.next(), cells.next()) {
            proxies.push(Proxy {
                ip: ip.text(),
                port: port.text(),
            });
        }
    }
}

This selects the rows of the proxy table, then pulls out the first and second cells to get the IP and port, skipping any row that has no data cells (such as the header row). We store each proxy in a Proxy struct.

Now let's wrap it in a function we can call periodically:

fn fetch_proxies() -> Result<Vec<Proxy>, reqwest::Error> {
    // request code
    // parse code

    Ok(proxies)
}

And to fetch a random proxy each time:

use rand::seq::SliceRandom;

let proxies = fetch_proxies()?;
let random_proxy = proxies
    .choose(&mut rand::thread_rng())
    .expect("proxy list is empty");

Using rand's choose method to pick a random proxy.

Putting it all together:

use rand::seq::SliceRandom;
use reqwest::blocking::Client;
use select::document::Document;
use select::predicate::{Attr, Name};

#[derive(Debug)]
struct Proxy {
    ip: String,
    port: String,
}

fn fetch_proxies() -> Result<Vec<Proxy>, reqwest::Error> {
    let url = "https://sslproxies.org/";
    let client = Client::new();

    let response = client.get(url).send()?;
    let body = response.text()?;
    let document = Document::from(body.as_str());

    let mut proxies = Vec::new();

    // Restrict the search to the proxy table; the header row has
    // <th> cells instead of <td>, so the if-let skips it.
    for table in document.find(Attr("id", "proxylisttable")) {
        for row in table.find(Name("tr")) {
            let mut cells = row.find(Name("td"));
            if let (Some(ip), Some(port)) = (cells.next(), cells.next()) {
                proxies.push(Proxy {
                    ip: ip.text(),
                    port: port.text(),
                });
            }
        }
    }

    Ok(proxies)
}

fn main() -> Result<(), reqwest::Error> {
    let proxies = fetch_proxies()?;

    let random_proxy = proxies
        .choose(&mut rand::thread_rng())
        .expect("proxy list is empty");

    println!("Using proxy {}:{}", random_proxy.ip, random_proxy.port);

    Ok(())
}

This provides a complete Rust proxy rotator that can be called periodically to fetch and use random proxies. The same structure could be used as part of a web scraper or other HTTP client.

If you want to use this in production and want to scale to thousands of links, then you will find that many free proxies won't hold up under the speed and reliability requirements. In this scenario, using a rotating proxy service to rotate IPs is almost a must.

Otherwise, you tend to get IP blocked a lot by automatic location, usage, and bot detection algorithms.

Our rotating proxy server Proxies API provides a simple API that can solve all IP Blocking problems instantly.

  • With millions of high speed rotating proxies located all over the world
  • With our automatic IP rotation
  • With our automatic User-Agent-String rotation (which simulates requests from different, valid web browsers and web browser versions)
  • With our automatic CAPTCHA solving technology
  • Hundreds of our customers have successfully solved the headache of IP blocks with a simple API.

    A simple API call gives you access to all of it, from any programming language:

    curl "http://api.proxiesapi.com/?key=API_KEY&url=https://example.com"
    

    We have a running offer of 1000 API calls completely free. Register and get your free API Key here.
