Scraping Craigslist Listings with C#

Oct 1, 2023 · 4 min read

This article will explain how to scrape Craigslist apartment listings using C# and HtmlAgilityPack. We will go through each line of code to understand what it is doing.

First install the HtmlAgilityPack NuGet package:

Install-Package HtmlAgilityPack

And include it in your code:

using HtmlAgilityPack;

HtmlAgilityPack allows us to parse and query HTML documents.
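As a quick sanity check before touching Craigslist, here is a minimal, self-contained example of what HtmlAgilityPack does (the HTML snippet is made up for illustration):

```csharp
using System;
using HtmlAgilityPack;

// Load a small HTML fragment and query it with XPath.
var doc = new HtmlDocument();
doc.LoadHtml("<ul><li class='item'>Hello</li></ul>");

var node = doc.DocumentNode.SelectSingleNode("//li[@class='item']");
Console.WriteLine(node.InnerText);  // prints "Hello"
```

The same pattern — load HTML, query it with XPath — is all we need for the real page.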

Next we set the URL to scrape - Craigslist San Francisco apartments:

string url = "https://sfbay.craigslist.org/search/apa";

We use HttpClient to get the page content:

HttpClient client = new HttpClient();
string html = client.GetStringAsync(url).Result;
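One caveat: Craigslist, like many sites, may reject requests that don't look like they come from a browser. A common workaround — a sketch, not a guarantee — is to set a browser-like User-Agent header before fetching (the exact User-Agent string below is just an example):

```csharp
using System.Net.Http;

var client = new HttpClient();
// Many sites reject requests that arrive without a browser-like User-Agent.
client.DefaultRequestHeaders.UserAgent.ParseAdd("Mozilla/5.0 (Windows NT 10.0; Win64; x64)");

// Then fetch as before:
// string html = client.GetStringAsync(url).Result;
```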

Now we can load the HTML into an HtmlDocument:

HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(html);

If you check the page source of a Craigslist search results page, you can see that each listing is generated by a block like this:

<li class="cl-static-search-result" title="Situated in Sunnyvale!, Recycling Center, 1/BD">
            <a href="https://sfbay.craigslist.org/sby/apa/d/santa-clara-situated-in-sunnyvale/7666802370.html">
                <div class="title">Situated in Sunnyvale!, Recycling Center, 1/BD</div>

                <div class="details">
                    <div class="price">$2,150</div>
                    <div class="location">
                        sunnyvale
                    </div>
                </div>
            </a>
        </li>

Each listing is encapsulated in an <li> element with the cl-static-search-result class. Inside it, the title, price, and location divs hold the data we want, and the <a> tag holds the link to the full listing.
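To see how those class names map to XPath queries, here is a small standalone sketch that parses the sample listing above and pulls out one field by class:

```csharp
using System;
using HtmlAgilityPack;

// The sample listing HTML from the page source above.
var sample = @"<li class='cl-static-search-result' title='Situated in Sunnyvale!, Recycling Center, 1/BD'>
  <a href='https://sfbay.craigslist.org/sby/apa/d/santa-clara-situated-in-sunnyvale/7666802370.html'>
    <div class='title'>Situated in Sunnyvale!, Recycling Center, 1/BD</div>
    <div class='details'>
      <div class='price'>$2,150</div>
      <div class='location'>sunnyvale</div>
    </div>
  </a>
</li>";

var doc = new HtmlDocument();
doc.LoadHtml(sample);

var listing = doc.DocumentNode.SelectSingleNode("//li[@class='cl-static-search-result']");
// The title, price, and location divs are nested inside the <a> tag,
// so we search the whole subtree by class with .// rather than by child position.
var price = listing.SelectSingleNode(".//div[@class='price']")?.InnerText.Trim();
Console.WriteLine(price);  // prints "$2,150"
```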

Craigslist uses <li> tags with class "cl-static-search-result" for listings. We query them:

    var listings = doc.DocumentNode.SelectNodes("//li[@class='cl-static-search-result']");

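One thing to watch for: SelectNodes returns null, not an empty collection, when nothing matches — for example if Craigslist serves a blocked page or changes its layout. A defensive sketch (using an empty page to simulate that case):

```csharp
using System;
using HtmlAgilityPack;

var doc = new HtmlDocument();
doc.LoadHtml("<html><body></body></html>");  // a page with no listings

// SelectNodes returns null, not an empty collection, when nothing matches.
var listings = doc.DocumentNode.SelectNodes("//li[@class='cl-static-search-result']");

if (listings == null)
{
    Console.WriteLine("No listings found - blocked page or changed layout?");
}
```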
    

    We loop through each listing and extract the info:

    foreach (var listing in listings)
    {
      // The title, price, and location divs are nested inside the <a> tag,
      // so select them by class anywhere in the subtree rather than by child position.
      var title = listing.SelectSingleNode(".//div[@class='title']")?.InnerText.Trim();
    
      var price = listing.SelectSingleNode(".//div[@class='price']")?.InnerText.Trim();
    
      var location = listing.SelectSingleNode(".//div[@class='location']")?.InnerText.Trim();
    
      var link = listing.SelectSingleNode("./a")?.GetAttributeValue("href", null);
    
      Console.WriteLine($"{title} {price} {location} {link}");
    }
    

    The full C# code is:

    using System;
    using System.Net.Http;
    using HtmlAgilityPack;
    
    string url = "https://sfbay.craigslist.org/search/apa";
    
    HttpClient client = new HttpClient();
    string html = client.GetStringAsync(url).Result;
    
    HtmlDocument doc = new HtmlDocument();
    doc.LoadHtml(html);
    
    var listings = doc.DocumentNode.SelectNodes("//li[@class='cl-static-search-result']");
    
    foreach (var listing in listings)
    {
      var title = listing.SelectSingleNode(".//div[@class='title']")?.InnerText.Trim();
    
      var price = listing.SelectSingleNode(".//div[@class='price']")?.InnerText.Trim();
    
      var location = listing.SelectSingleNode(".//div[@class='location']")?.InnerText.Trim();
    
      var link = listing.SelectSingleNode("./a")?.GetAttributeValue("href", null);
    
      Console.WriteLine($"{title} {price} {location} {link}");
    }
    

    That covers the full walkthrough of scraping Craigslist listings in C#.

    This is great as a learning exercise, but it is easy to see that a scraper like this is prone to getting blocked, since it makes every request from a single IP. In a scenario where you need to handle thousands of fetches every day, using a professional rotating proxy service to rotate IPs is almost a must.

    Otherwise, you tend to get IP blocked a lot by automatic location, usage, and bot detection algorithms.

    Our rotating proxy server Proxies API provides a simple API that can solve all IP Blocking problems instantly.

  • With millions of high speed rotating proxies located all over the world
  • With our automatic IP rotation
  • With our automatic User-Agent-String rotation (which simulates requests from different, valid web browsers and web browser versions)
  • With our automatic CAPTCHA solving technology

    Hundreds of our customers have successfully solved the headache of IP blocks with a simple API.

    The whole thing can be accessed by a simple API like below in any programming language.

    In fact, you don't even have to take the pain of loading Puppeteer, as we render JavaScript behind the scenes, and you can just get the data and parse it in any language like Node or PHP, or using any framework like Scrapy or Nutch. In all these cases you can just call the URL with render support like so:

    curl "http://api.proxiesapi.com/?key=API_KEY&render=true&url=https://example.com"
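If you are working in C#, the same call can be made with HttpClient. This is a sketch: API_KEY is a placeholder for your own key, and the target URL must be percent-encoded since it is passed as a query-string parameter.

```csharp
using System;
using System.Net.Http;

var client = new HttpClient();

// Build the API URL: the key and the percent-encoded target URL go in the query string.
// API_KEY is a placeholder for your own key.
string target = Uri.EscapeDataString("https://example.com");
string api = $"http://api.proxiesapi.com/?key=API_KEY&render=true&url={target}";

// Then fetch the rendered page:
// string html = client.GetStringAsync(api).Result;
```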
    
    

    We have a running offer of 1000 API calls completely free. Register and get your free API Key.
