Scraping Multiple Pages in JavaScript with Cheerio

Oct 15, 2023 · 4 min read

Web scraping lets you programmatically extract data from websites. Often you need to scrape multiple pages of a site to gather complete information. In this article, we'll see how to scrape multiple pages in JavaScript using the cheerio library.

Prerequisites

To follow along, you'll need:

  • Basic JavaScript knowledge
  • Node.js installed
  • The cheerio and request libraries installed:

    npm install cheerio request
    

Import Modules

We'll need the request module to fetch pages and cheerio to parse the HTML. (Note that the request package is deprecated, but it still works for simple scripts like this one.)

    const request = require('request');
    const cheerio = require('cheerio');
    

Define Base URL

We'll scrape a blog, https://copyblogger.com/blog/. The page URLs follow a pattern:

    https://copyblogger.com/blog/
    https://copyblogger.com/blog/page/2/
    https://copyblogger.com/blog/page/3/
    

Let's define the base URL pattern:

    const baseUrl = 'https://copyblogger.com/blog/page/{}/';
    

The {} allows us to insert the page number.
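For example, substituting page 3 into the pattern yields the third page's URL:

```javascript
const baseUrl = 'https://copyblogger.com/blog/page/{}/';

// replace() swaps the {} placeholder for the page number
const url = baseUrl.replace('{}', 3);
console.log(url); // https://copyblogger.com/blog/page/3/
```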

Specify Number of Pages

Next, we'll specify how many pages to scrape. Let's scrape the first 5 pages:

    const numPages = 5;
    

Loop Through Pages

We can now loop from 1 to numPages and construct the URL for each page:

    for(let pageNum = 1; pageNum <= numPages; pageNum++) {
    
      // Construct page URL
      const url = baseUrl.replace('{}', pageNum);
    
      // Code to scrape each page
    
    }
    

Send Request and Check Response

Inside the loop, we'll use request() to fetch the page URL:

    request(url, (error, response, html) => {

      if (!error && response.statusCode === 200) {
        // Page retrieved, can parse HTML
      } else {
        console.log('Error retrieving page ' + pageNum);
      }

    });

We check both the error and the HTTP status code to make sure the request succeeded before parsing.

Parse HTML with Cheerio

If the request succeeded, we can parse the HTML using cheerio:

    const $ = cheerio.load(html);
    

The $ gives us a jQuery-style selector API to extract data.

Extract Data

Now within the loop we can use $ to find and extract data from each page.

For example, to get article elements:

    const articles = $('article');
    

We can loop through the articles and extract information like the title, URL, author, and categories.

Full Code

Our full code to scrape 5 pages is:

    const request = require('request');
    const cheerio = require('cheerio');

    const baseUrl = 'https://copyblogger.com/blog/page/{}/';
    const numPages = 5;

    for (let pageNum = 1; pageNum <= numPages; pageNum++) {

      // Construct page URL
      const url = baseUrl.replace('{}', pageNum);

      request(url, (error, response, html) => {

        if (!error && response.statusCode === 200) {

          const $ = cheerio.load(html);

          const articles = $('article');

          articles.each((index, element) => {

            // Extract data from each article
            const title = $(element).find('h2.entry-title').text().trim();
            const articleUrl = $(element).find('a.entry-title-link').attr('href');
            const author = $(element).find('div.post-author a').text().trim();

            const categories = [];
            $(element).find('div.entry-categories a').each((i, el) => {
              categories.push($(el).text().trim());
            });

            // Print data
            console.log('Title: ' + title);
            console.log('URL: ' + articleUrl);
            console.log('Author: ' + author);
            console.log('Categories: ' + categories.join(', '));
            console.log();

          });

        } else {
          console.log('Error retrieving page ' + pageNum);
        }

      });

    }
    

Because request() is asynchronous, the loop fires off all five requests at once, so the pages are fetched concurrently and results may print in any order. The code can be extended to scrape any number of pages.
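If you need the pages strictly in order, one option is an async/await version. This is a sketch, not the article's original approach, and it assumes Node 18+ for the global fetch API:

```javascript
// Sketch: fetch pages one at a time, in order, using async/await.
const baseUrl = 'https://copyblogger.com/blog/page/{}/';
const numPages = 5;

// Build the URL for a given page number
function pageUrl(pageNum) {
  return baseUrl.replace('{}', pageNum);
}

async function scrapeAll() {
  for (let pageNum = 1; pageNum <= numPages; pageNum++) {
    const response = await fetch(pageUrl(pageNum));
    if (!response.ok) {
      console.log('Error retrieving page ' + pageNum);
      continue;
    }
    const html = await response.text();
    // ...parse html with cheerio as before
    console.log('Fetched page ' + pageNum + ' (' + html.length + ' bytes)');
  }
}

// Call scrapeAll() to run; each page finishes before the next begins.
```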

Summary

  • Use a base URL pattern with {} placeholder
  • Loop through pages with for loop
  • Construct each page URL
  • Fetch pages with request() and check for errors
  • Parse HTML using cheerio
  • Find and extract data inside loop
  • Print or store scraped data

Web scraping enables collecting large datasets programmatically. With the techniques here, you can scrape and extract information from multiple pages of a website in JavaScript.

While these examples are great for learning, scraping production-level sites can pose challenges like CAPTCHAs, IP blocks, and bot detection. Rotating proxies and automated CAPTCHA solving can help.

Proxies API offers a simple API for fetching rendered pages with built-in proxy rotation, CAPTCHA solving, and evasion of IP blocks. You can fetch rendered pages in any language without configuring browsers or proxies yourself.

This lets you scrape at scale without the headaches of IP blocks. Proxies API has a free tier to get started. Check out the API and sign up for an API key to supercharge your web scraping.

With the power of Proxies API combined with JavaScript libraries like Cheerio, you can scrape data at scale without getting blocked.
