Scraping Multiple Pages in R with rvest and purrr

Oct 15, 2023 · 4 min read

Web scraping is a useful way to programmatically extract data from websites. Often you need to scrape multiple pages of a site to gather complete information. In this article, we'll see how to scrape multiple pages in R using the rvest and purrr packages.


To follow along, you'll need:

  • Basic R knowledge
  • R installed
  • The rvest and purrr packages, loaded as shown below:

    library(rvest)
    library(purrr)
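
    If either package isn't installed yet, grab both from CRAN first:

    install.packages(c("rvest", "purrr"))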

    Define Base URL

    We'll scrape a blog whose page URLs follow a predictable pattern, with only the page number changing from one URL to the next.


    Let's define the base URL pattern (the domain below is a placeholder; substitute the site you actually want to scrape):

    base_url <- "https://example.com/blog/page/%d/"  # placeholder URL pattern

    The %d is a sprintf() placeholder that lets us insert the page number.
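
    For example, with the placeholder pattern above (my stand-in domain), filling in page 3 would give:

    sprintf(base_url, 3)
    #> [1] "https://example.com/blog/page/3/"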

    Specify Number of Pages

    Next, we'll specify how many pages to scrape. Let's scrape the first 5 pages:

    num_pages <- 5

    Loop Through Pages

    We can now loop from 1 to num_pages and construct the URL for each page:

    map(1:num_pages, function(page) {
      # Construct the URL for this page
      url <- sprintf(base_url, page)
      # ... code to scrape each page goes here ...
    })

    Send Request and Parse HTML

    Inside the loop, we'll send a GET request and parse the HTML using rvest:

    page <- read_html(url)
    html_nodes(page, "article") %>%
      map(function(article) {
        # Extract data from each article node here
      })

    This gives us parsed HTML nodes to extract data from.
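
    As a quick sanity check before extracting fields, you can count how many article nodes a page yields. A small illustration, assuming the same placeholder base URL and the "article" selector used above:

    page <- read_html(sprintf(base_url, 1))
    articles <- html_nodes(page, "article")
    length(articles)  # number of posts found on page 1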

    Extract Data

    Now within the loop we can extract information like the title, URL, and author from each article node:

    title <- html_node(article, "h2.entry-title") %>% html_text()
    url <- html_node(article, "a.entry-title-link") %>% html_attr("href")
    author <- html_node(article, ".entry-author a") %>% html_text()  # author selector is site-specific; adjust to the markup

    Full Code

    Our full code to scrape 5 pages is:

    base_url <- ""
    num_pages <- 5
    map(1:num_pages, function(page) {
      url <- sprintf(base_url, page)
      page <- read_html(url)
      html_nodes(page, "article") %>%
        map(function(article) {
          title <- html_node(article, "h2.entry-title") %>% html_text()
          url <- html_node(article, "a.entry-title-link") %>% html_attr("href")
          author <- html_node(article, " a") %>% html_text()
          categories <- html_nodes(article, "div.entry-categories a") %>% 

    This allows us to scrape and extract data from multiple pages sequentially in R. The code can be extended to scrape any number of pages.
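
    A natural next step is to collect everything into a single data frame. Below is one way to restructure the loop for that; it is a sketch that reuses the placeholder base URL and selectors from above, and the scrape_page() helper name is my own:

    library(rvest)
    library(purrr)
    library(dplyr)   # for bind_rows()

    scrape_page <- function(page_num) {
      url  <- sprintf(base_url, page_num)
      html <- read_html(url)
      html_nodes(html, "article") %>%
        map(function(article) {
          tibble::tibble(
            title  = html_node(article, "h2.entry-title") %>% html_text(trim = TRUE),
            url    = html_node(article, "a.entry-title-link") %>% html_attr("href"),
            author = html_node(article, ".entry-author a") %>% html_text(trim = TRUE)
          )
        }) %>%
        bind_rows()
    }

    posts <- map(1:num_pages, scrape_page) %>% bind_rows()

    Each page contributes one tibble of posts, and bind_rows() stacks them into a single table you can filter, sort, or write to CSV.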


    To recap, the key steps were:

  • Use a base URL pattern with a %d placeholder
  • Loop through the pages with map()
  • Construct each page URL with sprintf()
  • Send the request and parse the HTML with rvest
  • Extract data from the HTML nodes
  • Print or store the scraped data

    Web scraping enables collecting large datasets programmatically. With the techniques here, you can scrape and extract information from multiple pages of a website in R.

    While these examples are great for learning, scraping production-level sites can pose challenges like CAPTCHAs, IP blocks, and bot detection. Rotating proxies and automated CAPTCHA solving can help.
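
    For smaller jobs, you can also soften some of these issues in plain R by pacing requests and tolerating pages that fail to load. A minimal sketch, building on the hypothetical scrape_page() helper above and purrr's possibly():

    # Return NULL for a page that errors out instead of stopping the whole run
    safe_scrape <- possibly(scrape_page, otherwise = NULL)

    posts <- map(1:num_pages, function(page_num) {
      Sys.sleep(1)            # polite pause between requests
      safe_scrape(page_num)
    }) %>%
      bind_rows()             # bind_rows() drops the NULL entries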

    Proxies API offers a simple API for rendering pages with built-in proxy rotation, CAPTCHA solving, and evasion of IP blocks. You can fetch rendered pages in any language without configuring browsers or proxies yourself.

    This allows scraping at scale without the headaches of IP blocks. Proxies API has a free tier to get started. Check out the API and sign up for an API key to supercharge your web scraping.

    With the power of Proxies API combined with R packages like rvest, you can scrape data at scale without getting blocked.
