Downloading Images from a Website with Perl and Mojo::DOM

Oct 15, 2023 · 4 min read

In this article, we will learn how to use Perl and the LWP::UserAgent and Mojo::DOM modules to download all the images from a Wikipedia page.

---

Overview

The goal is to extract the names, breed groups, local names, and image URLs for all dog breeds listed on this Wikipedia page. We will collect the image URLs, download the images, and save them to a local folder.

Here are the key steps we will cover:

  1. Import required modules
  2. Send HTTP request to fetch the Wikipedia page
  3. Parse the page HTML using Mojo::DOM
  4. Find the table with dog breed data using a CSS selector
  5. Iterate through the table rows
  6. Extract data from each column
  7. Download images and save locally
  8. Print/process extracted data

Let's go through each of these steps in detail.

Modules

We need these core modules:

use LWP::UserAgent;
use Mojo::DOM;
use File::Path qw(make_path);
  • LWP::UserAgent - sends HTTP requests
  • Mojo::DOM - parses HTML/XML
  • File::Path - creates directories

Send HTTP Request

To download the web page:

my $url = 'https://commons.wikimedia.org/wiki/List_of_dog_breeds';

my $ua = LWP::UserAgent->new;
$ua->agent('PerlScraper');

my $res = $ua->get($url);

We create a user agent object, set a custom user agent string, and fetch the page.
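Before moving on, it is worth checking that the request actually succeeded; is_success and status_line are standard methods on the HTTP::Response object that get returns. A minimal sketch, using a data: URL in place of the real page so it runs without network access (libwww-perl handles the data: scheme via LWP::Protocol::data):

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->agent('PerlScraper');

# A data: URL stands in for the real page here, so no network is needed
my $res = $ua->get('data:text/html,<p>hello</p>');

# Fail loudly with the HTTP status line if anything went wrong
die 'Request failed: ' . $res->status_line unless $res->is_success;

print $res->decoded_content, "\n";
```

In the real script you would pass $url instead of the data: URL; the die line is the part worth keeping.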

Parse HTML

To parse the HTML:

my $dom = Mojo::DOM->new($res->decoded_content);

The $dom object represents the parsed HTML document. Using decoded_content rather than content lets LWP decode the response body according to its declared character set before we parse it.
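To get a feel for what the parser gives us, here is a self-contained sketch on a tiny inline document (the HTML is invented for illustration):

```perl
use strict;
use warnings;
use Mojo::DOM;

# A miniature stand-in for the real page
my $html = '<table class="wikitable sortable">'
         . '<tr><th>Name</th></tr>'
         . '<tr><td>Affenpinscher</td></tr>'
         . '</table>';

my $dom = Mojo::DOM->new($html);

# at() returns the first element matching a CSS selector
print $dom->at('td')->text, "\n";   # Affenpinscher
```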

Find Breed Table

We use a CSS selector to find the table element:

my $table = $dom->find('table.wikitable.sortable')->[0];

This selects the first <table> element that has both the wikitable and sortable CSS classes.
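find returns a Mojo::Collection, so ->[0] picks the first match; at is an equivalent shortcut when only the first match is needed. A small sketch with invented markup:

```perl
use strict;
use warnings;
use Mojo::DOM;

my $html = '<table class="infobox"></table>'
         . '<table class="wikitable sortable" id="breeds"></table>';

my $dom = Mojo::DOM->new($html);

# find() returns every match as a Mojo::Collection
my $table = $dom->find('table.wikitable.sortable')->[0];
print $table->attr('id'), "\n";                                  # breeds

# at() returns the first match directly (or undef if none)
print $dom->at('table.wikitable.sortable')->attr('id'), "\n";    # breeds
```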

Iterate Through Rows

We can iterate through the rows like this:

my @rows = @{ $table->find('tr') };
shift @rows;    # drop the header row

foreach my $row (@rows) {

  # Extract data

}

We loop through each <tr> element, skipping the header row. Note that Mojo::Collection's slice(1) would return only the row at index 1, not everything after the header, so we dereference the collection into a plain list and shift off the first element instead.
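One way to skip the header is to dereference the collection into a plain list and shift off the first element; here it is run against a tiny invented table, to show that only data rows survive:

```perl
use strict;
use warnings;
use Mojo::DOM;

my $html = '<table>'
         . '<tr><th>Name</th></tr>'
         . '<tr><td>Akita</td></tr>'
         . '<tr><td>Beagle</td></tr>'
         . '</table>';

my $table = Mojo::DOM->new($html)->at('table');

# Dereference the Mojo::Collection into a plain list and drop the header
my @rows = @{ $table->find('tr') };
shift @rows;

print scalar(@rows), "\n";                       # 2
print $_->at('td')->text, "\n" for @rows;        # Akita, then Beagle
```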

Extract Column Data

Inside the loop, we extract the column data. We keep the cells as DOM nodes so we can still query inside them; converting every cell to text up front would discard the <img> tag in the last column:

my @cells = $row->find('td, th')->each;
next unless @cells >= 4;

my ($name, $group, $local_name) = map { $_->all_text } @cells[0 .. 2];

my $img = $cells[3]->at('img');
my $photograph = $img ? $img->attr('src') : undef;

# Wikipedia serves protocol-relative image URLs
$photograph = "https:$photograph" if $photograph && $photograph =~ m{^//};

We use all_text to get an element's text content (including text in child elements) and attr to read attributes; at returns the first matching element, or undef if there is none.
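A variant of the per-row extraction that keeps the cells as DOM nodes can be exercised on a single made-up row (the markup below is illustrative, not the real Wikipedia HTML):

```perl
use strict;
use warnings;
use Mojo::DOM;

my $html = '<table><tr>'
         . '<th><a>Akita</a></th>'
         . '<td>Working</td>'
         . '<td>Akita Inu</td>'
         . '<td><img src="//upload.example.org/akita.jpg"></td>'
         . '</tr></table>';

my $row = Mojo::DOM->new($html)->at('tr');

# Keep cells as DOM nodes so the image cell can still be queried
my @cells = $row->find('td, th')->each;
my ($name, $group, $local_name) = map { $_->all_text } @cells[0 .. 2];

my $img        = $cells[3]->at('img');
my $photograph = $img ? $img->attr('src') : undef;
$photograph = "https:$photograph" if $photograph && $photograph =~ m{^//};

print "$name | $group | $local_name | $photograph\n";
# Akita | Working | Akita Inu | https://upload.example.org/akita.jpg
```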

Download Images

To download and save images:

if ($photograph) {

  my $img_data = $ua->get($photograph)->content;

  make_path('dog_images');

  open my $fh, '>:raw', "dog_images/$name.jpg"
    or die "Cannot write dog_images/$name.jpg: $!";
  print $fh $img_data;
  close $fh;

}

We reuse the user agent to download the image and write it to a file. The :raw layer prevents newline translation from corrupting the binary image data, and make_path creates the output directory if it does not already exist.
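Breed names can contain spaces or other characters that are awkward in file names. A small helper that normalizes them before saving (safe_filename is our own invention for this sketch, not part of any module):

```perl
use strict;
use warnings;

# Hypothetical helper: collapse anything outside a safe set into underscores
sub safe_filename {
    my ($name) = @_;
    $name =~ s/[^A-Za-z0-9._-]+/_/g;
    return $name;
}

print safe_filename('Bouvier des Flandres'), "\n";   # Bouvier_des_Flandres
```

In the script above you would write to "dog_images/" . safe_filename($name) . ".jpg" instead of interpolating $name directly.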

Store Extracted Data

We store the extracted data in arrays:

push @names, $name;
push @groups, $group;
push @local_names, $local_name;
push @photographs, $photograph;

The arrays can then be processed as needed.
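Once populated, the parallel arrays can be walked with a shared index, for example to print one summary line per breed (the sample data here is invented):

```perl
use strict;
use warnings;

my @names       = ('Akita', 'Beagle');
my @groups      = ('Working', 'Hound');
my @local_names = ('Akita Inu', 'English Beagle');

# Walk all three arrays with one index
for my $i (0 .. $#names) {
    printf "%-8s %-8s %s\n", $names[$i], $groups[$i], $local_names[$i];
}
```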

And that's it! Here is the full code:

#!/usr/bin/perl

use strict;
use warnings;

use LWP::UserAgent;
use Mojo::DOM;
use File::Path qw(make_path);

# Arrays to store data
my @names;
my @groups;
my @local_names;
my @photographs;

# User agent
my $ua = LWP::UserAgent->new;
$ua->agent('PerlScraper');

# Fetch HTML
my $url = 'https://commons.wikimedia.org/wiki/List_of_dog_breeds';
my $res = $ua->get($url);
die 'Request failed: ' . $res->status_line unless $res->is_success;

# Parse HTML
my $dom = Mojo::DOM->new($res->decoded_content);

# Find table
my $table = $dom->find('table.wikitable.sortable')->[0];
die "Breed table not found" unless $table;

# Output directory
make_path('dog_images');

# Iterate rows, skipping the header row
my @rows = @{ $table->find('tr') };
shift @rows;

foreach my $row (@rows) {

  # Extract data, keeping cells as DOM nodes
  my @cells = $row->find('td, th')->each;
  next unless @cells >= 4;

  my ($name, $group, $local_name) = map { $_->all_text } @cells[0 .. 2];

  my $img = $cells[3]->at('img');
  my $photograph = $img ? $img->attr('src') : undef;
  $photograph = "https:$photograph" if $photograph && $photograph =~ m{^//};

  # Download image
  if ($photograph) {

    my $img_data = $ua->get($photograph)->content;

    open my $fh, '>:raw', "dog_images/$name.jpg"
      or die "Cannot write dog_images/$name.jpg: $!";
    print $fh $img_data;
    close $fh;

  }

  # Store data
  push @names, $name;
  push @groups, $group;
  push @local_names, $local_name;
  push @photographs, $photograph;

}

This provides a complete Perl solution using LWP::UserAgent and Mojo::DOM to scrape data and images from an HTML table. The same approach applies to many other websites.

While these examples are great for learning, scraping production-level sites can pose challenges like CAPTCHAs, IP blocks, and bot detection. Rotating proxies and automated CAPTCHA solving can help.

Proxies API offers a simple API for rendering pages with built-in proxy rotation, CAPTCHA solving, and evasion of IP blocks. You can fetch rendered pages in any language without configuring browsers or proxies yourself.

This allows scraping at scale without the headaches of IP blocks. Proxies API has a free tier to get started. Check out the API and sign up for an API key to supercharge your web scraping.

With the power of Proxies API combined with Perl modules like Mojo::DOM, you can scrape data at scale without getting blocked.
