Passing Data in URLs with urllib Query Parameters in Python

Feb 8, 2024 · 2 min read

When making HTTP requests in Python using the urllib module, you can pass additional data in the URL using query parameters. Query parameters let you append key/value pairs to a URL: a ? separates the query string from the path, and & separates the pairs from each other.

For example:

https://example.com/path?key1=value1&key2=value2

This URL contains two query parameters: key1 with a value of value1, and key2 with a value of value2.

To add query parameters to URLs with urllib, use the urlencode() function to encode the parameters into a string that can be appended to the URL:

import urllib.parse

# Encode the parameters into a key1=value1&key2=value2 string
params = {'key1': 'value1', 'key2': 'value2'}
query_string = urllib.parse.urlencode(params)

# Append it to the base URL (a placeholder here) after a ? separator
url = "https://example.com/path?" + query_string

The urlencode() function takes a dictionary and converts it into a string in the format key1=value1&key2=value2. We then append this string to the base URL, with a ? character in between.
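As a quick sanity check (using example.com as a placeholder host), the standard library can also decode such a string back into a dictionary with parse_qs:

```python
from urllib.parse import urlencode, urlparse, parse_qs

params = {'key1': 'value1', 'key2': 'value2'}
query_string = urlencode(params)
print(query_string)  # key1=value1&key2=value2

# parse_qs decodes the query back into a dict; each value is a list,
# since the same key may appear more than once in a query string.
url = "https://example.com/path?" + query_string
decoded = parse_qs(urlparse(url).query)
print(decoded)  # {'key1': ['value1'], 'key2': ['value2']}
```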

Some key tips when working with query parameters in urllib:

  • Values containing special characters must be URL-encoded; the urlencode() function handles this automatically.
  • The order of parameters typically does not matter.
  • Browsers and servers impose limits on URL length, so avoid extremely long parameter values where possible.
  • Query parameters are visible in the browser URL, which may be a security risk for sensitive data.
Query parameters are very useful for passing data in a simple and standardized way without requiring a request body. Some example use cases:

  • Passing search, filter or pagination options to an API
  • Including configuration options for API responses
  • Passing identifiers or other contextual data
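To illustrate the encoding tip above (the parameter names here are made up), urlencode() percent-encodes special characters automatically, and its doseq flag expands list values into repeated keys:

```python
from urllib.parse import urlencode

# Spaces become + and other special characters are percent-encoded.
print(urlencode({'q': 'hello world', 'filter': 'price>100'}))
# q=hello+world&filter=price%3E100

# doseq=True turns a list value into repeated key=value pairs.
print(urlencode({'tag': ['python', 'urllib']}, doseq=True))
# tag=python&tag=urllib
```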
Overall, urllib query parameters provide a straightforward way to pass data through URLs when making HTTP requests from your Python code. By understanding how to properly encode and append query strings, you can add additional functionality and customization to your urllib requests.
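Putting the pieces together, here is a minimal sketch of a full GET request; the base URL is a placeholder, and the urlopen() call is left commented out so the snippet runs without network access:

```python
import urllib.parse
import urllib.request

# Placeholder base URL for illustration
base_url = "https://example.com/api/search"
params = {'key1': 'value1', 'key2': 'value2'}
url = base_url + "?" + urllib.parse.urlencode(params)

# A Request object carries the full URL, including the query string.
req = urllib.request.Request(url)
print(req.full_url)  # https://example.com/api/search?key1=value1&key2=value2

# To actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status, resp.read()[:100])
```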
