The whole purpose of a proxy server is to hide the origin of a request to a particular website.
This routing might be needed because the user is restricted from accessing a particular website due to their geographic location, or because their usage exceeds the site's rate limits, as often happens in web crawling and web scraping.
A proxy server's main job is to route requests through an IP other than the original one and fetch the data. So it should be easy for proxy service providers like us (Proxies API - we are a rotating proxy network) to just spin up a few servers, each with its own IP, and route our customers through them, right? Wrong! Most of the time, web servers can quickly 'tell' that the requests are coming from an IP range typically used by server pool providers like AWS or DigitalOcean, and they can just block the whole range and be done with it.
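As a rough sketch of what "routing through another IP" means in practice, here is how an HTTP client can be pointed at a proxy using Python's standard library. The proxy address below is a made-up documentation address, not a working proxy - substitute your own:

```python
import urllib.request

# Hypothetical proxy endpoint for illustration only
# (203.0.113.0/24 is a reserved documentation range).
proxy_addr = "http://203.0.113.10:8080"

# Route both HTTP and HTTPS traffic through the proxy.
handler = urllib.request.ProxyHandler({"http": proxy_addr, "https": proxy_addr})
opener = urllib.request.build_opener(handler)

# opener.open("https://example.com") would now leave via the proxy,
# so the target server sees the proxy's IP rather than yours.
```

The same idea applies to any HTTP client: the request is sent to the proxy, which forwards it to the target site and relays the response back.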
These types of IPs are called data center IPs, and they tend to get blocked by most major websites.
Web servers, in contrast, implicitly trust any IP coming from the known IP ranges of prominent ISPs (Internet Service Providers) like AT&T, Vodafone, etc. - the kind of IP you are probably using to browse this article right now.
So having a pool of these genuine, ISP-provided, residential IPs at our disposal makes crawling much more reliable and much less likely to get blocked. Remember that just having a residential IP is not enough: with too much usage, it will also get blocked. Providers like Proxies API offer a rotating residential proxy network with pools of millions of IPs so that web crawlers can scale quickly.
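The rotation idea itself is simple to sketch. The snippet below cycles round-robin through a hypothetical pool of proxy addresses (again, reserved documentation IPs), so requests are spread out and no single IP absorbs enough traffic to get blocked; a real network like Proxies API handles this server-side for millions of IPs, so you would not normally implement it yourself:

```python
from itertools import cycle

# Hypothetical pool of residential proxy addresses (documentation IPs).
pool = [
    "http://198.51.100.1:8080",
    "http://198.51.100.2:8080",
    "http://198.51.100.3:8080",
]

rotation = cycle(pool)

def next_proxy():
    """Return the next proxy in round-robin order.

    Each call hands back a different IP from the pool, wrapping
    around to the start once the pool is exhausted.
    """
    return next(rotation)
```

Each outgoing request then calls `next_proxy()` to pick the IP it will be routed through.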