
Quick Guide to Using Proxies with Axios in Node.js in 2025


You might need a proxy when scraping with Axios for the following reasons:

  • Anonymity: Hide your real IP address from target websites
  • Bypassing Rate Limits: Distribute requests across multiple IP addresses
  • Bypassing Geo-Blocks: Access region-specific content and pricing
  • Bypassing Anti-Bots: Use residential/mobile IPs to avoid CAPTCHAs and blocks

In this guide, we’ll show you exactly how to integrate proxies with Axios, handle rotation, bypass geo-restrictions, and avoid the common pitfalls that break scrapers at scale.

⚠️ The proxies used in this article are dummy values; they stand in for free proxies found with a quick Google search for “free proxies”.

Setting Up Proxy Configuration in Axios

Axios makes proxy configuration surprisingly straightforward through its built-in proxy support, but understanding the configuration object structure and authentication will save you hours of troubleshooting.

Unlike many HTTP clients, Axios handles proxy configuration through a dedicated proxy object that you can set when creating an instance or on an individual request.

The basic proxy configuration requires three essential fields:

  • host: The proxy server’s IP address or hostname
  • port: The port number the proxy service runs on
  • protocol: Either http or https depending on your proxy provider

Here’s how to set up a basic proxy with axios.create():

const axios = require('axios');

// Create an Axios instance with proxy configuration
const client = axios.create({
  proxy: {
    protocol: 'http',
    host: '198.37.121.89',
    port: 8080
  },
  timeout: 10000
});

// Test the proxy configuration
async function checkProxy() {
  try {
    const response = await client.get('https://httpbin.org/ip');
    console.log('Proxy working! Response:', response.data);
  } catch (error) {
    console.error('Proxy failed:', error.message);
  }
}

checkProxy();

When this code runs successfully, you’ll see output showing the request went through your proxy server:

{
  "origin": "198.37.121.89"
}
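The same proxy object also works on an individual request if you don’t need a shared instance:

const axios = require('axios');

// One-off request with a per-request proxy (same dummy proxy as above)
axios.get('https://httpbin.org/ip', {
  proxy: {
    protocol: 'http',
    host: '198.37.121.89',
    port: 8080
  },
  timeout: 10000
})
  .then(response => console.log('Proxy working! Origin:', response.data.origin))
  .catch(error => console.error('Proxy failed:', error.message));

Axios also honors the conventional HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables when no explicit proxy option is set, which lets you switch proxies without touching code.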

Handling Proxy Authentication and HTTPS

Many proxy providers require authentication with username and password credentials to access their proxy servers, and Axios makes this configuration clean and secure.

Most commercial proxy services use basic authentication where you provide a username and password along with the proxy host and port. Axios handles this through the auth object within your proxy configuration.

Here’s how to configure authenticated proxies in Axios:

const axios = require('axios');

// Configure proxy with authentication
const authenticatedClient = axios.create({
  proxy: {
    protocol: 'http',
    host: '198.37.121.89',
    port: 8080,
    auth: {
      username: 'your-proxy-username',
      password: 'your-proxy-password'
    }
  },
  timeout: 15000
});

// Test authenticated proxy
async function testAuthenticatedProxy() {
  try {
    const response = await authenticatedClient.get('https://httpbin.org/headers');
    console.log('Headers received:', response.data.headers);
  } catch (error) {
    console.error('Authentication failed:', error.message);
  }
}

testAuthenticatedProxy();

Testing against https://httpbin.org/headers is a quick sanity check: a 200 response through the proxy confirms your credentials were accepted, and the echoed headers show exactly what the target server receives.

When tunneling works, your HTTPS requests pass through the proxy via the CONNECT method, so the proxy relays your encrypted traffic without being able to decrypt it. This is crucial for maintaining security when scraping HTTPS websites through third-party proxy services. That said, Axios’ built-in proxy option has a history of HTTPS tunneling quirks, so if requests to HTTPS targets fail through an otherwise healthy proxy, the usual fix is to hand the tunneling off to a dedicated agent.
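Here’s a minimal sketch of that workaround using the https-proxy-agent package (the named HttpsProxyAgent import matches v7 of the package; older versions export it differently):

const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent');

// Credentials go directly into the proxy URL (same dummy proxy as above)
const agent = new HttpsProxyAgent(
  'http://your-proxy-username:your-proxy-password@198.37.121.89:8080'
);

const tunneledClient = axios.create({
  httpsAgent: agent, // the agent performs the CONNECT tunnel
  proxy: false,      // disable Axios' built-in proxy handling so the two don't conflict
  timeout: 15000
});

tunneledClient.get('https://httpbin.org/ip')
  .then(response => console.log('Tunneled through:', response.data.origin))
  .catch(error => console.error('Tunnel failed:', error.message));

Setting proxy: false matters here; otherwise Axios may pick up environment proxy variables and fight with the agent.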

Rotate Proxies with a Proxy Pool Manager in Axios

Production scraping requires managing multiple proxies efficiently; requests from a single IP inevitably get rate limited or blocked.

The key is building a smart proxy manager that handles selection and rotation based on rate limits, automatically switching between healthy proxies when one gets blocked.

Here’s a simple but effective ProxyManager class that handles proxy selection and rotation:

const axios = require('axios');

class ProxyManager {
  constructor(proxies) {
    this.proxies = proxies;
    this.currentIndex = 0;
  }

  // Get the next proxy in rotation
  getNextProxy() {
    const proxy = this.proxies[this.currentIndex];
    this.currentIndex = (this.currentIndex + 1) % this.proxies.length;
    return proxy;
  }

  // Create an Axios client using the next proxy in rotation;
  // return the proxy alongside the client so callers can log or track it
  createClient() {
    const proxy = this.getNextProxy();
    const client = axios.create({
      proxy: proxy,
      timeout: 10000,
      headers: {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
      }
    });
    return { client, proxy };
  }
}

// Set up proxy pool
const proxyList = [
  { protocol: 'http', host: '198.37.121.89', port: 8080 },
  { protocol: 'http', host: '203.45.67.12', port: 3128 },
  { protocol: 'http', host: '185.162.231.45', port: 8080 },
  { protocol: 'http', host: '192.168.1.100', port: 8080 }, // Private address, so this one will fail
  { protocol: 'http', host: '45.32.101.24', port: 8080 }
];

const proxyManager = new ProxyManager(proxyList);

// Demonstrate rotation with 15 requests
async function testProxyRotation() {
  for (let i = 1; i <= 15; i++) {
    const { client, proxy } = proxyManager.createClient();

    console.log(`Request ${i} using proxy ${proxy.host}:${proxy.port}`);

    try {
      const response = await client.get('https://scrapingtest.com/cloudflare-rate-limit');
      console.log(`  ✓ Status: ${response.status}`);
    } catch (error) {
      console.log(`  ✗ Failed: ${error.message}`);
    }

    // Small delay between requests
    await new Promise(resolve => setTimeout(resolve, 1000));
  }
}

testProxyRotation();

Here’s what the output looks like when running this rotation:

Request 1 using proxy 198.37.121.89:8080
  ✓ Status: 200
Request 2 using proxy 203.45.67.12:3128
  ✓ Status: 200
Request 3 using proxy 185.162.231.45:8080
  ✓ Status: 200
Request 4 using proxy 192.168.1.100:8080
  ✗ Failed: connect ECONNREFUSED 192.168.1.100:8080
Request 5 using proxy 45.32.101.24:8080
  ✓ Status: 200
Request 6 using proxy 198.37.121.89:8080
  ✗ Failed: Request failed with status code 429
Request 7 using proxy 203.45.67.12:3128
  ✓ Status: 200
Request 8 using proxy 185.162.231.45:8080
  ✗ Failed: Request failed with status code 429
Request 9 using proxy 192.168.1.100:8080
  ✗ Failed: connect ECONNREFUSED 192.168.1.100:8080
Request 10 using proxy 45.32.101.24:8080
  ✓ Status: 200
Request 11 using proxy 198.37.121.89:8080
  ✓ Status: 200
Request 12 using proxy 203.45.67.12:3128
  ✓ Status: 200
Request 13 using proxy 185.162.231.45:8080
  ✓ Status: 200
Request 14 using proxy 192.168.1.100:8080
  ✗ Failed: connect ECONNREFUSED 192.168.1.100:8080
Request 15 using proxy 45.32.101.24:8080
  ✓ Status: 200

This ProxyManager automatically cycles through your proxy pool, so when one proxy hits a rate limit (429) or fails completely, the next request uses a different proxy.

You’ll also notice the dead proxy (192.168.1.100) fails on every attempt, while two healthy proxies intermittently hit 429 rate limits. This mix of hard failures and rate limiting is completely normal in production scraping environments.
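One thing this round-robin manager doesn’t do is remember failures: it keeps handing out the dead 192.168.1.100 proxy on every cycle. Here’s a minimal sketch of a health-aware extension; the three-strikes threshold and the reportResult hook are illustrative choices, not part of the original script:

// Extends the ProxyManager above: bench proxies that fail repeatedly
class HealthAwareProxyManager extends ProxyManager {
  constructor(proxies, maxFailures = 3) {
    super(proxies);
    this.maxFailures = maxFailures; // consecutive failures before a proxy is benched
    this.failureCounts = new Map(); // 'host:port' -> consecutive failure count
  }

  key(proxy) {
    return `${proxy.host}:${proxy.port}`;
  }

  // Same round-robin as before, but skip benched proxies
  getNextProxy() {
    for (let i = 0; i < this.proxies.length; i++) {
      const proxy = super.getNextProxy();
      if ((this.failureCounts.get(this.key(proxy)) || 0) < this.maxFailures) {
        return proxy;
      }
    }
    throw new Error('All proxies are marked unhealthy');
  }

  // Call after every request: success resets the count, failure increments it
  reportResult(proxy, succeeded) {
    const key = this.key(proxy);
    this.failureCounts.set(key, succeeded ? 0 : (this.failureCounts.get(key) || 0) + 1);
  }
}

In the request loop you would call reportResult(proxy, true) after a successful response and reportResult(proxy, false) in the catch block (counting 429s as failures too, if you like), so the dead proxy drops out of rotation after three straight misses.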

Bypassing Anti-Bot with Proxies in Axios

The endpoint we just tested only enforces rate limiting, but real-world websites deploy sophisticated anti-bot systems that detect automation through IP quality and user-behavior analysis.

Modern anti-bot systems analyze everything from your request patterns to your IP reputation, TLS fingerprints, and browser characteristics. Simple proxy rotation often isn’t enough to bypass these advanced protections.

Using the same ProxyManager script from before, let’s test it against a more sophisticated anti-bot system by sending 15 requests to https://scrapingtest.com/cloudflare-challenge.

This will fail with 403 Forbidden all 15 times:

Request 1: Status 403
Request 2: Status 403
Request 3: Status 403
Request 4: Status 403
Request 5: Status 403
Request 6: Status 403
Request 7: Status 403
Request 8: Status 403
Request 9: Status 403
Request 10: Status 403
Request 11: Status 403
Request 12: Status 403
Request 13: Status 403
Request 14: Status 403
Request 15: Status 403

Plain proxies and spoofed headers won’t reliably get past a modern WAF. For production scraping against protected sites, you need dedicated anti-bot bypass tools such as Scrape.do.

Here’s how to handle the same requests through Scrape.do:

const axios = require('axios');

async function testScrapeDo() {
  const token = '<your-token>';
  const targetUrl = 'https://scrapingtest.com/cloudflare-challenge';

  for (let i = 1; i <= 10; i++) {
    const config = {
      method: 'get',
      url: 'https://api.scrape.do/',
      params: {
        url: targetUrl,
        token: token,
        super: true,
        render: true
      }
    };

    try {
      const response = await axios(config);
      console.log(`Request ${i}: Status ${response.status}`);
    } catch (error) {
      console.log(`Request ${i}: Failed (${error.message})`);
    }

    await new Promise(resolve => setTimeout(resolve, 1000));
  }
}

testScrapeDo();

Each request gets 200 OK:

Request 1: Status 200
Request 2: Status 200
Request 3: Status 200
Request 4: Status 200
Request 5: Status 200
Request 6: Status 200
Request 7: Status 200
Request 8: Status 200
Request 9: Status 200
Request 10: Status 200

Scrape.do handles proxy rotation, browser fingerprinting, and anti-bot bypass automatically, so you can focus on extracting data instead of fighting detection systems.

Conclusion

Proxies are a core component of web scraping at scale, and building a robust proxy management system with Axios means handling authentication, rotation, and failure scenarios properly.

But managing proxy pools, dealing with rate limits, and bypassing modern anti-bot systems takes significant development time and ongoing maintenance that most Node.js developers don’t want to handle.

Solutions like Scrape.do take the headache of web scraping away, bypassing geo-blocks and anti-bots easily so you can focus on building your application instead of fighting proxy infrastructure.

Get 1000 free credits and start scraping with Scrape.do


Mert Bekci

Lead Software Engineer


I’m the Lead Software Engineer of Scrape.do, with over 13 years of experience in the software industry.

I’m married and have a son. When I have free time (if my son lets me!), I like reading and sometimes writing technical articles. And yes, I still enjoy watching anime, even if some people call it “cartoons.”