How to Scrape Ubersuggest SEO Data with Python
This guide is for experimental and learning purposes. We haven’t engaged in and don’t condone any commercial activities with the data extracted using these methods.
Ubersuggest, built by SEO guru Neil Patel, is one of the most popular SEO tools on the web.
But try scraping it, and you’ll hit a wall.
Not just one wall, actually. Several layers of anti-bot protection, JavaScript rendering traps, and authorization tokens that expire without warning.
In this guide, we’ll walk through how to bypass all of it using Scrape.do and show you how to access Ubersuggest’s internal API for keyword data, CPCs, SERP analysis, and more.
Why Is Scraping Ubersuggest Difficult?
Ubersuggest isn’t just trying to block bots; it’s trying to make sure you never even get to the content.
It checks your IP, inspects your headers, monitors browser behavior, and runs multiple layers of JavaScript challenges before serving any data.
Let’s break it down.
Heavy Protection Against Bots
Ubersuggest performs several checks the moment your request hits their server.
First, your IP address is inspected for quality.
If you’re using a known datacenter IP (like from AWS, Azure, or DigitalOcean), you’re likely to get blocked instantly. Residential and ISP IPs might survive a bit longer—but only if your requests also look human.
Then comes the header inspection.
Ubersuggest expects a real browser signature: consistent headers like `User-Agent`, `Accept-Language`, `Sec-Ch-Ua`, and others that match genuine browser traffic. If these look off (or worse, too clean or too fake), your request gets flagged.
Most basic scraping scripts using `requests` or `curl` will fail here; you’ll get a 403 or a cryptic yet promotional error page.
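To illustrate, compare a bare request with one that at least carries a browser-like signature. This is only a sketch: the header values below are illustrative of a desktop Chrome profile (exact versions vary), and matching headers alone still won’t get you past the JavaScript checks that follow:

```python
import requests

# A bare request: no browser signature at all, easy to flag
bare = requests.get("https://app.neilpatel.com/en/ubersuggest/")
print(bare.status_code)  # typically 403 or an error page

# Browser-like headers (illustrative values; real browsers keep these consistent)
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Accept-Language": "en-US,en;q=0.9",
    "Sec-Ch-Ua": '"Chromium";v="124", "Google Chrome";v="124", "Not-A.Brand";v="99"',
}
better = requests.get("https://app.neilpatel.com/en/ubersuggest/", headers=headers)
print(better.status_code)  # may get further, but still faces the JS challenges
```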
JavaScript Challenges and Adblocker Check
Even if you manage to slip past the initial bot checks with the right IP and headers, Ubersuggest doesn’t serve the content right away.
Instead, it loads a barebones page and runs a heavy JavaScript bundle that decides what you’re allowed to see.
You need a JavaScript-capable browser just to reach the actual data. So if you’re using something like `requests` or even `curl`, you’ll just see a loading spinner or an empty shell.
To get around this, most scrapers turn to headless browsers with stealth plugins, like Puppeteer with `puppeteer-extra-plugin-stealth` or Playwright in stealth mode. These can render the page, execute JS, and reveal the content.
But Ubersuggest has one more trick: adblocker detection.
If it detects that your browser has blocked an ad or certain trackers, it simply won’t serve you the real content. You’ll be stuck looking at a blank result or redirected elsewhere.
And even if you get through all that, the real data isn’t embedded in the HTML. Everything is loaded dynamically through JavaScript-based API calls from multiple sources, so scraping the visible page directly still won’t give you the full dataset.
How Scrape.do Bypasses These
Scraping Ubersuggest is hard.
But Scrape.do makes it simple by doing the heavy lifting for you.
Here’s how it breaks through the layers:
- 🔁 Rotating proxies and headers: Every request goes out with a fresh IP (residential, mobile, or ISP) and realistic headers that match real browser traffic. No setup needed.
- 📺 Full browser rendering: Scrape.do can render the page like a real browser using `render=true`, executing JavaScript, loading dynamic content, and waiting until the page is fully ready.
- 🔐 Custom headers and authorization: If the data requires a Bearer token or special access headers, you can attach them using Scrape.do’s `extraHeaders=true` feature. Just prefix any custom header with `sd-` and Scrape.do handles the rest.
- 📃 Clean JSON output: Instead of manually parsing a rendered page, you can scrape the internal API endpoints directly. Scrape.do handles the request and returns structured, ready-to-use JSON so you get only what you need: volume, CPC, SERP, or keyword ideas.
Whether you’re scraping one keyword or scaling across thousands, Scrape.do lets you bypass the blocks, keep your scraper simple, and focus on the data.
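To make that concrete, here’s a minimal sketch of a rendered fetch through Scrape.do, using the `render=true` and `super=true` parameters covered later in this guide (replace `<your-token>` with your Scrape.do API token):

```python
import requests
import urllib.parse

scrape_token = "<your-token>"
target_url = urllib.parse.quote_plus("https://app.neilpatel.com/en/ubersuggest/")

# render=true asks Scrape.do to execute JavaScript like a real browser;
# super=true routes the request through premium (residential/mobile) proxies
api_url = f"https://api.scrape.do/?token={scrape_token}&url={target_url}&render=true&super=true"
html = requests.get(api_url).text
print(html[:500])  # fully rendered HTML, not the empty loading shell
```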
Scrape Ubersuggest SEO Data
Instead of trying to parse rendered HTML or battle against DOM obfuscation, the best way to extract Ubersuggest data is to go straight to the source: their internal API.
One of the key endpoints used by the frontend is:
```
https://app.neilpatel.com/api/keyword_info?keyword=ubersuggest&language=en&locId=2840&withGlobalSVBreakdown=true
```
This endpoint returns a full keyword overview including:
- Search volume
- Cost-per-click (CPC)
- Search breakdown by location
All you need to do is set up a scraper that can access this API and pass the correct authorization header.
Prerequisites
You won’t need much to get started, just a few essentials:
- The `requests` library to make HTTP requests
- A Scrape.do API token (you can get 1000 monthly free credits here)
- A Bearer token that lets you access Ubersuggest’s internal API
We’ll first show you how to find that Bearer token automatically. Then we’ll send a request with Scrape.do using that token to extract the data we need.
Finding a Bearer Token
Ubersuggest’s internal APIs are protected with a Bearer token—a unique access key generated dynamically when the page loads.
To scrape these APIs, we need to get a valid token first.
Fortunately, there’s a public endpoint that issues one without logging in:
```python
import requests

# Scrape.do token and target URL
scrape_token = "<your-token>"
target_url = "https://app.neilpatel.com/api/get_token"

# Scrape.do wrapper URL
api_url = f"https://api.scrape.do/?token={scrape_token}&url={target_url}"

# Send the request
response = requests.get(api_url)
response.raise_for_status()

# Parse and print the token
data = response.json()
print("Token:", data["token"])
```
You’ll get something like this:
```
Token: app#unlogged__XXXXXXXXXXXXXXXXXXXXXXXXX
```
This token will work for a good number of API calls, but it’s temporary and will eventually expire.
When it does, just repeat the same request to get a fresh one, or refresh it automatically every few calls.
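Here’s what that automatic refresh might look like as a small helper (a minimal sketch; the refresh interval is arbitrary, and in practice you’d also refresh whenever a request starts failing):

```python
import requests

scrape_token = "<your-token>"
TOKEN_URL = "https://app.neilpatel.com/api/get_token"

_token, _calls = None, 0

def get_bearer_token(refresh_every=25):
    """Return a cached Bearer token, refreshing it every N calls."""
    global _token, _calls
    if _token is None or _calls >= refresh_every:
        # Same token request as above, wrapped in Scrape.do
        api_url = f"https://api.scrape.do/?token={scrape_token}&url={TOKEN_URL}"
        resp = requests.get(api_url)
        resp.raise_for_status()
        _token = resp.json()["token"]
        _calls = 0  # reset the counter after a refresh
    _calls += 1
    return _token
```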
⚠️ If Ubersuggest ever blocks this token endpoint or changes how it works, you can still extract a valid token from the page itself. We’ll cover that fallback method in the last section.
Sending Your Request
Once you have your Bearer token, you’re ready to scrape the Ubersuggest keyword data.
We’ll use Scrape.do’s `extraHeaders=true` feature, which allows you to attach custom headers to your request. Just prefix them with `sd-`, and Scrape.do will handle the rest.
Here’s the full code to send the request and extract what we need:
```python
import requests
import urllib.parse

# Scrape.do token and Bearer token
scrape_token = "<your-token>"
bearer_token = "app#unlogged__XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # Replace with real token

# Target API URL
target_url = "https://app.neilpatel.com/api/keyword_info?keyword=ubersuggest&language=en&locId=2840&withGlobalSVBreakdown=true"
encoded_url = urllib.parse.quote_plus(target_url)

# Scrape.do API call with Authorization header
api_url = f"https://api.scrape.do/?token={scrape_token}&url={encoded_url}&extraHeaders=true"
headers = {
    "sd-Authorization": f"Bearer {bearer_token}"
}

# Make request and parse response
response = requests.get(api_url, headers=headers)
data = response.json()

# Extract and print volume and CPC
keyword_info = data.get("keywordInfo", {})
volume = keyword_info.get("volume")
cpc = keyword_info.get("cpc")
print("Search Volume:", volume)
print("CPC:", cpc)
```
Expected output:
```
Search Volume: 27100
CPC: 6.67
```
That’s it: you now have access to real-time keyword data from Ubersuggest’s backend.
Get Access to Even MORE SEO Data
The `keyword_info` endpoint is just the beginning. Ubersuggest exposes several other internal API endpoints that return different kinds of SEO data, all accessible with the same Bearer token and Scrape.do setup, though some may require extra headers.
Here are some of the key endpoints you can hit:
```
# General keyword info
https://app.neilpatel.com/api/keyword_info?keyword=ubersuggest&language=en&locId=2840&withGlobalSVBreakdown=true

# Content ideas (based on the keyword)
https://app.neilpatel.com/api/content_ideas_pages

# SERP results (1–100)
https://app.neilpatel.com/api/serp_analysis?keyword=seo%20breakdown&locId=2840&language=en&refresh=false

# Top 10 results backlink breakdown
https://app.neilpatel.com/api/domain_counts

# Related keywords and questions
https://app.neilpatel.com/api/keyword_suggestions_info_task_status
```
Some of these include the keyword directly in the URL (like `serp_analysis`); others don’t (like `content_ideas_pages`). In both cases, you’ll often need to include specific headers to make the request work.
That’s where Scrape.do’s `extraHeaders=true` flag comes in: add anything you need and let the API handle the rest.
Once you have the right endpoint and a valid Bearer token, you can extract everything from related keywords and content ideas to backlink data and SERP snapshots.
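For instance, querying the `serp_analysis` endpoint follows the same pattern as `keyword_info`. This is a sketch: the shape of the JSON it returns isn’t documented here, so print the raw response first and adapt your parsing, adding any extra `sd-` headers the endpoint turns out to need:

```python
import requests
import urllib.parse

scrape_token = "<your-token>"
bearer_token = "app#unlogged__XXXXXXXXXXXXXXXXXXXXXXXXX"  # from /api/get_token

# Same pattern as keyword_info, just a different endpoint
target_url = "https://app.neilpatel.com/api/serp_analysis?keyword=seo%20breakdown&locId=2840&language=en&refresh=false"
encoded_url = urllib.parse.quote_plus(target_url)
api_url = f"https://api.scrape.do/?token={scrape_token}&url={encoded_url}&extraHeaders=true"

headers = {"sd-Authorization": f"Bearer {bearer_token}"}
response = requests.get(api_url, headers=headers)

# Inspect the raw structure first; field names vary per endpoint
print(response.json())
```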
Possible Errors and Fixes
Scraping internal APIs is powerful but also fragile.
Ubersuggest can change its structure anytime, expire tokens faster, or block endpoints without notice.
Here’s how to handle the most common issues.
Bearer Token Expired / API Call Blocked or Changed
If your token stops working or the endpoint returns an error, you’ll need to fetch a fresh token and retry the request.
Here’s a simple system that:
- Tries using a cached token,
- If it fails, gets a new one,
- Then retries the request.
```python
import requests
import urllib.parse

# Scrape.do token
scrape_token = "<your-token>"

# Keyword API and Token API URLs
keyword_url = "https://app.neilpatel.com/api/keyword_info?keyword=ubersuggest&language=en&locId=2840&withGlobalSVBreakdown=true"
token_url = "https://app.neilpatel.com/api/get_token"

# Encode keyword URL
encoded_keyword_url = urllib.parse.quote_plus(keyword_url)
scrape_keyword_api = f"https://api.scrape.do/?token={scrape_token}&url={encoded_keyword_url}&extraHeaders=true"

# Function to fetch a fresh Bearer token
def fetch_token():
    token_api = f"https://api.scrape.do/?token={scrape_token}&url={token_url}"
    resp = requests.get(token_api)
    resp.raise_for_status()
    return resp.json()["token"]

# Function to get keyword data
def get_keyword_data(bearer_token):
    headers = {"sd-Authorization": f"Bearer {bearer_token}"}
    resp = requests.get(scrape_keyword_api, headers=headers)
    if resp.status_code != 200:
        raise RuntimeError(f"Request failed with status {resp.status_code}")
    return resp.json()

# Run with token handling: try the cached token first, refresh on failure
cached_token = None
try:
    if cached_token is None:
        cached_token = fetch_token()
    data = get_keyword_data(cached_token)
except Exception:
    cached_token = fetch_token()
    data = get_keyword_data(cached_token)

# Extract and print volume and CPC
keyword_info = data.get("keywordInfo", {})
volume = keyword_info.get("volume")
cpc = keyword_info.get("cpc")
print("Search Volume:", volume)
print("CPC:", cpc)
```
If Ubersuggest makes small changes, this retry logic keeps your scraper alive.
API Structure Changed
If Ubersuggest suddenly breaks your scraper and none of the previous methods work, the API may have changed.
You’ll need to investigate manually.
Here’s how to reverse-engineer it:
- Open Developer Tools: Visit Ubersuggest, open DevTools (`F12`), go to the Network tab, and filter by `fetch` or `xhr`.
- Trigger a fresh search: Type a new keyword into the search bar and watch the Network tab; the requests to internal APIs will show up.
- Look at headers and URLs: Click into the request. Copy the full API URL and note any special headers, especially `Authorization`, `Origin`, `Referer`, or custom keys.
- Rebuild your scraper: Use Scrape.do with `extraHeaders=true` and add every necessary header by prefixing it with `sd-`:

```python
headers = {
    "sd-Authorization": "Bearer <token>",
    "sd-Origin": "https://app.neilpatel.com",
    "sd-Referer": "https://app.neilpatel.com/en/ubersuggest/"
}
```
If all else fails, there’s a fallback:
You can scrape the page directly and extract the token or data from rendered JavaScript. Just modify your Scrape.do call like this:
```
&super=true&render=true&waitUntil=networkidle0&returnJSON=true
```
That tells Scrape.do to:
- Use premium proxies (`super=true`)
- Render JS like a real browser (`render=true`)
- Wait until the page fully loads (`waitUntil=networkidle0`)
- Return all browser network requests (`returnJSON=true`)
Then, just search for the Bearer token inside the returned JSON, extract it, and use it in future API requests.
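Here’s a minimal sketch of that fallback. It assumes the token still follows the `app#unlogged__` format shown earlier, and searches the raw response text instead of relying on the exact shape of the returned JSON:

```python
import re
import requests
import urllib.parse

scrape_token = "<your-token>"
page_url = urllib.parse.quote_plus("https://app.neilpatel.com/en/ubersuggest/")

# Render the page and capture all browser network traffic
api_url = (
    f"https://api.scrape.do/?token={scrape_token}&url={page_url}"
    "&super=true&render=true&waitUntil=networkidle0&returnJSON=true"
)
response = requests.get(api_url)

# Look for the token pattern anywhere in the returned JSON text;
# adjust the character class if tokens include other characters
match = re.search(r"app#unlogged__[A-Za-z0-9_-]+", response.text)
if match:
    print("Token:", match.group(0))
else:
    print("No token found; inspect response.text manually")
```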
⚠️ If that still doesn’t work, try adding `blockResources=false` so Scrape.do loads ads and trackers as a real browser would, keeping Ubersuggest’s adblocker detection from interfering with the rendering.
Conclusion
Ubersuggest is protected by IP reputation checks, JavaScript rendering, adblocker traps, and rotating token requirements.
But with Scrape.do, you can skip the complexity and go straight to the data.
- Bypass all bot protection using rotating residential proxies
- Render JavaScript-heavy pages or extract internal APIs directly
- Attach custom headers like Bearer tokens with zero setup
- Get structured JSON responses: clean, fast, and ready to use
Start scraping the toughest domains now with 1000 free credits.