Scrape Google Search Results Using Python (Organic, Paid, FAQs, Related Terms)
Google Search is where your users go, where your competitors live, and where the most valuable data on the internet hides in plain sight.
But Google doesn’t hand that data over easily.
From organic links and paid ads to FAQs and related terms, every SERP is packed with insights—if you know how to extract them.
In this guide, you’ll learn how to scrape Google Search results using Python. We’ll cover organic listings, ads, “People also ask”, related searches, and more.
We’ll use clean, structured requests powered by Scrape.do.
Why You Need to Scrape Google Search Results
Search results aren’t just answers; they’re live, structured snapshots of what people care about right now.
Scraping them gives you access to that attention.
Improving SEO Strategy
If you’re serious about search, relying on third-party rank trackers isn’t enough.
Scraping actual SERPs lets you see how your pages are displayed, what titles and descriptions Google shows, which competitors appear next to you, and whether features like snippets or FAQs are pushing your links down.
You get full control and visibility on your terms, and in real time.
Reviewing Marketing Campaigns
Organic and paid listings live on the same page, and if you’re running ads, you need to see how they show up.
Scraping lets you track your ad placements, identify what competitors are doing, monitor changes over time, and even archive creative variations.
Whether you’re running hundreds of campaigns or just a few key terms, it’s a vital layer of insight most dashboards don’t give you.
Training LLMs or AI Systems
For LLM pipelines or real-time generative tools, scraped SERPs are an excellent source of fresh, intent-rich text.
You can use them to generate domain-specific corpora, build RAG agents that ground outputs with current info, or train ranking models on how Google orders results.
Search engines are optimized for relevance, and scraping them means you’re capturing high-quality data that’s curated by design.
Discovering Content and Trends
Google is a content aggregator.
If a new article drops, a product launches, or a trend starts moving, it will show up there first, often before it gets indexed elsewhere.
Scraping lets you discover URLs you wouldn’t catch through crawling, and surface content aligned with real user searches. It’s an underrated but powerful way to stay ahead.
Prerequisites & Setup
Before we start scraping, let’s make sure your environment is ready.
We’ll be using Python, along with two libraries:
- requests to send HTTP requests
- BeautifulSoup from bs4 to parse the HTML
Install them with:
pip install requests beautifulsoup4
We’re also going to use Scrape.do to handle Google’s anti-bot protections. Scrape.do takes care of:
- rotating proxies,
- header management,
- and bypassing rate limits or redirects
…so you can send a single request and get a clean response.
If you don’t already have an API token, sign up here for free and get 1000 monthly credits. You’ll need to replace "<your-token>" in the code examples with your actual Scrape.do token.
Once you have your token and libraries installed, you’re ready to start.
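If you want to confirm your setup before moving on, here’s a minimal sketch that routes a request through the same Scrape.do endpoint used throughout this guide. The target URL here is just a neutral example; any page works for a quick check:

import requests
import urllib.parse

# Placeholder token; replace with your actual Scrape.do token
scrape_token = "<your-token>"

# Any target URL works for a quick connectivity check
target_url = "https://www.example.com/"

# Scrape.do expects the (URL-encoded) target in the `url` parameter
api_url = f"https://api.scrape.do/?token={scrape_token}&url={urllib.parse.quote(target_url, safe='')}"

response = requests.get(api_url)
print(response.status_code)  # 200 means your token and setup are working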
Scraping First Page Organic Results
Let’s start with the core of Google Search: the organic results.
Each organic search result is rendered inside a div with the class Ww4FFb, which acts as a reliable anchor when parsing the HTML. Within this block, you’ll find the title, URL, and description: everything you need to build your own search scraper.
Here’s a simple script that does exactly that using Scrape.do and BeautifulSoup:
import requests
import urllib.parse
from bs4 import BeautifulSoup
# Your Scrape.do token and search query
scrape_token = "<your-token>"
query = "python web scraping"
# Encode the search query and build Google URL
encoded_query = urllib.parse.quote_plus(query)
google_url = f"https://www.google.com/search?q={encoded_query}&start=0" # start=0 for first page
# Scrape.do wrapper URL - properly encode the Google URL
api_url = f"https://api.scrape.do/?token={scrape_token}&url={urllib.parse.quote(google_url, safe='')}"
# Send the request
response = requests.get(api_url)
response.raise_for_status()
# Parse the HTML
soup = BeautifulSoup(response.text, 'html.parser')
# Find all search results with Ww4FFb class
search_results = soup.find_all('div', class_=lambda x: x and 'Ww4FFb' in x)
# Extract data from each result
for position, result in enumerate(search_results, 1):
    # Get title from h3 tag (skip gracefully if the block has no h3)
    title_element = result.find('h3')
    title = title_element.get_text(strip=True) if title_element else "No title"
    # Get URL from link
    link_element = result.find('a')
    url = link_element.get('href') if link_element else "No URL"
    # Get description/snippet
    desc_element = result.find(class_='VwiC3b')
    description = desc_element.get_text(strip=True) if desc_element else "No description"
    print(f"{position}. {title}")
    print(f" URL: {url}")
    print(f" Description: {description}")
    print()
When run, this will print something like:
1. Python Web Scraping Tutorial
URL: https://www.geeksforgeeks.org/python-web-scraping-tutorial/
Description: Jan 2, 2025—Python web scraping refers to the process of extracting data from websites using Python...
2. How to start Web scraping with python? : r/learnpython
URL: https://www.reddit.com/r/learnpython/comments/qzr8ir/how_to_start_web_scraping_with_python/
Description: Learn the basic html elements that build up a website. Inspect the element on the webpage that...
3. Python Web Scraping: Full Tutorial With Examples (2025)
URL: https://www.scrapingbee.com/blog/web-scraping-101-with-python/
Description: May 27, 2025—Learn about web scraping in Python with this step-by-step tutorial...
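If you’d rather keep these results than just print them, a small addition collects everything into a list of dictionaries and writes it to a JSON file. This is a minimal sketch that reuses the search_results variable from the script above; the output filename is an arbitrary choice:

import json

organic_results = []
for position, result in enumerate(search_results, 1):
    title_element = result.find('h3')
    link_element = result.find('a')
    desc_element = result.find(class_='VwiC3b')
    organic_results.append({
        'position': position,
        'title': title_element.get_text(strip=True) if title_element else None,
        'url': link_element.get('href') if link_element else None,
        'description': desc_element.get_text(strip=True) if desc_element else None,
    })

# Save the structured results so you can analyze them later
with open('organic_results.json', 'w', encoding='utf-8') as f:
    json.dump(organic_results, f, ensure_ascii=False, indent=2)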
You now have a working scraper that extracts the full set of organic results from the first page of Google. In the next step, we’ll expand it to go beyond page one.
Scraping All Organic Google Search Results
Scraping a single page is useful but often not enough.
If you want the full picture, you’ll need to go beyond the first 10 results and paginate through all available pages.
Fortunately, Google structures its result pages in a predictable way: each new page is accessible by increasing the start parameter by 10 (e.g., start=10, start=20, etc.).
We’ll take the code from the previous step and modify it so it loops through each page until there are no more results.
Here’s what we changed:
- Introduced a while True loop to go through result pages
- Stops automatically when no new results are found
- Stores and prints every result with position, title, URL, and description
Here’s the complete code:
import requests
import urllib.parse
from bs4 import BeautifulSoup
import time
# Your Scrape.do token and search query
scrape_token = "<your-token>"
query = "python web scraping"
# Starting page
start = 0
all_results = []
while True:
    print(f"Scraping page {(start // 10) + 1} (results {start + 1}-{start + 10})...")
    # Encode the search query and build Google URL
    encoded_query = urllib.parse.quote_plus(query)
    google_url = f"https://www.google.com/search?q={encoded_query}&start={start}"
    # Scrape.do wrapper URL - properly encode the Google URL
    api_url = f"https://api.scrape.do/?token={scrape_token}&url={urllib.parse.quote(google_url, safe='')}"
    # Send the request
    response = requests.get(api_url)
    response.raise_for_status()
    # Parse the HTML
    soup = BeautifulSoup(response.text, 'html.parser')
    # Find all search results with Ww4FFb class
    search_results = soup.find_all('div', class_=lambda x: x and 'Ww4FFb' in x)
    # Break if no results found
    if not search_results:
        print("No more results found!")
        break
    # Extract data from each result (skip gracefully if a block lacks a title or link)
    for result in search_results:
        title_element = result.find('h3')
        title = title_element.get_text(strip=True) if title_element else "No title"
        link_element = result.find('a')
        url = link_element.get('href') if link_element else "No URL"
        desc_element = result.find(class_='VwiC3b')
        description = desc_element.get_text(strip=True) if desc_element else "No description"
        all_results.append({
            'position': len(all_results) + 1,
            'title': title,
            'url': url,
            'description': description
        })
    start += 10
    time.sleep(1)  # optional delay to mimic human browsing
# Print all results
print(f"\n=== Found {len(all_results)} total results ===\n")
for result in all_results:
    print(f"{result['position']}. {result['title']}")
    print(f" URL: {result['url']}")
    print(f" Description: {result['description']}")
    print()
This script goes through each search result page and stops automatically when it reaches the end.
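One practical note: a while True loop keeps requesting pages (and spending API credits) until Google stops returning results. If you only need the first few pages, it’s worth capping the loop. Here’s a minimal sketch; max_pages is our own variable, not a Google parameter:

max_pages = 5  # our own cap, not a Google parameter
start = 0

while start < max_pages * 10:
    print(f"Scraping page {(start // 10) + 1}...")
    # ...fetch and parse exactly as in the full script above...
    start += 10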
In the next section, we’ll switch focus from organic listings to paid ads—because scraping those requires a slightly different approach.
Scraping Search Ads from Google Search
Paid results on Google (search ads) appear above or below the organic results. They’re often just as valuable, especially if you’re analyzing competitor campaigns, tracking your own ads, or building datasets for ad intelligence.
Unlike organic results, ads don’t follow the same HTML structure. They live inside elements with the class uEierd, and the internal layout can vary slightly between different ad formats.
But there’s a pattern.
We’ll use Scrape.do to fetch the page, then extract all ad containers, and inside each, we’ll look for:
- The ad’s destination URL
- Its title
- The description text
- The displayed (vanity) URL
Let’s walk through the code:
import requests
import urllib.parse
from bs4 import BeautifulSoup
# Your Scrape.do token and search query
scrape_token = "<your-token>"
query = "python web scraping"
# Encode the search query and build Google URL
encoded_query = urllib.parse.quote_plus(query)
google_url = f"https://www.google.com/search?q={encoded_query}"
# Scrape.do wrapper URL
api_url = f"https://api.scrape.do/?token={scrape_token}&url={urllib.parse.quote(google_url, safe='')}"
# Send the request
response = requests.get(api_url)
response.raise_for_status()
# Parse the HTML
soup = BeautifulSoup(response.text, 'html.parser')
# Find all search ads using uEierd class
search_ads = soup.find_all('div', class_='uEierd')
print(f"Found {len(search_ads)} search ads")
# Extract ad information
for position, ad in enumerate(search_ads, 1):
    print(f"\n--- Ad {position} ---")
    # Extract URL
    url_element = ad.find('a')
    url = url_element.get('href') if url_element else "Not found"
    print(f"URL: {url}")
    # Extract title
    title_element = ad.find(['h3', 'div'], class_=lambda x: x and any(cls in str(x) for cls in ['CCgQ5', 'vCa9Yd', 'QfkTvb']))
    if not title_element:
        title_element = ad.find(['h1', 'h2', 'h3', 'h4', 'h5', 'h6'])
    title = title_element.get_text(strip=True) if title_element else "Not found"
    print(f"Title: {title}")
    # Extract description
    description_element = ad.find('div', class_=lambda x: x and any(cls in str(x) for cls in ['Va3FIb', 'r025kc', 'lVm3ye']))
    if not description_element:
        description_spans = ad.find_all('span')
        description_texts = [
            span.get_text(strip=True)
            for span in description_spans
            if len(span.get_text(strip=True)) > 20 and span.get_text(strip=True) not in ['Sponsored', 'Ad']
        ]
        description = ' '.join(description_texts) if description_texts else "Not found"
    else:
        description = description_element.get_text(strip=True)
    print(f"Description: {description}")
    # Extract displayed (vanity) URL
    display_url_element = ad.find('span', class_=lambda x: x and 'qzEoUe' in str(x))
    display_url = display_url_element.get_text(strip=True) if display_url_element else "Not found"
    print(f"Display URL: {display_url}")
And here’s a simulated output for two sample ads (for clarity):
--- Ad 1 ---
URL: https://www.datacamp.com/learn/python
Title: Learn Python for Data Science | Online Courses - DataCamp
Description: Gain hands-on experience with Python. Start learning with interactive exercises and projects today.
Display URL: www.datacamp.com
--- Ad 2 ---
URL: https://www.udemy.com/course/python-for-data-science/
Title: Python for Data Science | Udemy
Description: Enroll in our bestselling course. Taught by industry experts. Lifetime access included.
Display URL: www.udemy.com
These are just examples—but the structure stays the same across most queries.
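Ad destination URLs often carry tracking parameters, so when you’re grouping ads by advertiser it helps to reduce each URL to its bare domain. Here’s a small sketch using Python’s urllib.parse; advertiser_domain is our own helper name, and the example URL is illustrative:

from urllib.parse import urlparse

def advertiser_domain(ad_url):
    """Reduce an ad's destination URL to its bare hostname (our own helper, not part of any library)."""
    host = urlparse(ad_url).netloc
    return host.removeprefix('www.')  # removeprefix requires Python 3.9+

print(advertiser_domain("https://www.datacamp.com/learn/python?utm_campaign=brand"))  # datacamp.com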
Next up, we’ll move on to extracting Google’s FAQ blocks and related search terms—two highly underrated sources of insight.
Scraping FAQs and Related Search Terms
Google doesn’t just show links; it also shows what users are really asking.
Boxes like “People also ask” and related searches at the bottom of the page give you direct insight into search intent.
They reveal follow-up questions, keyword variations, and what else your audience is curious about. For SEO, content strategy, and user research, this data is gold.
Scrape Frequently Asked Questions from Google
Google’s FAQ block, the one with expandable questions, is marked with a specific jsname attribute value: yEVEwb.
This is a good target for extracting the question texts themselves. While answers are loaded dynamically when you expand them, the questions are already embedded in the page, so we can scrape them without JavaScript rendering.
Here’s the code:
import requests
import urllib.parse
from bs4 import BeautifulSoup
# Your Scrape.do token and search query
scrape_token = "<your-token>"
query = "python web scraping"
# Encode the search query and build Google URL
encoded_query = urllib.parse.quote_plus(query)
google_url = f"https://www.google.com/search?q={encoded_query}"
# Scrape.do wrapper URL - properly encode the Google URL
api_url = f"https://api.scrape.do/?token={scrape_token}&url={urllib.parse.quote(google_url, safe='')}"
# Send the request
response = requests.get(api_url)
response.raise_for_status()
# Parse the HTML
soup = BeautifulSoup(response.text, 'html.parser')
# Find all FAQ sections with yEVEwb
faq_results = soup.find_all('div', jsname='yEVEwb')
# Extract FAQ questions
for position, faq in enumerate(faq_results, 1):
    question_element = faq.find('span')
    if question_element:
        question = question_element.get_text(strip=True)
        print(f"{position}. {question}")
Here’s what the output looks like:
1. Is Python good for web scraping?
2. How to scrape the web with Python?
3. Why is data scraping illegal?
4. How long does it take to learn web scraping in Python?
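These questions also make natural follow-up queries. If you want to dig one level deeper, you can feed each one back through the same Scrape.do pattern used earlier and scrape the results it returns. A rough sketch; build_google_api_url is our own helper name, not part of any library:

import urllib.parse

def build_google_api_url(scrape_token, query, start=0):
    """Wrap a Google search URL for the given query in a Scrape.do API call."""
    encoded_query = urllib.parse.quote_plus(query)
    google_url = f"https://www.google.com/search?q={encoded_query}&start={start}"
    return f"https://api.scrape.do/?token={scrape_token}&url={urllib.parse.quote(google_url, safe='')}"

# Example: queue each "People also ask" question as its own search
questions = [
    "Is Python good for web scraping?",
    "How to scrape the web with Python?",
]
for q in questions:
    print(build_google_api_url("<your-token>", q))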
Scrape Related Search Terms from Google
If you scroll to the bottom of any Google results page, you’ll see a block of suggested queries, often labeled “Searches related to…”
These related terms are excellent for expanding your keyword coverage. They reflect how users explore a topic, what phrases they use, and where their curiosity goes next.
To extract them, we look for div elements with the classes b2Rnsc vIifob. Each one contains a single search suggestion.
Here’s how to scrape them:
import requests
import urllib.parse
from bs4 import BeautifulSoup
# Your Scrape.do token and search query
scrape_token = "<your-token>"
query = "python web scraping"
# Encode the search query and build Google URL
encoded_query = urllib.parse.quote_plus(query)
google_url = f"https://www.google.com/search?q={encoded_query}"
# Scrape.do wrapper URL
api_url = f"https://api.scrape.do/?token={scrape_token}&url={urllib.parse.quote(google_url, safe='')}"
# Send the request
response = requests.get(api_url)
response.raise_for_status()
# Parse the HTML
soup = BeautifulSoup(response.text, 'html.parser')
# Find all related search terms
related_searches = soup.find_all('div', class_='b2Rnsc vIifob')
print(f"Found {len(related_searches)} related search terms")
# Extract related search terms
for position, search_term in enumerate(related_searches, 1):
    # Use a space separator so text split across nested elements doesn't run together
    term_text = search_term.get_text(separator=' ', strip=True)
    print(f"{position}. {term_text}")
And here’s the sample output:
Found 8 related search terms
1. Python web scraping book
2. Python web scraping library
3. Python web scraping Selenium
4. Web scraping using Python BeautifulSoup
5. Web scraping Python w3schools
6. Python web scraping github
7. Web scraping with Python PDF
8. Python web scraping examples
You can use these suggestions to generate long-tail keywords, cluster related content, or explore adjacent topics your users are already searching for.
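Before adding these terms to a keyword list, it’s worth normalizing and de-duplicating them, since variants often differ only in casing or spacing. A small sketch of one reasonable approach:

related_terms = [
    "Python web scraping book",
    "python web scraping Book",
    "Web scraping with Python PDF",
]

# Normalize casing/whitespace and drop duplicates while preserving order
seen = set()
keywords = []
for term in related_terms:
    normalized = " ".join(term.lower().split())
    if normalized not in seen:
        seen.add(normalized)
        keywords.append(normalized)

print(keywords)  # ['python web scraping book', 'web scraping with python pdf']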
Conclusion
Scraping Google Search gives you unmatched access to what people are searching, seeing, and clicking.
From organic results to ads, FAQs, and related terms, you now have the tools to extract it all cleanly and reliably.
And with Scrape.do, you don’t need to worry about:
- Rotating proxies or fake headers
- Getting blocked mid-way
- Handling JavaScript-rendered content
Just send the request and get the data.