Categories: Web scraping, Tutorials

How to Scrape Data with Python? Build Python Web Scraper

8 mins read Created Date: February 22, 2023   Updated Date: February 22, 2023

Web scraping automatically extracts predetermined data from the internet, and it can take your academic work and your business further. You can use it for a machine learning project, a price comparison tool, or any other idea that needs data. It is possible to collect data manually, but the sheer volume of content on the internet will quickly talk you out of it. If you don’t want to hire a web scraping service outright, you need to learn how to build a web scraper in code.

In this article, we at Scrape.do explain in detail how to create a web scraper in Python. We also show you how to extract data with BeautifulSoup and save everything to JSON and CSV files. It often still makes the most sense to use a web scraping service rather than build your own scraper, so get in touch with us!

What You Need to Create a Web Scraper

If you want to create a web scraper, first make sure that Python is installed on your computer. Python 3 comes preinstalled on most Linux distributions, including Ubuntu 20.04, but not on Windows or macOS. To check whether Python is on your computer, run “python3 --version” in a terminal; if it is installed, you will get output like “Python 3.8.2”.

To use your web scraper more conveniently and select specific data, install BeautifulSoup by running “pip3 install beautifulsoup4”. You will also need Selenium to get dynamically loaded content; install it with “pip3 install selenium”.

As the last step, make sure you have Google Chrome and ChromeDriver on your computer. If you want to use Selenium to scrape dynamically loaded content, you will need both.

How Do You Inspect the Pages of the Website You Want to Scrape?

Once everything you need for a web scraper is installed, choose a website that suits your needs. Keep in mind that every website structures its content differently, so the code you write will always need small adjustments per site. For this tutorial, we chose the top ten movies from IMDb’s top 250 movies list as example data. We will first get the titles from the list page and then extract data from each movie’s page; we will also need to use JavaScript to reach some of the data. To understand how the content is structured, right-click the first title in the list and select “Inspect Element”. You will see something like this:

[Screenshot: the browser developer tools panel showing the HTML of the first title in the list]

Then press CTRL+F and search the HTML structure for the table tag, because that part of the markup tells you how to reach the data. To get all the titles on the page, we need the CSS selector table tbody tr td.titleColumn a; the title is the innerText of each anchor the selector matches. To test the selector in the browser console, run the following code: document.querySelectorAll("table tbody tr td.titleColumn a")[0].innerText

Running that line in the console returns the text of the first title in the list, “The Shawshank Redemption”. Once you have this part, called the selector, you can start writing your Python code.

Using BeautifulSoup to Extract Statically Loaded Content

The top ten movies in IMDb’s top 250 movies list, which we picked for this example, are static content. To check whether content is static, press CTRL+U on the page (or select View Page Source) and look at the source: you will see that all the titles are already present in the HTML. You do not need to execute JavaScript to scrape static content, which makes it straightforward. You should use BeautifulSoup to scrape the top ten titles in this list.
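Here is a minimal sketch of that code, assuming the list page lives at https://www.imdb.com/chart/top/; we also send a browser-like User-Agent header, since some sites reject the default one:

import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0"}
page = requests.get("https://www.imdb.com/chart/top/", headers=headers)
soup = BeautifulSoup(page.content, "html.parser")

# The selector from the first step; keep only the first ten anchors.
first10 = soup.select("table tbody tr td.titleColumn a")[:10]
for anchor in first10:
    print(anchor.text)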

This code extracts the movie titles from the top ten of IMDb’s top 250 movies list, using the selector you obtained in the first step. It then loops through the matched anchors and prints the text of each one. You are on the right track if your output looks similar to this:

[Output: the ten movie titles, printed one per line]

Scraping Dynamically Loaded Content with Selenium

As technology has progressed, content on websites is increasingly loaded dynamically rather than statically. Dynamic loading improves page performance and user experience, but it also raises an extra barrier for scrapers. Because these sites do not include the dynamic content in the HTML returned by a simple request, scraping them is harder than you might expect. With Selenium, however, you can render the page in a real browser and scrape data from dynamic content with ease.

If you want to scrape dynamic content using Selenium, you must first know where your ChromeDriver is located. The code below does the same job as the static BeautifulSoup example, but this time Selenium makes the request. You then parse the content of the page with BeautifulSoup exactly as before:
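A sketch of what that looks like, with a placeholder ChromeDriver path you should replace with your own (on Selenium 4.6 and newer you can omit the path and let Selenium Manager resolve the driver):

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Hypothetical driver location; point this at your own chromedriver.
service = Service("/usr/local/bin/chromedriver")
driver = webdriver.Chrome(service=service)
driver.get("https://www.imdb.com/chart/top/")

# Hand the browser-rendered HTML to BeautifulSoup and parse it as before.
# We keep the driver open because the next snippets reuse it.
soup = BeautifulSoup(driver.page_source, "html.parser")
first10 = soup.select("table tbody tr td.titleColumn a")[:10]
for anchor in first10:
    print(anchor.text)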

Accessing Each Movie’s Page

Using the driver from the dynamic-content code above, you can call the click method on each link and open any movie page you want, as in the code below.
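Here is a sketch that reuses the driver from the previous snippet while it is still on the list page:

from selenium.webdriver.common.by import By

# Grab the anchors directly through Selenium and click the first one.
first_link = driver.find_elements(By.CSS_SELECTOR, "table tbody tr td.titleColumn a")[0]
first_link.click()
print(driver.current_url)   # we are now on the first movie's page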

While it’s fine to use this code, which simulates a click on the first movie link, we strongly recommend using driver.get instead. With driver.get you do not need the click() method to reach the other nine movies, which makes the job much easier. Clicking costs time and performance: after scraping the content of the first title you would have to navigate back to the list page just to click the second title, and so on. Instead of that time-consuming process, it is far more sensible to extract the links once and visit them directly.

As the page link for “The Shawshank Redemption”, the first movie in the top ten of IMDb’s top 250 movies list, use https://www.imdb.com/title/tt0111161/. From this page we want the year the movie was released and its runtime. You can use either BeautifulSoup or Selenium for this; as an example, this time we will prefer Selenium.

To find the year and the runtime, repeat the first step (right-click, Inspect Element) on the page of the movie you selected, then access the information with the following code:

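Here is a sketch that follows the driver.get advice above: it loops over the first ten anchors to collect their links, then visits each page. The metadata selector is illustrative, since IMDb changes its markup over time; repeat the Inspect Element step to confirm the current one.

from selenium.webdriver.common.by import By

driver.get("https://www.imdb.com/chart/top/")

# Collect the ten links first so we never have to navigate back.
first10 = driver.find_elements(By.CSS_SELECTOR, "table tbody tr td.titleColumn a")[:10]
links = []
for anchor in first10:
    links.append(anchor.get_attribute("href"))

for link in links:
    driver.get(link)
    # Illustrative selector for the year/runtime block next to the title.
    metadata = driver.find_elements(By.CSS_SELECTOR, "ul[data-testid='hero-title-block__metadata'] li")
    print([item.text for item in metadata])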

Extracting Dynamically Loaded Content by Using Selenium

Extracting dynamically loaded content is the next step in web scraping. You can find such dynamic content in the Editorial Lists section of any movie’s page. If you inspect the page, you will find an element there with a data-testid attribute set to firstListCardGroup-editorial. However, if you look directly at the page source, you will not find firstListCardGroup-editorial anywhere, because IMDb loads the Editorial Lists section dynamically. You can scrape the editorial list of each movie with the sample code below.
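A sketch using Selenium’s explicit waits, so the script pauses until JavaScript has rendered the section; the attribute value comes from the inspection described above:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get("https://www.imdb.com/title/tt0111161/")

# Wait up to ten seconds for JavaScript to render the section.
editorial = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, "[data-testid='firstListCardGroup-editorial']")
    )
)
print(editorial.text)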

How Do You Save Your Obtained Data as CSV and JSON?

Now that you have all the data you want, save it in JSON or CSV files for easier reading. To do this, write your content to new files using Python’s built-in json and csv modules, as in the code below:
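A minimal sketch, assuming the scraped results have been collected into a list of dictionaries (the field names and the sample entry are illustrative):

import csv
import json

# Illustrative structure: one dictionary per scraped movie.
movies = [
    {"title": "The Shawshank Redemption", "year": "1994", "runtime": "2h 22m"},
]

with open("movies.json", "w", encoding="utf-8") as f:
    json.dump(movies, f, indent=2)

with open("movies.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "year", "runtime"])
    writer.writeheader()
    writer.writerows(movies)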

3 Tips for Web Scraping to Make Your Job Easier

Since almost everything we’ve covered so far involves JavaScript rendering scenarios, we’ll focus on Selenium in this section. Let’s look at three web scraping tricks that can come in handy and make your job much easier.

1. Put Timeout Breaks Between Your Requests

If you bombard a server with hundreds of requests in a short time, you may run into a CAPTCHA or even get your IP address blocked, and there is no quick workaround for that in Python once it happens; you would have to change your code for good. To make your traffic to the website look more natural, put some timeout intervals between your requests. For this, you can use code like the following:
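A sketch that sleeps a random interval between page loads; the 2 to 5 second range is arbitrary, so tune it to the site:

import random
import time

from selenium import webdriver

driver = webdriver.Chrome()   # Selenium 4.6+ resolves the driver automatically
links = ["https://www.imdb.com/title/tt0111161/"]   # your extracted links

for link in links:
    driver.get(link)
    # ...scrape the page here...
    time.sleep(random.uniform(2, 5))   # pause 2-5 seconds between requests

driver.quit()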

2. Add Error Handling to Your Code

Since most websites are dynamic and can change their structure at any time, you should build error handling into any web scraper you plan to keep using. In Python this takes the form of try-except blocks. This kind of error handling comes in handy when you wait for an element to load and request it before it is there: the scraper can recover gracefully instead of crashing, and its behavior looks much more natural.
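A minimal sketch of such a try-except block, reusing the Selenium driver and the editorial-list selector from the earlier snippets:

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

try:
    element = driver.find_element(By.CSS_SELECTOR, "[data-testid='firstListCardGroup-editorial']")
except NoSuchElementException:
    # Not rendered yet, or the markup changed; recover instead of crashing.
    element = None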

3. Add Screenshot Code

If you want to take a screenshot of the web page you are scraping at any point, you can use the code below. Screenshots are also a great help when debugging dynamically loaded content.
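A sketch using Selenium’s built-in save_screenshot method; the URL and the filename are just examples, and the driver is the session from the earlier snippets:

driver.get("https://www.imdb.com/title/tt0111161/")
driver.save_screenshot("movie_page.png")   # writes a PNG to the working directory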