Categories: Web scraping, Tutorials

Advanced Web Scraping in Java: A Comprehensive Guide

27 mins read Created Date: September 16, 2024   Updated Date: September 18, 2024

Learn how to master web scraping in Java using tools like JSoup and Selenium, and optimize your projects with Scrape.do for efficient data extraction.

Web scraping has become a must-have skill for developers and data scientists. Extracting information from websites with code allows individuals and businesses to collect and analyze large amounts of data in a short time, helping them make better decisions.

In this guide, we will cover tools and libraries like JSoup, Selenium, and Java's HttpClient. We will also show how they work with Scrape.do to make your web scraping projects more efficient, with solutions to common issues such as handling dynamic content and managing sessions.

Setting Up the Environment

Before you start scraping the web in Java, make sure your development environment is properly set up. This section walks you through preparing your workspace, installing dependencies, and checking that all the tools required for web scraping are available.

  1. Installing Java Development Kit (JDK)

Ensure that you have the latest version of the JDK installed. You can download it from the Oracle website or use an open-source alternative like OpenJDK.

  2. Setting Up Maven or Gradle

For dependency management, you can use either Maven or Gradle. Below are the steps for setting up Maven:

  • Maven Setup:
    • Install Maven by downloading it from the official website and following the installation instructions.
    • Create a pom.xml file in your project root directory to manage dependencies.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>web-scraping</artifactId>
  <version>1.0-SNAPSHOT</version>


  <dependencies>
    <dependency>
      <groupId>org.jsoup</groupId>
      <artifactId>jsoup</artifactId>
      <version>1.14.3</version>
    </dependency>
    <dependency>
      <groupId>org.seleniumhq.selenium</groupId>
      <artifactId>selenium-java</artifactId>
      <version>3.141.59</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents.client5</groupId>
      <artifactId>httpclient5</artifactId>
      <version>5.1</version>
    </dependency>
  </dependencies>
</project>
  • Run the following command to build your Maven project.
mvn clean install

With Maven set up, the required libraries (JSoup, Selenium, and HttpClient) will be automatically downloaded and managed.

  • JSoup: a Java library for working with real-world HTML. It provides a convenient API for extracting and manipulating data, using the best of DOM, CSS, and jQuery-like methods.
  • Selenium: a portable framework for testing web applications. It provides tools for browser automation, essential for handling dynamic content.
  • HttpClient: a library for handling HTTP requests and responses, enabling you to interact with web servers directly.

Setting up Scrape.do

Scrape.do is a web scraping service that provides proxies, CAPTCHA solving, and IP rotation, which smooths out the scraping process. Follow the documentation here for detailed integration instructions. The steps below show how to configure Scrape.do in your Java project using Java's built-in HttpClient (available since Java 11).

  1. Sign up on the Scrape.do website and get your API key from the dashboard. This API key will be used to authenticate your requests to Scrape.do.
  2. Add the required dependencies. Since HttpClient is part of Java 11 and above, no external libraries are needed; just ensure your project uses Java 11 or higher.
  3. Configure your API key. You'll need to store your Scrape.do API key securely; a common approach is to keep it in an environment variable.

For example, you can set the environment variable as follows:

export SCRAPE_DO_API_KEY="your_api_key"

In your Java code, retrieve the API key with System.getenv():

String apiKey = System.getenv("SCRAPE_DO_API_KEY");

Once your environment is configured and HttpClient is available, you can set up Scrape.do by sending an HTTP request using the API key.

Here’s how to utilize Scrape.do and make a request with HttpClient in Java:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.io.IOException;
import java.net.URLEncoder;


public class ScrapeDoSetup {
   public static void main(String[] args) throws IOException, InterruptedException {
       String apiKey = System.getenv("SCRAPE_DO_API_KEY");
       String targetUrl = URLEncoder.encode("https://example.com", "UTF-8");


       // Create HttpClient instance
       HttpClient client = HttpClient.newHttpClient();


       // Build request to Scrape.do
       HttpRequest request = HttpRequest.newBuilder()
           .uri(URI.create("https://api.scrape.do?token=" + apiKey + "&url=" + targetUrl))
           .build();


       // Send request and receive response
       HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());


       // Print response
       System.out.println(response.body());
   }
}

This example shows how to use Scrape.do to make a request, allowing you to scrape content from a target URL. Scrape.do handles the complexities like proxy rotation and CAPTCHA solving.

HTTP Requests and Responses

In web scraping, handling HTTP requests and responses is a fundamental skill. This section will guide you through using Java’s HttpClient to send GET and POST requests, handle and parse JSON responses, and implement error handling and retry mechanisms.

Using Java’s HttpClient

Java’s HttpClient API, introduced in Java 11, is a modern, feature-rich API for sending HTTP requests and receiving responses.

Sending a GET Request:

Here’s a basic example of how to send a GET request:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.io.IOException;


public class HttpClientExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.com"))
            .build();


        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());


        System.out.println(response.body());
    }
}

Sending a POST Request:

For a POST request, you can include request body data:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpRequest.BodyPublishers;
import java.io.IOException;


public class HttpClientExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.com/api"))
            .header("Content-Type", "application/json")
            .POST(BodyPublishers.ofString("{\"key\":\"value\"}"))
            .build();


        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());


        System.out.println(response.body());
    }
}

Handling and Parsing JSON Responses

Often, web scraping involves dealing with JSON data. Use a Java JSON library, such as org.json or Jackson (com.fasterxml.jackson), to parse JSON responses.

Using org.json (add the org.json library as a dependency):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONObject;
import java.io.IOException;


public class JsonParsingExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://jsonplaceholder.typicode.com/todos/1"))
            .build();


        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        JSONObject jsonResponse = new JSONObject(response.body());


        System.out.println(jsonResponse.getInt("id"));
        System.out.println(jsonResponse.getString("title"));
    }
}
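
If you prefer Jackson, the equivalent parsing goes through ObjectMapper. A small sketch, assuming the jackson-databind dependency is on your classpath:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;


public class JacksonParsingExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://jsonplaceholder.typicode.com/todos/1"))
            .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Parse the JSON body into a tree of nodes
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(response.body());

        System.out.println(root.get("id").asInt());
        System.out.println(root.get("title").asText());
    }
}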

Error Handling and Retry Mechanisms

Implementing error handling and retry mechanisms ensures your scraping process is robust and reliable.

Basic Error Handling:

try {
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    if (response.statusCode() == 200) {
        System.out.println(response.body());
    } else {
        System.err.println("Error: " + response.statusCode());
    }
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
}

Retry Mechanism:

A simple retry mechanism can be implemented using a loop:

int maxRetries = 3;
int retryCount = 0;
boolean success = false;


while (retryCount < maxRetries && !success) {
    try {
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() == 200) {
            System.out.println(response.body());
            success = true;
        } else {
            System.err.println("Error: " + response.statusCode());
            retryCount++;
        }
    } catch (IOException | InterruptedException e) {
        retryCount++;
        e.printStackTrace();
    }


    if (!success && retryCount == maxRetries) {
        System.err.println("Max retries reached. Exiting.");
    }
}
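
A common refinement of the loop above is exponential backoff: wait a little longer after each failed attempt so you put less pressure on the server. A sketch reusing the same client and request variables:

int maxRetries = 3;
long backoffMillis = 1000; // initial pause of one second
boolean success = false;

while (!success) {
    int attempt = 0;
    for (attempt = 1; attempt <= maxRetries && !success; attempt++) {
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                System.out.println(response.body());
                success = true;
            } else {
                System.err.println("Attempt " + attempt + " failed with status " + response.statusCode());
            }
        } catch (IOException | InterruptedException e) {
            System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
        }

        if (!success && attempt < maxRetries) {
            try {
                Thread.sleep(backoffMillis); // pause before the next attempt
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            backoffMillis *= 2; // double the wait each time
        }
    }
    break; // leave the outer loop after one full round of attempts
}

if (!success) {
    System.err.println("Max retries reached. Exiting.");
}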

These examples provide a foundation for handling HTTP requests and responses in your Java web scraping projects.

HTML Parsing with JSoup

JSoup is a powerful Java library for working with real-world HTML. It lets you connect to a URL and parse the page's HTML through a convenient API for extracting and manipulating data, making it an essential tool for web scraping. You can navigate and manipulate the HTML structure much like jQuery does in JavaScript.

Fetching and Parsing HTML Documents

To start using JSoup, add the dependency to your Maven pom.xml or Gradle build.gradle file as outlined in the setup section. Here’s a basic example of fetching and parsing an HTML document:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;


import java.io.IOException;


public class JsoupExample {
    public static void main(String[] args) {
        try {
            Document doc = Jsoup.connect("https://example.com").get();
            System.out.println(doc.title());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
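
If you already have the HTML as a string (for example, a response body fetched earlier with HttpClient or Scrape.do), you can hand it straight to JSoup instead of connecting again. A minimal sketch:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;


public class JsoupParseStringExample {
    public static void main(String[] args) {
        // Imagine this string came from an earlier HttpClient or Scrape.do response
        String html = "<html><head><title>Sample</title></head>"
                + "<body><p>Parsed without a network call.</p></body></html>";

        Document doc = Jsoup.parse(html);
        System.out.println(doc.title());
        System.out.println(doc.select("p").text());
    }
}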

Selecting Elements Using CSS Selectors

JSoup supports powerful and flexible CSS selectors to find and extract elements from the HTML.

Selecting Elements:

Document doc = Jsoup.connect("https://example.com").get();
Elements links = doc.select("a[href]"); // Select all links
Elements paragraphs = doc.select("p"); // Select all paragraphs


for (Element link : links) {
    System.out.println(link.attr("href"));
}


for (Element paragraph : paragraphs) {
    System.out.println(paragraph.text());
}

Extracting Data from HTML Elements

You can extract various types of data from HTML elements, such as text, attributes, and HTML itself.

Extracting Text and Attributes:

Element link = doc.select("a").first();
String linkText = link.text(); // Extract link text
String linkHref = link.attr("href"); // Extract link href attribute

System.out.println("Link text: " + linkText);
System.out.println("Link href: " + linkHref);

Handling Different HTML Structures and Edge Cases

Web pages often have complex and inconsistent HTML structures. JSoup provides various methods to handle these scenarios.

Handling Nested Elements:

Elements articles = doc.select("div.article");
for (Element article : articles) {
    String title = article.select("h2.title").text();
    String content = article.select("div.content").text();
    System.out.println("Title: " + title);
    System.out.println("Content: " + content);
}

Handling Missing Elements:

Always check if an element exists before attempting to access it:

Element author = doc.select("div.author").first();
if (author != null) {
    System.out.println("Author: " + author.text());
} else {
    System.out.println("Author information not available");
}

Practical Example

Let’s put it all together with a practical example. This script will scrape article titles and links from a news website:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

import java.io.IOException;

public class NewsScraper {
    public static void main(String[] args) {
        String url = "https://example-site.com";
        try {
            Document doc = Jsoup.connect(url).get();
            Elements articles = doc.select("article");

            for (Element article : articles) {
                String title = article.select("h2.title").text();
                String link = article.select("a").attr("href");
                System.out.println("Title: " + title);
                System.out.println("Link: " + link);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

That covers the essentials of using JSoup for HTML parsing in Java web scraping projects. Its selector syntax and DOM API make it an excellent choice for navigating and extracting data from HTML documents.

Handling Dynamic Content with Selenium

Dynamic content, rendered by JavaScript, can pose a challenge for web scraping. Selenium WebDriver is a browser automation tool that lets us handle these dynamic elements.

Selenium WebDriver supports all major browsers and provides a programmable interface for interacting with web pages. This makes it ideal for scraping dynamic content that JSoup cannot handle directly.

Setting Up Selenium with Java

First, add Selenium to your Maven pom.xml file:

Maven Dependency:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.141.59</version>
</dependency>

Next, download the WebDriver binary for your preferred browser (e.g., ChromeDriver for Google Chrome) and add it to your system PATH.

Automating Browsers with Selenium

Here’s an example of how to use Selenium to open a browser, navigate to a page, and extract data:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.List;

public class SeleniumExample {
    public static void main(String[] args) {
        // Set the path to the ChromeDriver executable
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");

        // Create a new instance of the Chrome driver
        WebDriver driver = new ChromeDriver();

        // Navigate to a webpage
        driver.get("https://example.com");

        // Find elements by CSS selector
        List<WebElement> elements = driver.findElements(By.cssSelector("a[href]"));
        for (WebElement element : elements) {
            System.out.println(element.getAttribute("href"));
        }

        // Close the browser
        driver.quit();
    }
}

Waiting for Elements to Load

Web pages with dynamic content may take time to load. Selenium provides methods to wait for elements to be present before interacting with them.

Explicit Waits:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SeleniumWaitExample {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");
        WebDriver driver = new ChromeDriver();

        driver.get("https://example.com");

        // Wait up to 10 seconds for the element to be present
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement element = wait.until(ExpectedConditions.presenceOfElementLocated(By.id("dynamicElementId")));

        System.out.println(element.getText());

        driver.quit();
    }
}
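
Selenium also supports implicit waits, which apply a default timeout to every element lookup on the driver. Explicit waits are usually preferable for specific dynamic elements, but here is a brief sketch of the implicit variant:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;

public class SeleniumImplicitWaitExample {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");
        WebDriver driver = new ChromeDriver();

        // Every findElement call will now wait up to 10 seconds before giving up
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

        driver.get("https://example.com");
        System.out.println(driver.findElement(By.tagName("h1")).getText());

        driver.quit();
    }
}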

Extracting Data from Dynamic Pages

Using Selenium, you can handle complex web pages with JavaScript-rendered content. Here’s an example of scraping data from a dynamically loaded table:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.util.List;

public class DynamicContentScraper {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");
        WebDriver driver = new ChromeDriver();

        driver.get("https://example-dynamic-website.com");

        WebDriverWait wait = new WebDriverWait(driver, 10);
        wait.until(ExpectedConditions.presenceOfElementLocated(By.cssSelector("table.dynamic-table")));

        List<WebElement> rows = driver.findElements(By.cssSelector("table.dynamic-table tr"));
        for (WebElement row : rows) {
            List<WebElement> cells = row.findElements(By.tagName("td"));
            for (WebElement cell : cells) {
                System.out.print(cell.getText() + "\t");
            }
            System.out.println();
        }

        driver.quit();
    }
}

Integrating Scrape.do with Selenium

Scrape.do can simplify handling dynamic content by providing proxies, CAPTCHA solving, and IP rotation services. You can configure Scrape.do to work with Selenium by setting up a proxy:

import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ScrapeDoIntegration {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");

        Proxy proxy = new Proxy();
        proxy.setHttpProxy("proxy.scrape.do:8080");

        ChromeOptions options = new ChromeOptions();
        options.setProxy(proxy);

        WebDriver driver = new ChromeDriver(options);

        driver.get("https://example.com");

        // Your scraping code here

        driver.quit();
    }
}

Selenium’s ability to handle dynamic content makes it a crucial tool for scraping complex pages. By integrating Scrape.do, you can make your scraping solutions more efficient and reliable.

Managing Sessions and Cookies

Maintaining sessions and managing cookies is crucial for web scraping, especially when dealing with sites that require login or track user interactions. This section will cover techniques for maintaining sessions across multiple requests, managing cookies manually, and using HttpClient’s CookieHandler for session management.

Maintaining Sessions Across Multiple Requests

When scraping websites that require login, you need to maintain the session to ensure that subsequent requests are authenticated. This can be achieved by using Java’s HttpClient and managing cookies.

Using HttpClient to Maintain Sessions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.CookieManager;
import java.net.CookiePolicy;

public class SessionManagementExample {
    public static void main(String[] args) throws Exception {
        CookieManager cookieManager = new CookieManager();
        cookieManager.setCookiePolicy(CookiePolicy.ACCEPT_ALL);

        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(cookieManager)
                .build();

        // Login request
        HttpRequest loginRequest = HttpRequest.newBuilder()
                .uri(new URI("https://example.com/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("username=user&password=pass"))
                .build();

        HttpResponse<String> loginResponse = client.send(loginRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println("Login response: " + loginResponse.body());

        // Subsequent request using the same session
        HttpRequest dataRequest = HttpRequest.newBuilder()
                .uri(new URI("https://example.com/data"))
                .build();

        HttpResponse<String> dataResponse = client.send(dataRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println("Data response: " + dataResponse.body());
    }
}

Managing Cookies Manually

Sometimes you may need to handle cookies manually, especially if you are dealing with complex session management requirements.

Extracting and Using Cookies:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpHeaders;
import java.net.HttpCookie;
import java.util.List;

public class ManualCookieManagement {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Initial request to get cookies
        HttpRequest initialRequest = HttpRequest.newBuilder()
                .uri(new URI("https://example.com"))
                .build();

        HttpResponse<String> initialResponse = client.send(initialRequest, HttpResponse.BodyHandlers.ofString());
        HttpHeaders headers = initialResponse.headers();
        List<String> setCookieHeaders = headers.allValues("Set-Cookie");

        // Parse cookies
        StringBuilder cookieHeader = new StringBuilder();
        for (String setCookie : setCookieHeaders) {
            List<HttpCookie> cookies = HttpCookie.parse(setCookie);
            for (HttpCookie cookie : cookies) {
                if (cookieHeader.length() > 0) {
                    cookieHeader.append("; ");
                }
                cookieHeader.append(cookie.toString());
            }
        }

        // Subsequent request using cookies
        HttpRequest dataRequest = HttpRequest.newBuilder()
                .uri(new URI("https://example.com/data"))
                .header("Cookie", cookieHeader.toString())
                .build();

        HttpResponse<String> dataResponse = client.send(dataRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println("Data response: " + dataResponse.body());
    }
}

Using HttpClient’s CookieHandler

Java’s HttpClient provides a built-in CookieHandler that simplifies session management by automatically handling cookies for you.

Using CookieHandler:

import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CookieHandlerExample {
    public static void main(String[] args) throws Exception {
        CookieManager cookieManager = new CookieManager();
        cookieManager.setCookiePolicy(CookiePolicy.ACCEPT_ALL);

        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(cookieManager)
                .build();

        // Initial request to login
        HttpRequest loginRequest = HttpRequest.newBuilder()
                .uri(new URI("https://example.com/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("username=user&password=pass"))
                .build();

        HttpResponse<String> loginResponse = client.send(loginRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println("Login response: " + loginResponse.body());

        // Subsequent request using the same session
        HttpRequest dataRequest = HttpRequest.newBuilder()
                .uri(new URI("https://example.com/data"))
                .build();

        HttpResponse<String> dataResponse = client.send(dataRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println("Data response: " + dataResponse.body());
    }
}

Managing sessions and cookies is important when scraping sites that require authentication or track user activity. By making the most of Java’s HttpClient and CookieHandler, you can maintain sessions and handle cookies successfully in your web scraping projects.

Handling Captchas and Rate Limiting

Web scraping often involves dealing with captchas and rate limiting measures designed to prevent automated access. This section will cover techniques for bypassing or solving captchas, implementing rate limiting and delays to avoid being blocked, and best practices for mimicking human behavior.

Techniques for Bypassing or Solving Captchas

Captchas are challenges designed to differentiate between human and automated access. While bypassing captchas can be complex and ethically ambiguous, services like 2Captcha can help solve them.

Using 2Captcha Service:

2Captcha is an online service that solves captchas. To use it, you need to register and obtain an API key.

Example of Integrating 2Captcha:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CaptchaSolver {
    private static final String API_KEY = "YOUR_2CAPTCHA_API_KEY";

    public static void main(String[] args) throws Exception {
        String siteKey = "SITE_KEY_FROM_WEBSITE";
        String pageUrl = "https://example.com";

        HttpClient client = HttpClient.newHttpClient();

        // Request captcha solution
        HttpRequest requestCaptcha = HttpRequest.newBuilder()
                .uri(URI.create("http://2captcha.com/in.php?key=" + API_KEY + "&method=userrecaptcha&googlekey=" + siteKey + "&pageurl=" + pageUrl))
                .build();
        HttpResponse<String> responseCaptcha = client.send(requestCaptcha, HttpResponse.BodyHandlers.ofString());
        String captchaId = responseCaptcha.body().split("\\|")[1];

        // Wait for captcha to be solved
        Thread.sleep(20000); // Adjust sleep time as needed

        // Get captcha solution
        HttpRequest requestSolution = HttpRequest.newBuilder()
                .uri(URI.create("http://2captcha.com/res.php?key=" + API_KEY + "&action=get&id=" + captchaId))
                .build();
        HttpResponse<String> responseSolution = client.send(requestSolution, HttpResponse.BodyHandlers.ofString());
        String solution = responseSolution.body().split("\\|")[1];

        System.out.println("Captcha Solution: " + solution);

        // Use captcha solution in your request
        HttpRequest dataRequest = HttpRequest.newBuilder()
                .uri(new URI(pageUrl))
                .POST(HttpRequest.BodyPublishers.ofString("g-recaptcha-response=" + solution))
                .build();
        HttpResponse<String> dataResponse = client.send(dataRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println("Data response: " + dataResponse.body());
    }
}
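
Note that the fixed 20-second sleep above is fragile: 2Captcha returns CAPCHA_NOT_READY while the captcha is still being solved. A small polling loop is more reliable; a sketch that could replace the sleep-and-fetch step (it reuses the client, API_KEY, and captchaId variables from the example, and checked exceptions propagate to main's throws Exception):

// Poll res.php until the solution is ready
String solution = null;
for (int attempt = 0; attempt < 12 && solution == null; attempt++) {
    Thread.sleep(5000); // wait 5 seconds between polls

    HttpRequest poll = HttpRequest.newBuilder()
            .uri(URI.create("http://2captcha.com/res.php?key=" + API_KEY + "&action=get&id=" + captchaId))
            .build();
    String body = client.send(poll, HttpResponse.BodyHandlers.ofString()).body();

    if (body.startsWith("OK|")) {
        solution = body.split("\\|")[1];
    } else if (!body.equals("CAPCHA_NOT_READY")) {
        throw new RuntimeException("2Captcha error: " + body); // e.g. ERROR_CAPTCHA_UNSOLVABLE
    }
}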

Implementing Rate Limiting and Delays

To avoid getting blocked, it’s crucial to mimic human browsing patterns. This involves implementing rate limiting and delays between requests.

Example of Rate Limiting:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RateLimitingExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String[] urls = {"https://example.com/page1", "https://example.com/page2", "https://example.com/page3"};

        for (String url : urls) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(new URI(url))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Response from " + url + ": " + response.body());

            // Delay between requests
            Thread.sleep(2000 + (int) (Math.random() * 3000)); // Random delay between 2 and 5 seconds
        }
    }
}

Best Practices for Mimicking Human Behavior

To further reduce the risk of being blocked, follow these best practices:

  • Use User-Agent Rotation: Rotate User-Agent strings to simulate requests from different browsers and devices.
String[] userAgents = {
   "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.90 Safari/537.36",
   "Mozilla/5.0 (Macintosh; Intel Mac OS X 12_6_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17 Safari/605.1.15",
   "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.90 Safari/537.36",
   "Mozilla/5.0 (Windows NT 11.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.6613.120 Safari/537.36",
   "Mozilla/5.0 (iPhone; CPU iPhone OS 17_5 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Mobile/15E148 Safari/604.1"
};

int randomIndex = (int) (Math.random() * userAgents.length);
String randomUserAgent = userAgents[randomIndex];

HttpRequest request = HttpRequest.newBuilder()
        .uri(new URI("https://example.com"))
        .header("User-Agent", randomUserAgent)
        .build();
  • Simulate Human Interactions: Use Selenium to simulate mouse movements, clicks, and keyboard inputs.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;

public class HumanSimulationExample {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");

        Actions actions = new Actions(driver);
        actions.moveToElement(driver.findElement(By.id("element-id"))).perform();
        actions.click().perform();
        actions.sendKeys("some text").perform();

        driver.quit();
    }
}
  • Respect Robots.txt: Always check the robots.txt file of the website you’re scraping to ensure you are not violating its policies.
import java.io.IOException;
import java.net.URL;
import java.util.Scanner;

public class RobotsTxtCheck {
    public static void main(String[] args) throws IOException {
        String websiteUrl = "https://example.com";
        String robotsTxtUrl = websiteUrl + "/robots.txt";

        try (Scanner scanner = new Scanner(new URL(robotsTxtUrl).openStream())) {
            while (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        } catch (IOException e) {
            System.err.println("Error reading robots.txt: " + e.getMessage());
        }
    }
}

Data Storage and Processing

Once you have successfully scraped data from websites, the next step is to store and process this data efficiently. This section will cover various methods for storing scraped data in different formats, basic data cleaning and preprocessing techniques, and using libraries like Apache POI for handling Excel files.

Storing Scraped Data

Depending on your requirements, you may store data in CSV, JSON, or databases. Here are examples for each format:

Storing Data in CSV (this example uses the OpenCSV library, added as a dependency):

import java.io.FileWriter;
import java.io.IOException;
import com.opencsv.CSVWriter;

public class CSVStorage {
    public static void main(String[] args) {
        String[] header = { "Name", "Age", "Country" };
        String[] data1 = { "John Doe", "30", "USA" };
        String[] data2 = { "Jane Smith", "25", "UK" };

        try (CSVWriter writer = new CSVWriter(new FileWriter("data.csv"))) {
            writer.writeNext(header);
            writer.writeNext(data1);
            writer.writeNext(data2);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Storing Data in JSON:

import org.json.JSONArray;
import org.json.JSONObject;
import java.io.FileWriter;
import java.io.IOException;

public class JSONStorage {
    public static void main(String[] args) {
        JSONArray jsonArray = new JSONArray();

        JSONObject person1 = new JSONObject();
        person1.put("name", "John Doe");
        person1.put("age", 30);
        person1.put("country", "USA");

        JSONObject person2 = new JSONObject();
        person2.put("name", "Jane Smith");
        person2.put("age", 25);
        person2.put("country", "UK");

        jsonArray.put(person1);
        jsonArray.put(person2);

        try (FileWriter file = new FileWriter("data.json")) {
            file.write(jsonArray.toString(4)); // Indentation for readability
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Storing Data in a Database:

Using JDBC to store data in a database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DatabaseStorage {
    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/mydatabase";
        String user = "root";
        String password = "password";

        String insertSQL = "INSERT INTO people (name, age, country) VALUES (?, ?, ?)";

        try (Connection conn = DriverManager.getConnection(url, user, password);
             PreparedStatement pstmt = conn.prepareStatement(insertSQL)) {

            pstmt.setString(1, "John Doe");
            pstmt.setInt(2, 30);
            pstmt.setString(3, "USA");
            pstmt.executeUpdate();

            pstmt.setString(1, "Jane Smith");
            pstmt.setInt(2, 25);
            pstmt.setString(3, "UK");
            pstmt.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

Basic Data Cleaning and Preprocessing

Before storing data, it’s essential to clean and preprocess it to ensure consistency and quality.

Example of Data Cleaning:

import java.util.ArrayList;
import java.util.List;

public class DataCleaningExample {
    public static void main(String[] args) {
        List<String> rawData = List.of("  John Doe  ", " Jane Smith ", "   ");

        List<String> cleanedData = new ArrayList<>();
        for (String data : rawData) {
            String cleaned = data.trim(); // Remove leading and trailing spaces
            if (!cleaned.isEmpty()) { // Skip empty strings
                cleanedData.add(cleaned);
            }
        }

        cleanedData.forEach(System.out::println);
    }
}

Using Apache POI for Handling Excel Files

Apache POI is a Java library for reading and writing Excel files. Here’s an example of how to use Apache POI to store data in an Excel file:

Adding Dependency for Apache POI:

Maven Dependency

<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>5.0.0</version>
</dependency>

Writing Data to an Excel File:

import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import java.io.FileOutputStream;
import java.io.IOException;

public class ExcelStorageExample {
    public static void main(String[] args) {
        Workbook workbook = new XSSFWorkbook();
        Sheet sheet = workbook.createSheet("People");

        String[] header = { "Name", "Age", "Country" };
        String[][] data = {
            { "John Doe", "30", "USA" },
            { "Jane Smith", "25", "UK" }
        };

        Row headerRow = sheet.createRow(0);
        for (int i = 0; i < header.length; i++) {
            Cell cell = headerRow.createCell(i);
            cell.setCellValue(header[i]);
        }

        for (int i = 0; i < data.length; i++) {
            Row row = sheet.createRow(i + 1);
            for (int j = 0; j < data[i].length; j++) {
                Cell cell = row.createCell(j);
                cell.setCellValue(data[i][j]);
            }
        }

        try (FileOutputStream fileOut = new FileOutputStream("people.xlsx")) {
            workbook.write(fileOut);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
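
Reading a workbook back follows the same model in reverse. A short sketch that prints the file written above:

import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import java.io.FileInputStream;
import java.io.IOException;

public class ExcelReadingExample {
    public static void main(String[] args) {
        try (FileInputStream fileIn = new FileInputStream("people.xlsx");
             Workbook workbook = new XSSFWorkbook(fileIn)) {

            Sheet sheet = workbook.getSheetAt(0);
            for (Row row : sheet) {
                for (Cell cell : row) {
                    System.out.print(cell.toString() + "\t"); // raw cell value; fine for a quick dump
                }
                System.out.println();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}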

By using the right tools and techniques, you can ensure that your data is clean, consistent, and easily accessible for further analysis.

Best Practices and Ethical Considerations

Web scraping can be a useful tool, but it comes with ethical and legal responsibilities. Let’s cover the best practices for responsible web scraping, including legal and ethical aspects, respecting website terms of service, and techniques for minimizing server load.

Before you begin scraping, it is very important to understand the legal and ethical implications:

  • Compliance with Laws: Different countries have different laws regarding data scraping and privacy. Ensure that your activities comply with local and international laws.
  • Respect for Terms of Service: Many websites have terms of service that prohibit or restrict scraping. Always review and respect these terms.
  • Privacy Considerations: Be mindful of the privacy implications of scraping data, especially when dealing with personal information.

Respecting Website Terms of Service

Websites often have terms of service that outline the acceptable use of their data. Violating these terms can lead to legal consequences and your IP being blocked. Alongside the ToS, check the site's robots.txt for machine-readable crawling rules:

Checking Robots.txt:

The robots.txt file provides guidelines for web crawlers about which parts of a website can be accessed. Although it’s not legally binding, respecting it is considered good practice.

import java.io.IOException;
import java.net.URL;
import java.util.Scanner;

public class RobotsTxtChecker {
    public static void main(String[] args) throws IOException {
        String websiteUrl = "https://example.com";
        String robotsTxtUrl = websiteUrl + "/robots.txt";

        try (Scanner scanner = new Scanner(new URL(robotsTxtUrl).openStream())) {
            while (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        } catch (IOException e) {
            System.err.println("Error reading robots.txt: " + e.getMessage());
        }
    }
}
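
Printing the file is only a first step. The sketch below goes one step further and collects the Disallow paths so your scraper can skip them; it deliberately ignores details such as per-agent groups and wildcards, which a full robots.txt parser would handle:

import java.io.IOException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class DisallowRuleCollector {
    public static void main(String[] args) {
        String robotsTxtUrl = "https://example.com/robots.txt";
        List<String> disallowed = new ArrayList<>();

        try (Scanner scanner = new Scanner(new URL(robotsTxtUrl).openStream())) {
            while (scanner.hasNextLine()) {
                String line = scanner.nextLine().trim();
                // Collect every "Disallow: /path" directive, regardless of user agent
                if (line.toLowerCase().startsWith("disallow:")) {
                    disallowed.add(line.substring("disallow:".length()).trim());
                }
            }
        } catch (IOException e) {
            System.err.println("Error reading robots.txt: " + e.getMessage());
        }

        // Paths your scraper should avoid
        disallowed.forEach(System.out::println);
    }
}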

Techniques for Responsible Scraping

To minimize the impact of your scraping activities on the target website, follow these best practices:

  • Rate Limiting: Add delays between requests to avoid overloading the server.
  • Randomized Requests: Randomize the intervals between requests to mimic human behavior.
  • User-Agent Rotation: Rotate User-Agent strings to prevent detection and blocking.
  • IP Rotation: Use different IP addresses to distribute the load and reduce the risk of being blocked.

Example of Responsible Scraping:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ResponsibleScraping {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String[] urls = {"https://example.com/page1", "https://example.com/page2", "https://example.com/page3"};
        String[] userAgents = {
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
        };

        for (String url : urls) {
            int randomIndex = (int) (Math.random() * userAgents.length);
            String randomUserAgent = userAgents[randomIndex];

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(new URI(url))
                    .header("User-Agent", randomUserAgent)
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Response from " + url + ": " + response.body());

            // Random delay between requests
            Thread.sleep(2000 + (int) (Math.random() * 3000));
        }
    }
}

Techniques for Minimizing Server Load

To ensure that your scraping activities do not negatively impact the target website, implement techniques to minimize server load:

  • Fetch only necessary data: Avoid fetching unnecessary resources like images, scripts, and stylesheets.
  • Cache Data: Cache frequently accessed data to reduce repeated requests (see the sketch after this list).
  • Efficient Data Extraction: Extract data efficiently by minimizing the number of requests and using batch processing where possible.
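
As an example of the caching point, an in-memory map keyed by URL is often enough to avoid hitting the same page twice within a run. A minimal sketch without expiry or size limits:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SimplePageCache {
    private static final Map<String, String> cache = new ConcurrentHashMap<>();
    private static final HttpClient client = HttpClient.newHttpClient();

    public static String fetch(String url) throws Exception {
        // Return the cached body if we've already downloaded this URL
        String cached = cache.get(url);
        if (cached != null) {
            return cached;
        }

        HttpRequest request = HttpRequest.newBuilder().uri(new URI(url)).build();
        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        cache.put(url, body);
        return body;
    }

    public static void main(String[] args) throws Exception {
        fetch("https://example.com");   // hits the network
        fetch("https://example.com");   // served from the cache
    }
}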

Example of Fetching Only Necessary Data:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EfficientDataExtraction {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(new URI("https://example.com"))
                .header("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3")
                .header("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8")
                .header("Accept-Encoding", "gzip, deflate, br")
                .header("Connection", "keep-alive")
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Response: " + response.body());
    }
}

Troubleshooting and Debugging

Web scraping can be complex, and various issues may arise during development and execution. This section will cover common issues and their solutions, debugging techniques for Java web scraping code, and tools and libraries for monitoring and logging.

Common Issues and Their Solutions

Here are some common issues you might encounter while web scraping and their potential solutions:

  • Issue: HTTP 403 Forbidden
    • Solution: This error often occurs when your request is blocked by the server. To solve this, ensure you are using an appropriate User-Agent header and mimic browser behavior.
  • Issue: HTTP 429 Too Many Requests
    • Solution: This indicates that you are being rate-limited. Implement delays and randomize the intervals between requests to avoid this (see the Retry-After sketch after this list).
  • Issue: CAPTCHA Challenges
    • Solution: Use CAPTCHA solving services like 2Captcha, or avoid scraping sites that rely heavily on CAPTCHAs.
  • Issue: Dynamic Content Not Loading
    • Solution: Use Selenium WebDriver to handle JavaScript-rendered content and wait for elements to load.
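
When a 429 response does arrive, the server sometimes includes a Retry-After header indicating how long to pause. A sketch of honoring it, assuming the client and request variables from the earlier examples (and a surrounding method that declares the checked exceptions):

HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

if (response.statusCode() == 429) {
    // Retry-After is optional; fall back to a fixed pause if it is missing or not a number of seconds
    long waitSeconds = response.headers()
            .firstValue("Retry-After")
            .map(value -> {
                try {
                    return Long.parseLong(value);
                } catch (NumberFormatException e) {
                    return 30L; // the header can also be an HTTP date; keep the sketch simple
                }
            })
            .orElse(30L);

    System.out.println("Rate limited. Waiting " + waitSeconds + " seconds before retrying.");
    Thread.sleep(waitSeconds * 1000);
    response = client.send(request, HttpResponse.BodyHandlers.ofString());
}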

Debugging Techniques for Java Web Scraping Code

Proper debugging can save time and help identify issues quickly. Here are some techniques for debugging web scraping code in Java:

  • **Log HTTP Requests and Responses:** Log the details of HTTP requests and responses to understand what data is being sent and received.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.logging.Level;
import java.util.logging.Logger;

public class HttpLogger {
    private static final Logger logger = Logger.getLogger(HttpLogger.class.getName());

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(new URI("https://example.com"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        logger.log(Level.INFO, "Request URI: " + request.uri());
        logger.log(Level.INFO, "Response Status Code: " + response.statusCode());
        logger.log(Level.INFO, "Response Body: " + response.body());
    }
}

  • **Use Breakpoints and Debugger:** Utilize the debugger in your IDE (e.g., IntelliJ IDEA, Eclipse) to set breakpoints and step through your code to inspect variables and understand the flow.

  • **Handle Exceptions Gracefully:** Implement robust exception handling to catch and log errors without crashing your program.

try {
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
    logger.log(Level.SEVERE, "Error occurred: " + e.getMessage());
}

Tools and Libraries for Monitoring and Logging

Using the right tools and libraries can help you monitor and log your web scraping activities effectively.

  • **Log4j for Logging:** Apache Log4j is a popular logging library for Java. Add the dependency to your project and configure it for detailed logging.

Maven Dependency:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.14.1</version>
</dependency>

Log4j Configuration (log4j2.xml):

<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
        </Console>
        <File name="File" fileName="logs/app.log">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
        </File>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console" />
            <AppenderRef ref="File" />
        </Root>
    </Loggers>
</Configuration>

  • **Wireshark for Network Monitoring:** Wireshark is a network protocol analyzer that helps monitor the traffic between your scraper and the target website. This can be useful for debugging issues related to network requests.

  • **Fiddler for HTTP Debugging:** Fiddler is a web debugging proxy that logs all HTTP(S) traffic between your computer and the internet. It can be used to inspect and debug HTTP requests and responses.

By employing these troubleshooting and debugging techniques, you can identify and resolve issues more effectively, ensuring the reliability of your web scraping scripts.

Advanced Tips and Tricks

To take your web scraping skills to the next level, consider implementing advanced techniques such as using headless browsers for faster scraping, parallel scraping with multi-threading, and optimizing performance and memory usage. This section will cover these advanced tips and tricks to enhance your web scraping projects.

Using Headless Browsers for Faster Scraping

Headless browsers are web browsers without a graphical user interface. They allow you to perform web scraping tasks more efficiently by eliminating the overhead of rendering web pages.

Using Headless Chrome with Selenium:

To use Chrome in headless mode with Selenium, configure the ChromeOptions:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class HeadlessBrowserExample {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");

        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");
        options.addArguments("--disable-gpu");
        options.addArguments("--window-size=1920,1080");

        WebDriver driver = new ChromeDriver(options);
        driver.get("https://example.com");

        System.out.println("Title: " + driver.getTitle());
        driver.quit();
    }
}

Implementing Parallel Scraping with Multi-Threading

Parallel scraping can significantly speed up your data extraction process by performing multiple requests concurrently. Java’s ExecutorService can help manage multiple threads efficiently.

Example of Parallel Scraping:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelScrapingExample {
    public static void main(String[] args) throws Exception {
        List<String> urls = Arrays.asList(
                "https://example.com/page1",
                "https://example.com/page2",
                "https://example.com/page3"
        );

        ExecutorService executor = Executors.newFixedThreadPool(3);
        HttpClient client = HttpClient.newHttpClient();

        for (String url : urls) {
            executor.submit(() -> {
                try {
                    HttpRequest request = HttpRequest.newBuilder()
                            .uri(new URI(url))
                            .build();
                    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("Response from " + url + ": " + response.body());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }

        executor.shutdown();
    }
}

Optimizing Performance and Memory Usage

Efficient performance and memory management are crucial for large-scale web scraping projects. Here are some tips for optimizing your scraping scripts:

  • **Use Streams for Efficient Data Processing:** Java 8 introduced the Stream API, which provides a high-level abstraction for processing sequences of elements. Use streams to process data more efficiently.

import java.util.Arrays;
import java.util.List;

public class StreamExample {
    public static void main(String[] args) {
        List<String> data = Arrays.asList("apple", "banana", "cherry");

        data.stream()
            .filter(item -> item.startsWith("a"))
            .forEach(System.out::println);
    }
}
  • **Avoid Unnecessary Object Creation:** Reuse objects where possible to reduce memory consumption and garbage collection overhead (see the sketch after this list).
  • **Use Memory-Efficient Data Structures:** Choose appropriate data structures that minimize memory usage. For example, use ArrayList instead of LinkedList when random access is required.
  • **Profile and Monitor Performance:** Use profiling tools like VisualVM or YourKit to monitor and optimize the performance of your web scraping scripts.
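
As a concrete case of the first point above, create one HttpClient for the whole scraper and reuse it for every request instead of building a new client per URL. A small sketch:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class ReusedClientExample {
    // One client for the whole scraper: it manages its own connection pool internally
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        List<String> urls = List.of("https://example.com/page1", "https://example.com/page2");

        for (String url : urls) {
            HttpRequest request = HttpRequest.newBuilder().uri(new URI(url)).build();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> " + response.statusCode());
        }
    }
}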

Example of Using VisualVM:

  1. Download and install VisualVM from visualvm.github.io.
  2. Start your Java application with VisualVM connected.
  3. Use VisualVM to analyze memory usage, CPU usage, and identify performance bottlenecks.

Practical Example: Optimized Web Scraping Script

Here’s an optimized web scraping script that incorporates headless browsing, parallel scraping, and efficient data processing:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OptimizedScrapingExample {
    public static void main(String[] args) throws Exception {
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");

        // Headless browser setup
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");
        options.addArguments("--disable-gpu");
        options.addArguments("--window-size=1920,1080");

        WebDriver driver = new ChromeDriver(options);
        driver.get("https://example.com");
        System.out.println("Title: " + driver.getTitle());
        driver.quit();

        // Parallel scraping setup
        List<String> urls = Arrays.asList(
                "https://example.com/page1",
                "https://example.com/page2",
                "https://example.com/page3"
        );

        ExecutorService executor = Executors.newFixedThreadPool(3);
        HttpClient client = HttpClient.newHttpClient();

        for (String url : urls) {
            executor.submit(() -> {
                try {
                    HttpRequest request = HttpRequest.newBuilder()
                            .uri(new URI(url))
                            .build();
                    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("Response from " + url + ": " + response.body());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }

        executor.shutdown();
    }
}
  1. Imports: Required libraries for Selenium, HTTP Client, and concurrency are imported.
  2. Headless Browser: Configures and runs Chrome in headless mode to fetch the page title from example.com.
  3. Parallel Scraping: Sets up a thread pool and uses HttpClient to scrape multiple URLs concurrently.
  4. Execution: Submits tasks to the thread pool, each task makes an HTTP request and prints the response, then shuts down the executor service.

Applying these advanced tips and tricks enables you to create efficient, high-performance web scraping solutions that handle complex requirements and large-scale data extraction tasks.

Conclusion

Web scraping in Java offers powerful capabilities for extracting data from websites, but it requires a solid understanding of advanced techniques and best practices. Throughout this article, we’ve covered various aspects of web scraping, including setting up the environment, handling HTTP requests and responses, parsing HTML with JSoup, dealing with dynamic content using Selenium, managing sessions and cookies, handling captchas and rate limiting, storing and processing data, and following best practices for ethical scraping.

Experiment with different approaches, libraries, and services to find the most efficient and robust solutions for your specific needs.

Remember that the services Scrape.do offers can significantly simplify your web scraping efforts by providing proxy management, CAPTCHA solving, and IP rotation. Integrate Scrape.do into your Java projects to enhance reliability and efficiency.

By following the guidelines and best practices outlined in this article, you can build powerful and ethical web scraping solutions that meet your data extraction needs. Happy scraping!

GitHub: https://github.com/ChrisRoland/WebScraping-in-Java-Code-Repo

 



Author: Dumebi Okolo


I am a passionate and versatile technical writer with a strong background in creating high-quality documentation and SEO-optimized content for diverse industries. With experience in both full-time and contract roles, I excel in transforming complex technical information into clear, accessible, and engaging content.
