Categories: Web scraping, Tutorials

Advanced Guide to Web Scraping with Goutte in PHP

19 mins read · Created: December 05, 2024 · Updated: December 05, 2024

Goutte is a lightweight PHP library based on Symfony components, specifically designed for web scraping.

It combines Symfony’s DomCrawler and HttpClient components, making it ideal for straightforward scraping tasks while offering robust functionality for advanced projects.

Goutte’s simplicity and efficient DOM traversal using CSS selectors make it an excellent choice for web scraping in PHP, especially for developers looking for integration with Symfony’s ecosystem.

Important Migration Notice

As of April 1, 2023, Goutte has been officially archived and deprecated. The official statement from the Goutte repository states:

WARNING: This library is deprecated. As of v4, Goutte became a simple proxy to the HttpBrowser class from the Symfony BrowserKit component. To migrate, replace Goutte\Client by Symfony\Component\BrowserKit\HttpBrowser in your code.

This means that instead of using the Goutte library:

# Don't use this anymore
composer require fabpot/goutte

You should now use Symfony’s components directly:

composer require symfony/browser-kit
composer require symfony/http-client
composer require symfony/dom-crawler
composer require symfony/css-selector

The good news is that all the functionality that made Goutte popular is available directly through these Symfony components, with the added benefits of:

  • Active maintenance by the Symfony community
  • Regular security updates
  • Better integration with modern PHP practices
  • Direct access to Symfony’s ecosystem
  • More frequent feature updates

Installation & Setup

Web scraping in PHP traditionally relied on Goutte, which was easily installed via Composer along with its dependencies for web scraping and DOM parsing. However, since Goutte has been deprecated, we now use Symfony components directly, which provide the same functionality with active maintenance and support.

To get started with web scraping using Symfony components, you’ll need to install several packages via Composer that work together to provide the complete scraping toolkit:

composer require symfony/browser-kit
composer require symfony/http-client
composer require symfony/dom-crawler
composer require symfony/css-selector

Dependencies Explained:

  • symfony/browser-kit: This component simulates a web browser, providing the core functionality that Goutte used to offer. It handles navigation, form submission, and cookie management.
  • symfony/http-client: A powerful HTTP client that handles the actual web requests, supporting features like redirects, timeouts, and proxy configuration.
  • symfony/dom-crawler: The backbone of content extraction, providing CSS selector-based DOM traversal - this is what allows you to easily extract data from HTML pages.
  • symfony/css-selector: Enables CSS selector support for the DOM crawler, making it possible to use familiar CSS-style selectors to find elements in the page.

After installing these components, you can verify the setup with a basic script. Here’s a simple example that demonstrates the fundamental concepts:

<?php
require 'vendor/autoload.php';

use Symfony\Component\BrowserKit\HttpBrowser;
use Symfony\Component\HttpClient\HttpClient;

// Create a browser instance with some basic configuration
$browser = new HttpBrowser(
    HttpClient::create([
        'headers' => [
            'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124',
        ]
    ])
);

// Make a request and get the crawler
$crawler = $browser->request('GET', 'https://www.scrapethissite.com/pages/simple/');

// Extract the page title using CSS selectors
echo $crawler->filter('title')->text();

Basic Web Scraping with Goutte

Before we start scraping, let’s ensure we have the correct setup:

Project Setup

First, create a new directory for your project and initialize it:

mkdir web-scraping-project
cd web-scraping-project
composer init

Install Required Dependencies

Install all necessary Symfony components:

composer require symfony/browser-kit
composer require symfony/http-client
composer require symfony/dom-crawler
composer require symfony/css-selector

Verify Your composer.json

Make sure your composer.json includes these dependencies:

{
    "require": {
        "symfony/browser-kit": "^6.0",
        "symfony/http-client": "^6.0",
        "symfony/dom-crawler": "^6.0",
        "symfony/css-selector": "^6.0"
    }
}

Now, let’s create our first scraper. We’ll use Scrape This Site’s Countries Page as an example. This page contains a list of countries with their capitals, population, and area information.

<?php
require 'vendor/autoload.php';

use Symfony\Component\BrowserKit\HttpBrowser;
use Symfony\Component\HttpClient\HttpClient;
use Symfony\Component\DomCrawler\Crawler;

class CountryScraper {
    private HttpBrowser $browser;

    public function __construct() {
        $this->browser = new HttpBrowser(
            HttpClient::create([
                'headers' => [
                    'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124',
                ],
                'verify_peer' => true,
                'timeout' => 30
            ])
        );
    }

    public function scrapeCountries(): array {
        try {
            $url = 'https://www.scrapethissite.com/pages/simple/';
            $crawler = $this->browser->request('GET', $url);

            if ($this->browser->getResponse()->getStatusCode() !== 200) {
                throw new \RuntimeException('Failed to load the page');
            }

            return $crawler->filter('.country')->each(function (Crawler $node) {
                return [
                    'name' => $this->cleanText($node->filter('.country-name')->text()),
                    'capital' => $this->cleanText($node->filter('.country-capital')->text()),
                    'population' => $this->parsePopulation($node->filter('.country-population')->text()),
                    'area' => $this->parseArea($node->filter('.country-area')->text())
                ];
            });
        } catch (\Exception $e) {
            throw new \RuntimeException('Error scraping countries: ' . $e->getMessage());
        }
    }

    private function cleanText(string $text): string {
        $decoded = html_entity_decode($text, ENT_QUOTES | ENT_HTML5, 'UTF-8');
        return trim(preg_replace('/\s+/', ' ', $decoded));
    }

    private function parsePopulation(string $population): int {
        return (int) preg_replace('/[^0-9]/', '', $population);
    }

    private function parseArea(string $area): float {
        return (float) preg_replace('/[^0-9.]/', '', $area);
    }
}

// Main execution block
try {
    echo "Starting web scraping...\n\n";

    $scraper = new CountryScraper();
    $countries = $scraper->scrapeCountries();

    // Print the first 5 countries
    foreach (array_slice($countries, 0, 5) as $country) {
        echo "Country: {$country['name']}\n";
        echo "Capital: {$country['capital']}\n";
        echo "Population: " . number_format($country['population']) . "\n";
        echo "Area: " . number_format($country['area'], 2) . " km²\n";
        echo "----------------------------------------\n";
    }

    echo "\nTotal countries scraped: " . count($countries) . "\n";

} catch (\Exception $e) {
    echo "Error: " . $e->getMessage() . "\n";
    exit(1);
}

Advanced CSS Selectors

When scraping websites, understanding CSS selectors is crucial for precise data extraction. Symfony’s DomCrawler component supports a wide range of CSS selectors that help you target specific elements accurately.

Basic Selectors

// Element selector
$crawler->filter('p');               // Select all <p> elements
$crawler->filter('div');             // Select all <div> elements

// Class selector
$crawler->filter('.product');        // Select elements with class "product"
$crawler->filter('div.product');     // Select <div> elements with class "product"

// ID selector
$crawler->filter('#main');           // Select element with ID "main"
$crawler->filter('div#main');        // Select <div> element with ID "main"

// Multiple selectors
$crawler->filter('div, span');       // Select all <div> and <span> elements
$crawler->filter('.product, .item'); // Select elements with class "product" or "item"

Attribute Selectors

// Basic attribute selector
$crawler->filter('[data-id]');              // Elements with data-id attribute
$crawler->filter('div[data-id]');           // <div> elements with data-id attribute

// Exact match
$crawler->filter('[data-type="product"]');  // Elements where data-type="product"
$crawler->filter('a[target="_blank"]');     // Links that open in new tab

// Contains word
$crawler->filter('[class~="product"]');     // Elements with class containing "product" as a word

// Starts with
$crawler->filter('[href^="https"]');        // Links starting with https
$crawler->filter('[class^="product-"]');    // Elements with class starting with "product-"

// Ends with
$crawler->filter('[href$=".pdf"]');         // Links ending with .pdf
$crawler->filter('[class$="-item"]');       // Elements with class ending with "-item"

// Contains
$crawler->filter('[href*="product"]');      // Links containing "product"
$crawler->filter('[class*="product"]');     // Elements with class containing "product"

// Multiple attributes
$crawler->filter('[data-type="product"][data-category="electronics"]');

Hierarchical Selectors

// Descendant selector (any level)
$crawler->filter('div .product');           // Any .product elements inside <div>
$crawler->filter('#content .title');        // Any .title elements inside #content

// Child selector (direct descendant)
$crawler->filter('div > .product');         // .product elements that are direct children of <div>
$crawler->filter('ul > li');                // <li> elements that are direct children of <ul>

// Adjacent sibling
$crawler->filter('h2 + p');                 // <p> elements directly after <h2>
$crawler->filter('.product + .description'); // .description elements directly after .product

// General sibling
$crawler->filter('h2 ~ p');                 // All <p> elements after <h2>

Pseudo-class Selectors

// Position-based
$crawler->filter('li:first-child');         // First <li> element in its parent
$crawler->filter('li:last-child');          // Last <li> element in its parent
$crawler->filter('li:nth-child(2)');        // Second <li> element
$crawler->filter('li:nth-child(odd)');      // Odd-numbered <li> elements
$crawler->filter('li:nth-child(even)');     // Even-numbered <li> elements
$crawler->filter('li:nth-child(3n)');       // Every third <li> element

// State-based
$crawler->filter('input:checked');          // Checked input elements
$crawler->filter('input:disabled');         // Disabled input elements
$crawler->filter('input:enabled');          // Enabled input elements

// Content-based
$crawler->filter('p:empty');                // Empty <p> elements
$crawler->filter('p:not(.exclude)');        // <p> elements without class "exclude"

Practical Examples

Basic Selectors with Countries Page

class BasicScraper {
    private HttpBrowser $browser;

    public function __construct(HttpBrowser $browser) {
        $this->browser = $browser;
    }

    public function demonstrateBasicSelectors(): void {
        $crawler = $this->browser->request('GET', 'https://www.scrapethissite.com/pages/simple/');

        // Basic element selection
        $crawler->filter('.country');              // Select all country containers
        $crawler->filter('h3.country-name');       // Select country names
        $crawler->filter('span.country-capital');  // Select capital cities

        // Get all country names
        $names = $crawler->filter('.country-name')->each(function ($node) {
            return $node->text();
        });

        // Get countries with population over 1 million
        $bigCountries = $crawler->filter('.country')->reduce(function ($node) {
            $population = (int) preg_replace('/[^0-9]/', '', $node->filter('.country-population')->text());
            return $population > 1000000;
        });
    }
}
Advanced Selectors with Hockey Teams

class HockeyScraper {
    private HttpBrowser $browser;

    public function __construct(HttpBrowser $browser) {
        $this->browser = $browser;
    }

    public function scrapeHockeyData(): array {
        $crawler = $this->browser->request('GET', 'https://www.scrapethissite.com/pages/forms/');

        // Multiple attribute selectors (matches only if both attributes exist)
        $winners = $crawler->filter('.team[data-wins][data-year]');

        // Teams from a specific year, via an exact-match attribute selector
        $teams1990 = $crawler->filter('.team[data-year="1990"]');

        // CSS selectors cannot compare numbers, so filter high-win teams with reduce()
        $goodTeams = $crawler->filter('tr.team')->reduce(function (Crawler $node) {
            return (int) $node->filter('.wins')->text() > 40;
        });

        return $crawler->filter('tr.team')->each(function (Crawler $node) {
            return [
                'name' => $node->filter('.name')->text(),
                'year' => $node->filter('.year')->text(),
                'wins' => $node->filter('.wins')->text(),
                'losses' => $node->filter('.losses')->text()
            ];
        });
    }

    public function findChampionshipTeams(): array {
        $crawler = $this->browser->request('GET', 'https://www.scrapethissite.com/pages/forms/');

        // Combine numeric conditions with reduce(), since CSS has no "greater than"
        return $crawler->filter('tr.team')->reduce(function (Crawler $node) {
            return (int) $node->filter('.wins')->text() > 50
                && (int) $node->filter('.year')->text() < 2000;
        })->each(function (Crawler $node) {
            return $node->filter('.name')->text();
        });
    }
}
Hierarchical Selectors with Oscar Winners

class OscarsScraper {
    private HttpBrowser $browser;

    public function __construct(HttpBrowser $browser) {
        $this->browser = $browser;
    }

    public function scrapeOscarWinners(): array {
        $crawler = $this->browser->request('GET', 'https://www.scrapethissite.com/pages/academy-awards/');

        // Parent-child relationships
        $movies = $crawler->filter('.film-wrapper > .film');

        // Adjacent siblings
        $movieDetails = $crawler->filter('.film-title + .film-year');

        return $crawler->filter('.film')->each(function (Crawler $node) {
            return [
                'title' => $node->filter('.film-title')->text(),
                'year' => $node->filter('.film-year')->text(),
                'nominations' => $node->filter('.film-nominations')->text(),
                'awards' => $node->filter('.film-awards')->text()
            ];
        });
    }
}
Dynamic Filtering with AJAX Data

class AjaxScraper {
    private HttpBrowser $browser;

    public function __construct(HttpBrowser $browser) {
        $this->browser = $browser;
    }

    public function scrapeOscarFilms(): array {
        // This endpoint serves the Oscar films data as JSON
        $url = 'https://www.scrapethissite.com/pages/ajax-javascript/?ajax=true&year=2015';
        $this->browser->request('GET', $url);

        // Parse the raw JSON response body
        $response = json_decode($this->browser->getResponse()->getContent(), true);

        return $response ?? [];
    }
}
Practical Examples Using Multiple Pages

class ComprehensiveScraper {
    private HttpBrowser $browser;

    public function __construct() {
        $this->browser = new HttpBrowser(
            HttpClient::create([
                'headers' => [
                    'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124',
                ],
                'verify_peer' => true,
                'timeout' => 30
            ])
        );
    }

    /**
     * Scrape countries by continent
     */
    public function scrapeCountriesByContinent(string $continent): array {
        $url = 'https://www.scrapethissite.com/pages/simple/';
        $crawler = $this->browser->request('GET', $url);

        // Exact-match attribute selector (assumes each country has a data-continent attribute)
        return $crawler->filter(sprintf('.country[data-continent="%s"]', $continent))
            ->each(function (Crawler $node) {
                return [
                    'name' => $this->cleanText($node->filter('.country-name')->text()),
                    'capital' => $this->cleanText($node->filter('.country-capital')->text())
                ];
            });
    }

    /**
     * Find successful hockey teams
     */
    public function findSuccessfulTeams(int $minWins): array {
        $url = 'https://www.scrapethissite.com/pages/forms/';
        $crawler = $this->browser->request('GET', $url);

        // CSS cannot do numeric comparisons, so filter with reduce() instead
        return $crawler->filter('tr.team')
            ->reduce(function (Crawler $node) use ($minWins) {
                return (int) $node->filter('.wins')->text() > $minWins;
            })
            ->each(function (Crawler $node) {
                return [
                    'name' => $node->filter('.name')->text(),
                    'wins' => $node->filter('.wins')->text()
                ];
            });
    }

    /**
     * Find Oscar winners by year range
     */
    public function getOscarWinnersByYearRange(int $startYear, int $endYear): array {
        $url = 'https://www.scrapethissite.com/pages/academy-awards/';
        $crawler = $this->browser->request('GET', $url);

        return $crawler->filter('.film')->reduce(function (Crawler $node) use ($startYear, $endYear) {
            $year = (int) $node->filter('.film-year')->text();
            return $year >= $startYear && $year <= $endYear;
        })->each(function (Crawler $node) {
            return [
                'title' => $node->filter('.film-title')->text(),
                'year' => (int) $node->filter('.film-year')->text(),
                'awards' => (int) $node->filter('.film-awards')->text()
            ];
        });
    }

    private function cleanText(string $text): string {
        return trim(preg_replace('/\s+/', ' ', $text));
    }
}

// Usage Examples
try {
    $scraper = new ComprehensiveScraper();

    // Get European countries
    $europeanCountries = $scraper->scrapeCountriesByContinent('EU');
    echo "Found " . count($europeanCountries) . " European countries\n";

    // Find teams with more than 45 wins
    $successfulTeams = $scraper->findSuccessfulTeams(45);
    echo "Found " . count($successfulTeams) . " successful teams\n";

    // Get Oscar winners from 1990-2000
    $oscarWinners = $scraper->getOscarWinnersByYearRange(1990, 2000);
    echo "Found " . count($oscarWinners) . " Oscar winners\n";

} catch (\Exception $e) {
    echo "Error: " . $e->getMessage() . "\n";
}

Handling Forms and Interactions

Web scraping often involves interacting with forms to retrieve specific data. The Symfony BrowserKit component provides powerful tools for handling form submissions, including complex multi-step forms and various input types.

The simplest form interaction involves finding a form, filling in fields, and submitting:

// Select and fill a form
$form = $crawler->filter('#hockey-form')->form();
$form['year'] = '1990';
$form['team_name'] = 'Rangers';
$crawler = $browser->submit($form);

Let’s explore a complete example using the NHL Teams statistics page.

<?php
require 'vendor/autoload.php';

use Symfony\Component\BrowserKit\HttpBrowser;
use Symfony\Component\HttpClient\HttpClient;
use Symfony\Component\DomCrawler\Crawler;
use Symfony\Component\DomCrawler\Form;

class HockeyTeamScraper {
    private HttpBrowser $browser;

    public function __construct() {
        $this->browser = new HttpBrowser(
            HttpClient::create([
                'headers' => [
                    'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124',
                    'Accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                    'Accept-Language' => 'en-US,en;q=0.5',
                ],
                'verify_peer' => true,
                'timeout' => 30
            ])
        );
    }

    public function searchTeams(string $query = ''): array {
        try {
            $url = 'https://www.scrapethissite.com/pages/forms/';
            if ($query) {
                $url .= '?q=' . urlencode($query);
            }

            $crawler = $this->browser->request('GET', $url);

            if ($this->browser->getResponse()->getStatusCode() !== 200) {
                throw new \RuntimeException('Failed to load the search page');
            }

            return $this->extractTeamData($crawler);

        } catch (\Exception $e) {
            throw new \RuntimeException('Failed to search teams: ' . $e->getMessage());
        }
    }

    private function extractTeamData(Crawler $crawler): array {
        return $crawler->filter('tr.team')->each(function (Crawler $node) {
            try {
                return [
                    'name' => $this->cleanText($node->filter('.name')->text()),
                    'year' => (int) $node->filter('.year')->text(),
                    'wins' => (int) $node->filter('.wins')->text(),
                    'losses' => (int) $node->filter('.losses')->text(),
                    'ot_losses' => $this->cleanText($node->filter('.ot-losses')->text()),
                    'win_percentage' => $this->parsePercentage($node->filter('.pct')->text()),
                    'goals_for' => (int) $node->filter('.gf')->text(),
                    'goals_against' => (int) $node->filter('.ga')->text(),
                    'goal_difference' => (int) $node->filter('.diff')->text(),
                ];
            } catch (\Exception $e) {
                return null;
            }
        });
    }

    private function cleanText(string $text): string {
        return trim(preg_replace('/\s+/', ' ', html_entity_decode($text)));
    }

    private function parsePercentage(string $percentage): float {
        return (float) str_replace('%', '', trim($percentage));
    }

    public function getTeamsByYear(int $year): array {
        $allTeams = $this->searchTeams();
        return array_filter($allTeams, function($team) use ($year) {
            return $team['year'] === $year;
        });
    }

    public function getSuccessfulTeams(int $minWins = 40): array {
        $allTeams = $this->searchTeams();
        return array_filter($allTeams, function($team) use ($minWins) {
            return $team['wins'] >= $minWins;
        });
    }
}

// Usage Examples
try {
    $scraper = new HockeyTeamScraper();

    // Search for specific teams
    $searchResults = $scraper->searchTeams('Bruins');

    // Get teams from 1990
    $teams1990 = $scraper->getTeamsByYear(1990);

    // Get successful teams (40+ wins)
    $successfulTeams = $scraper->getSuccessfulTeams(40);

    // Print results
    echo "\nTeams from 1990 (" . count($teams1990) . " teams):\n";
    foreach ($teams1990 as $team) {
        echo sprintf(
            "%s: %d wins, %d losses, %.3f%% win rate, GF: %d, GA: %d, DIFF: %d\n",
            $team['name'],
            $team['wins'],
            $team['losses'],
            $team['win_percentage'],
            $team['goals_for'],
            $team['goals_against'],
            $team['goal_difference']
        );
    }

    echo "\nSuccessful Teams (40+ wins):\n";
    foreach ($successfulTeams as $team) {
        echo sprintf(
            "%s (%d): %d wins\n",
            $team['name'],
            $team['year'],
            $team['wins']
        );
    }

} catch (\Exception $e) {
    echo "Error: " . $e->getMessage() . "\n";
}

Advanced Web Scraping with Goutte in PHP

Now that we’ve covered the basics of web scraping with Goutte, it’s time to take a look at more advanced concepts, starting with making your bot appear as similar to a real browser as possible.

Handling Cookies, Headers, and Sessions

When scraping websites with authentication requirements, managing cookies, headers, and sessions is crucial. This is especially important for websites that require login or maintain state across multiple requests.

Here’s how to handle these aspects using Symfony’s BrowserKit components.

use Symfony\Component\BrowserKit\Cookie;

// Set custom headers
$browser->setServerParameter('HTTP_USER_AGENT', 'Mozilla/5.0');
$browser->setServerParameter('HTTP_ACCEPT', 'text/html,application/xhtml+xml,application/xml');

// Set a cookie manually (BrowserKit expects the expiry as a string)
$cookie = new Cookie('session_cookie', 'cookie_value', (string) strtotime('+1 hour'));
$browser->getCookieJar()->set($cookie);

// Make request with cookies and headers
$crawler = $browser->request('GET', 'https://example.com');

Comprehensive Example: Login Form with Session Management

Here’s a complete example using the ScrapingCourse Login Page that demonstrates handling login forms, CSRF tokens, cookies, and session management:

<?php
require 'vendor/autoload.php';

use Symfony\Component\BrowserKit\HttpBrowser;
use Symfony\Component\HttpClient\HttpClient;
use Symfony\Component\DomCrawler\Crawler;
use Symfony\Component\BrowserKit\Cookie;

class LoginScraper {
    private HttpBrowser $browser;
    private string $baseUrl = 'https://www.scrapingcourse.com';

    public function __construct() {
        // Initialize browser with common headers
        $this->browser = new HttpBrowser(
            HttpClient::create([
                'headers' => [
                    'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124',
                    'Accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                    'Accept-Language' => 'en-US,en;q=0.5',
                    'Origin' => $this->baseUrl,
                    'Referer' => $this->baseUrl . '/login',
                ]
            ])
        );
    }

    /**
     * Handle login process with session management
     */
    public function login(string $email = 'admin@example.com', string $password = 'password'): bool {
        try {
            // Step 1: Get the login page and CSRF token
            $crawler = $this->browser->request('GET', $this->baseUrl . '/login');
            $token = $this->getCSRFToken($crawler);

            // Step 2: Prepare and submit the login form
            $form = $crawler->filter('#login-form')->form();
            $form['email'] = $email;
            $form['password'] = $password;

            // Submit form and get response
            $response = $this->browser->submit($form);

            // Step 3: Verify login and check protected content
            return $this->verifyLogin($response);

        } catch (\Exception $e) {
            echo "Login failed: " . $e->getMessage() . "\n";
            return false;
        }
    }

    /**
     * Get CSRF token from various possible locations
     */
    private function getCSRFToken(Crawler $crawler): string {
        // Try meta tag first
        try {
            return $crawler->filter('meta[name="csrf-token"]')->attr('content');
        } catch (\Exception $e) {
            // Then try hidden input field
            try {
                return $crawler->filter('input[name="_token"]')->attr('value');
            } catch (\Exception $e) {
                // Finally check cookies
                foreach ($this->browser->getCookieJar()->all() as $cookie) {
                    if ($cookie->getName() === 'XSRF-TOKEN') {
                        return urldecode($cookie->getValue());
                    }
                }
            }
        }
        throw new \RuntimeException('CSRF token not found');
    }

    /**
     * Verify login success by checking protected content
     */
    private function verifyLogin(Crawler $response): bool {
        try {
            // Brief pause before requesting the protected page
            sleep(3);

            // Request protected page
            $dashboardCrawler = $this->browser->request('GET', $this->baseUrl . '/dashboard');

            // Check for success indicators
            return $dashboardCrawler->filter('#challenge-description')->count() > 0;

        } catch (\Exception $e) {
            return false;
        }
    }

    /**
     * Print current cookies for debugging
     */
    public function printCookies(): void {
        foreach ($this->browser->getCookieJar()->all() as $cookie) {
            printf(
                "Cookie: %s = %s (Domain: %s, Secure: %s)\n",
                $cookie->getName(),
                substr($cookie->getValue(), 0, 30) . '...',
                $cookie->getDomain(),
                $cookie->isSecure() ? 'Yes' : 'No'
            );
        }
    }
}

// Usage Example
try {
    $scraper = new LoginScraper();

    if ($scraper->login()) {
        echo "Login successful!\n";
        echo "\nCurrent cookies:\n";
        $scraper->printCookies();
    } else {
        echo "Login failed.\n";
    }
} catch (\Exception $e) {
    echo "Error: " . $e->getMessage() . "\n";
}

Dealing with AJAX-Loaded Content

Since neither Goutte nor BrowserKit executes JavaScript, handling AJAX-loaded content requires alternative methods:

  • Pairing with Headless Browsers: Use Symfony Panther or Puppeteer for dynamic content scraping.
  • Scraping API Endpoints: For sites with AJAX requests, inspect the page’s network activity to identify and directly scrape API endpoints, often a more efficient option (see the sketch below).
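
When a page exposes a JSON endpoint, you can often skip HTML parsing entirely. Here’s a minimal sketch that fetches the AJAX endpoint from the earlier example directly; the query parameters mirror what you’d see in the browser’s network tab:

<?php
require 'vendor/autoload.php';

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();

// Request the JSON endpoint directly instead of parsing rendered HTML
$response = $client->request(
    'GET',
    'https://www.scrapethissite.com/pages/ajax-javascript/?ajax=true&year=2015'
);

// Decode the raw body; the endpoint returns a JSON array of records
$records = json_decode($response->getContent(), true);
echo "Fetched " . count($records) . " records\n";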

Error Handling and Debugging

BrowserKit exposes the HTTP response, letting you manage errors and handle scenarios like timeouts, broken links, and unexpected response formats.

try {
    $crawler = $browser->request('GET', 'https://www.scrapingcourse.com/');
    if ($browser->getResponse()->getStatusCode() !== 200) {
        throw new \RuntimeException('Error: HTTP response ' . $browser->getResponse()->getStatusCode());
    }
} catch (\Exception $e) {
    echo "Request failed: " . $e->getMessage();
}

Pagination and Recursive Scraping

Scraping paginated content with Goutte involves following “Next” buttons or links. For infinite scroll or dynamically generated pagination, investigate AJAX requests to directly access paginated data.

// Example for handling pagination
while ($crawler->selectLink('Next')->count() > 0) {
    // Process content on the current page here...

    // Follow the "Next" link to load the following page
    $crawler = $browser->click($crawler->selectLink('Next')->link());
}
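
For URL-based pagination, you can also iterate the page parameter directly. A minimal sketch, assuming the hockey teams page’s page_num query parameter (verify the exact name in the network tab):

$page = 1;
$allNames = [];

do {
    $crawler = $browser->request(
        'GET',
        'https://www.scrapethissite.com/pages/forms/?page_num=' . $page
    );

    // Collect team names from the current page
    $names = $crawler->filter('tr.team')->each(function (Crawler $node) {
        return $node->filter('.name')->text();
    });

    $allNames = array_merge($allNames, $names);
    $page++;
} while (count($names) > 0); // Stop when a page returns no rows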

Rate Limiting and Throttling

Implement rate limiting to avoid IP blocks or rate limits:

  • Pause Between Requests: Use sleep() to pause briefly between requests.
  • Exponential Backoff: Gradually increase the delay if requests start failing (see the sketch after this snippet).

foreach ($urls as $url) {
    $crawler = $client->request('GET', $url);
    sleep(rand(1, 3)); // Random delay
}
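
Here’s a minimal exponential-backoff sketch built around the same loop; the retry cap and base delay are arbitrary starting points:

$maxRetries = 5;

foreach ($urls as $url) {
    $attempt = 0;
    do {
        $crawler = $client->request('GET', $url);
        $status = $client->getResponse()->getStatusCode();

        if ($status === 200) {
            break; // Success: move on to the next URL
        }

        // Double the wait after each failure: 2s, 4s, 8s, ...
        $attempt++;
        sleep(2 ** $attempt);
    } while ($attempt < $maxRetries);

    sleep(rand(1, 3)); // Base politeness delay between URLs
}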

Scraping Large Datasets

For large-scale scraping, break down the process into manageable tasks:

  • Task Queues: Use queues like RabbitMQ or Beanstalkd to manage task distribution.
  • Concurrent Requests in PHP: curl_multi_exec() or process forking via pcntl_fork() allows concurrent scraping for faster execution; a sketch using Symfony’s HttpClient follows.
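
As an alternative to raw curl_multi_exec(), Symfony’s HttpClient (already part of this stack) multiplexes requests natively. A minimal sketch, assuming $urls holds the pages to fetch:

use Symfony\Component\HttpClient\HttpClient;

$client = HttpClient::create();

// Fire all requests up front; responses are lazy and run concurrently
$responses = [];
foreach ($urls as $url) {
    $responses[$url] = $client->request('GET', $url);
}

// Consume each response as it completes
foreach ($responses as $url => $response) {
    if ($response->getStatusCode() === 200) {
        $html = $response->getContent();
        // ... hand $html to DomCrawler for parsing
    }
}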

Handling Anti-Scraping Mechanisms

To bypass anti-scraping mechanisms like CAPTCHAs, IP blocking, and user-agent detection:

  • Rotating Proxies and User-Agents: Use services like Scrape.do to handle rotating proxies and set up custom headers.
  • Anti-CAPTCHA Solutions: Integrate with anti-captcha providers for automated CAPTCHA solving. A User-Agent rotation sketch follows the snippet below.

$browser->setServerParameter('HTTP_USER_AGENT', 'YourCustomUserAgent');
$browser->setServerParameter('HTTP_X_FORWARDED_FOR', 'Rotated_IP');
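
Here’s a minimal User-Agent rotation sketch, assuming a small hand-picked pool (the strings are illustrative; use current browser versions in practice):

$userAgents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15',
    'Mozilla/5.0 (X11; Linux x86_64) Firefox/121.0',
];

foreach ($urls as $url) {
    // Pick a random User-Agent for each request
    $browser->setServerParameter('HTTP_USER_AGENT', $userAgents[array_rand($userAgents)]);
    $crawler = $browser->request('GET', $url);
    sleep(rand(1, 3));
}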

Logging and Monitoring Scraping Jobs

Implement logging using Symfony’s Monolog for effective tracking of scraping activities:

  • Logging: Log activity, errors, and request details.
  • Performance Monitoring: Benchmark scraping speed and track server load.

use Monolog\Logger;
use Monolog\Handler\StreamHandler;

$log = new Logger('goutte_logger');
$log->pushHandler(new StreamHandler('path/to/your.log', Logger::WARNING));
$log->warning('Scraping started...');

Data Storage and Exporting

Save and export scraped data to desired formats:

  • Export to CSV: Store data locally in CSV files.
  • Database Integration: Directly save data to databases like MySQL or PostgreSQL for scalable data management (see the PDO sketch after this snippet).

// Export scraped data to CSV
$file = fopen('scraped_data.csv', 'w');
foreach ($scrapedData as $row) {
    fputcsv($file, $row);
}
fclose($file);
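
For database storage, here’s a minimal PDO sketch; the DSN, credentials, and countries table schema are hypothetical, so adjust them to your environment:

$pdo = new PDO('mysql:host=localhost;dbname=scraping', 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// A prepared statement guards against SQL injection and speeds up bulk inserts
$stmt = $pdo->prepare(
    'INSERT INTO countries (name, capital, population, area) VALUES (?, ?, ?, ?)'
);

foreach ($scrapedData as $row) {
    $stmt->execute([$row['name'], $row['capital'], $row['population'], $row['area']]);
}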

Let’s look at a more comprehensive example.

<?php
require 'vendor/autoload.php';

use Symfony\Component\BrowserKit\HttpBrowser;
use Symfony\Component\HttpClient\HttpClient;
use Symfony\Component\DomCrawler\Crawler;
use Symfony\Component\BrowserKit\Cookie;
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\RotatingFileHandler;
use Monolog\Formatter\LineFormatter;

class LoginScraper {
    private HttpBrowser $browser;
    private string $baseUrl = 'https://www.scrapingcourse.com';
    private Logger $logger;

    public function __construct() {
        // Set up logger
        $this->setupLogger();

        $this->logger->info('Initializing LoginScraper');

        // Initialize browser with common headers
        $this->browser = new HttpBrowser(
            HttpClient::create([
                'headers' => [
                    'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124',
                    'Accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                    'Accept-Language' => 'en-US,en;q=0.5',
                    'Origin' => $this->baseUrl,
                    'Referer' => $this->baseUrl . '/login',
                ]
            ])
        );

        $this->logger->debug('Browser initialized with headers', [
            'headers' => [
                'User-Agent' => 'Mozilla/5.0',
                'Accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                'Accept-Language' => 'en-US,en;q=0.5'
            ]
        ]);
    }

    /**
     * Set up logger with file and console output
     */
    private function setupLogger(): void {
        $this->logger = new Logger('login_scraper');

        // Create console handler
        $consoleHandler = new StreamHandler('php://stdout', Logger::DEBUG);
        $consoleFormat = new LineFormatter(
            "[%datetime%] %level_name%: %message% %context% %extra%\n",
            "Y-m-d H:i:s"
        );
        $consoleHandler->setFormatter($consoleFormat);

        // Create file handler with rotation
        $fileHandler = new RotatingFileHandler(
            __DIR__ . '/logs/scraper.log',
            0,
            Logger::DEBUG,
            true,
            0644
        );
        $fileFormat = new LineFormatter(
            "[%datetime%] %channel%.%level_name%: %message% %context% %extra%\n",
            "Y-m-d H:i:s"
        );
        $fileHandler->setFormatter($fileFormat);

        // Add handlers to logger
        $this->logger->pushHandler($fileHandler);
        $this->logger->pushHandler($consoleHandler);
    }

    /**
     * Handle login process with session management
     */
    public function login(string $email = 'admin@example.com', string $password = 'password'): bool {
        try {
            $this->logger->info('Starting login process', ['email' => $email]);

            // Step 1: Get the login page and CSRF token
            $this->logger->debug('Requesting login page');
            $crawler = $this->browser->request('GET', $this->baseUrl . '/login');

            $token = $this->getCSRFToken($crawler);
            $this->logger->debug('CSRF token obtained', ['token_length' => strlen($token)]);

            // Step 2: Prepare and submit the login form
            $this->logger->debug('Preparing login form submission');
            $form = $crawler->filter('#login-form')->form();
            $form['email'] = $email;
            $form['password'] = $password;

            // Submit form and get response
            $this->logger->info('Submitting login form');
            $response = $this->browser->submit($form);

            // Step 3: Verify login and check protected content
            $this->logger->debug('Verifying login success');
            $success = $this->verifyLogin($response);

            $this->logger->info(
                $success ? 'Login successful' : 'Login failed',
                ['success' => $success]
            );

            return $success;

        } catch (\Exception $e) {
            $this->logger->error('Login process failed', [
                'error' => $e->getMessage(),
                'file' => $e->getFile(),
                'line' => $e->getLine()
            ]);
            return false;
        }
    }

    /**
     * Get CSRF token from various possible locations
     */
    private function getCSRFToken(Crawler $crawler): string {
        $this->logger->debug('Attempting to get CSRF token');

        // Try meta tag first
        try {
            $token = $crawler->filter('meta[name="csrf-token"]')->attr('content');
            $this->logger->debug('Found CSRF token in meta tag');
            return $token;
        } catch (\Exception $e) {
            $this->logger->debug('CSRF token not found in meta tag, trying hidden input');

            // Then try hidden input field
            try {
                $token = $crawler->filter('input[name="_token"]')->attr('value');
                $this->logger->debug('Found CSRF token in hidden input');
                return $token;
            } catch (\Exception $e) {
                $this->logger->debug('CSRF token not found in hidden input, checking cookies');

                // Finally check cookies
                foreach ($this->browser->getCookieJar()->all() as $cookie) {
                    if ($cookie->getName() === 'XSRF-TOKEN') {
                        $this->logger->debug('Found CSRF token in cookie');
                        return urldecode($cookie->getValue());
                    }
                }
            }
        }

        $this->logger->error('CSRF token not found in any location');
        throw new \RuntimeException('CSRF token not found');
    }

    /**
     * Verify login success by checking protected content
     */
    private function verifyLogin(Crawler $response): bool {
        try {
            $this->logger->debug('Waiting for page load');
            sleep(3);

            $this->logger->debug('Requesting dashboard page');
            $dashboardCrawler = $this->browser->request('GET', $this->baseUrl . '/dashboard');

            $hasChallenge = $dashboardCrawler->filter('#challenge-description')->count() > 0;
            $this->logger->debug('Challenge description check', ['found' => $hasChallenge]);

            return $hasChallenge;

        } catch (\Exception $e) {
            $this->logger->error('Login verification failed', [
                'error' => $e->getMessage()
            ]);
            return false;
        }
    }

    /**
     * Export data to CSV file
     */
    public function exportToCSV(array $data, string $filename): bool {
        try {
            $this->logger->info('Starting CSV export', ['filename' => $filename]);

            if (empty($data)) {
                $this->logger->warning('No data to export');
                return false;
            }

            $fp = fopen($filename, 'w');

            // Write headers
            fputcsv($fp, array_keys($data[0]));

            // Write data rows
            foreach ($data as $row) {
                fputcsv($fp, $row);
            }

            fclose($fp);

            $this->logger->info('CSV export completed', [
                'filename' => $filename,
                'rows' => count($data)
            ]);

            return true;
        } catch (\Exception $e) {
            $this->logger->error('CSV export failed', [
                'error' => $e->getMessage(),
                'filename' => $filename
            ]);
            return false;
        }
    }

    /**
     * Export data to JSON file
     */
    public function exportToJSON(array $data, string $filename): bool {
        try {
            $this->logger->info('Starting JSON export', ['filename' => $filename]);

            $success = file_put_contents(
                $filename,
                json_encode($data, JSON_PRETTY_PRINT | JSON_UNESCAPED_UNICODE)
            );

            if ($success) {
                $this->logger->info('JSON export completed', [
                    'filename' => $filename,
                    'size' => filesize($filename)
                ]);
                return true;
            }

            return false;
        } catch (\Exception $e) {
            $this->logger->error('JSON export failed', [
                'error' => $e->getMessage(),
                'filename' => $filename
            ]);
            return false;
        }
    }

    /**
     * Scrape and store challenge data
     */
    public function scrapeChallengeData(): ?array {
        try {
            if (!$this->login()) {
                $this->logger->error('Failed to login for data scraping');
                return null;
            }

            $this->logger->info('Successfully logged in, getting challenge data');

            // Wait for page load
            sleep(3);

            // Get dashboard page
            $crawler = $this->browser->request('GET', $this->baseUrl . '/dashboard');

            // Extract challenge data
            $data = [
                'timestamp' => date('Y-m-d H:i:s'),
                'challenge_title' => $crawler->filter('#challenge-title')->text('N/A'),
                'challenge_description' => $crawler->filter('#challenge-description')->text('N/A'),
            ];

            $this->logger->info('Challenge data scraped successfully');

            return $data;

        } catch (\Exception $e) {
            $this->logger->error('Failed to scrape challenge data', [
                'error' => $e->getMessage()
            ]);
            return null;
        }
    }
}

// Extended usage example
try {
    $scraper = new LoginScraper();

    // Scrape data
    $challengeData = $scraper->scrapeChallengeData();

    if ($challengeData) {
        echo "Data scraped successfully!\n";

        // Export to CSV
        if ($scraper->exportToCSV([$challengeData], 'challenge_data.csv')) {
            echo "Data exported to CSV successfully!\n";
        }

        // Export to JSON
        if ($scraper->exportToJSON([$challengeData], 'challenge_data.json')) {
            echo "Data exported to JSON successfully!\n";
        }
    }

} catch (\Exception $e) {
    echo "Error: " . $e->getMessage() . "\n";
}

Integrating with Scrape.do for Enhanced Scraping

Scrape.do offers several features that can enhance Goutte’s functionality for large-scale or complex scraping tasks:

  • Proxy Management: Rotate IPs to bypass IP blocking.
  • CAPTCHA Handling: Automatically avoid or solve CAPTCHAs using Scrape.do’s CAPTCHA solutions.
  • Rate Limiting: Scrape.do’s throttling helps manage request rates, reducing the chance of blocking.

Integrating Scrape.do with Goutte is straightforward: route challenging requests through Scrape.do’s API URL to ensure smooth data extraction. A minimal sketch follows.
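
Here’s a minimal sketch, assuming Scrape.do’s token-and-url query format (check the official documentation for the exact parameters and your API token):

use Symfony\Component\BrowserKit\HttpBrowser;
use Symfony\Component\HttpClient\HttpClient;

$token = 'YOUR_SCRAPE_DO_TOKEN';
$targetUrl = 'https://www.scrapethissite.com/pages/simple/';

$browser = new HttpBrowser(HttpClient::create());

// Scrape.do fetches the target URL through its rotating proxies and returns the HTML
$apiUrl = sprintf('https://api.scrape.do/?token=%s&url=%s', $token, urlencode($targetUrl));

$crawler = $browser->request('GET', $apiUrl);
echo $crawler->filter('title')->text();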

Start FREE with 1000 credits NOW.