
The Main Web Scraping Techniques: A Practical Guide

Explore various web scraping techniques to improve your scraper.


Web scraping isn’t easy since each website’s structure requires different methods to gather data successfully. Knowing which web scraping technique to use can help you avoid making unnecessary requests, find data nested in JavaScript elements, and extract only the specific elements you wish to scrape.

Continue reading this guide to learn about the various techniques for gathering data and how they can improve your web scraper.

Choosing The Right Tools for Your Project

Programming-minded users often build a scraper themselves using web scraping frameworks like Scrapy and Selenium or libraries like BeautifulSoup. You’ll find relevant libraries in various programming languages, but Python and Node.js generally have the best ecosystems.

Alternatively, you can offload some work by using a web scraping API. It’s a less complicated approach that lets you send requests to the API and simply store the output. Providers like Oxylabs, Smartproxy, or Bright Data offer commercial APIs to users.

If you’ll be using your own scraper at any larger scale, consider getting a proxy server to hide your IP address. This way, you’ll avoid IP blocks, CAPTCHAs, and other roadblocks along the way. If you’re going after major e-commerce shops or other well-protected websites, stick to residential proxies. Otherwise, datacenter proxies from cloud service providers will suffice. 
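As a rough illustration, here's how a proxy can be passed to a plain Requests call. The proxy host, port, and credentials below are placeholders; substitute your provider's details.

```python
import requests

# Placeholder proxy endpoint -- replace with your provider's host, port, and credentials
proxies = {
    "http": "http://username:password@proxy.example.com:8080",
    "https": "http://username:password@proxy.example.com:8080",
}

# The request is routed through the proxy, so the target site sees the proxy's IP, not yours
response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```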

Popular Web Scraping Techniques

1. Manual Web Scraping

The most basic technique of data gathering is manual scraping. It involves copying content from a page and pasting it into your dataset by hand. Even though it's the most straightforward way to collect information, it's repetitive and time-consuming.

Websites focus their anti-bot efforts on stopping large-scale automated scripts. So, the one advantage of copy-pasting information by hand is that you won't have to deal with the strict rules the target website imposes on bots. If you need vast amounts of data, though, consider automated scraping.

2. HTML Parsing

When you want to get data from a website, you need to send an HTTP request to the target server, which then returns information in HTML. But raw HTML is hard for people to read. That’s where HTML parsing comes into play.
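For instance, a minimal Requests call shows what the raw, unparsed HTML looks like before any parsing is applied (the URL is a placeholder):

```python
import requests

# Fetch a page and look at the raw HTML the server returns
response = requests.get("https://example.com")
print(response.text[:500])  # first 500 characters of unparsed markup
```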

Generally, parsing means transforming data into an easy-to-read format like JSON or CSV. There are different ways to parse HTML, such as with regular expressions. But since HTML is organized into a tree structure, it's much easier to extract elements with query languages: CSS selectors or XPath.

CSS selectors. These selectors are used to find nodes for styling a website, so they can easily select a set of HTML elements based on their names. You can target elements by class, attribute, type, or ID. CSS selectors are supported by all major web scraping tools, such as Selenium, Puppeteer, and Cheerio.

This method works best if you want to scrape a few elements from a page, since you can only navigate from parent to child elements. You can find the specific elements that contain the data you need with the browser's inspect tool, then plug the selector into your scraper, as in the sketch below.

Finding the CSS selector by inspecting element
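As a minimal sketch, BeautifulSoup's select() method accepts the same CSS selectors you find via the inspector. The URL and class names below are assumptions standing in for whatever the target page actually uses.

```python
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/products")  # placeholder URL
soup = BeautifulSoup(response.text, "html.parser")

# Select elements by class, type, or ID -- these selectors are hypothetical
for card in soup.select("div.product-card"):
    title = card.select_one("h2.title")
    price = card.select_one("span.price")
    if title and price:
        print(title.get_text(strip=True), price.get_text(strip=True))
```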

XPath, or XML Path Language, is a query language primarily used to locate DOM elements in XML or HTML documents. Like CSS selectors, XPath points to the location of a specific element, so you don't need to manually iterate through element lists. Unlike CSS selectors, XPath expressions can traverse from parent to child and vice versa, which gives you more flexibility when working with less structured websites.

With the XPath method, you can scrape multiple pages at the same time. However, your scraper is more likely to break than one based on CSS selectors, since XPath expressions are tied to the page's structure, and web developers tend to change the HTML markup quite often. You can find the XPath selector by inspecting an element, just as with CSS selectors.

Finding the XPath selector by inspecting element
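Here is a comparable sketch with lxml, which evaluates XPath expressions directly. Again, the URL and element paths are placeholders for the real page structure.

```python
import requests
from lxml import html

response = requests.get("https://example.com/products")  # placeholder URL
tree = html.fromstring(response.content)

# XPath can walk down to children and back up to parents in one expression
titles = tree.xpath('//div[@class="product-card"]/h2/text()')  # hypothetical structure
parents = tree.xpath('//span[@class="price"]/parent::div')     # child-to-parent traversal
print(titles[:5], len(parents))
```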

3. JSON for Linking Data

Web pages consist of HTML tags that tell a browser how to display the information inside them. Search engines parse the HTML to find logical sections, but their understanding is limited: if the markup doesn't include additional structured-data tags, Google, Bing, Yahoo, and other search engines may not interpret or display the content correctly.

The JavaScript Object Notation for Linked Data (JSON-LD) tag annotates elements embedded in the page and structures the data for search engines. Websites use it so that search engines return more accurate results and represent their pages better in SERPs. You can find the JSON-LD tag (application/ld+json) in the <head> section of a page under a <script> element, and use it in your script to extract the data.
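For example, here's a rough sketch of pulling JSON-LD blocks out of a page with BeautifulSoup and the standard json module. The URL is a placeholder, and the keys you read depend on the schema the site actually embeds.

```python
import json
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/article")  # placeholder URL
soup = BeautifulSoup(response.text, "html.parser")

# JSON-LD lives in <script type="application/ld+json"> tags, usually in <head>
for script in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(script.string or "")
    except json.JSONDecodeError:
        continue
    print(data.get("@type"), data.get("name"))  # keys depend on the site's schema
```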


4. XHR Requests

Websites like social media load their content dynamically – they rely on elements nested in JavaScript. The XMLHttpRequest object, or XHR, is a JavaScript API that allows fetching dynamic data between a client and a server without having to load the entire page. This provides an uninterrupted browsing experience.

Originally, XHR was used only with XML, but today it supports any type of data, with JSON as the de facto standard. All modern browsers have a built-in XHR object. Since interactive websites often fetch elements via backend APIs, the data comes back as JSON, so when you reverse engineer the API endpoint behind those XHR calls, you get structured data and use less bandwidth.

To check whether a website loads its data this way, set up a filter to show only XHR requests in the browser's network tab.
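Once you spot the endpoint in the network tab, you can often call it directly. The URL, parameters, and response fields below are purely hypothetical stand-ins for whatever the site's backend API exposes.

```python
import requests

# Hypothetical JSON endpoint discovered by filtering XHR requests in the network tab
api_url = "https://example.com/api/v1/posts"
params = {"page": 1, "per_page": 20}
headers = {"X-Requested-With": "XMLHttpRequest"}  # some backends expect this header

response = requests.get(api_url, params=params, headers=headers)
response.raise_for_status()

# Structured JSON -- no HTML parsing, and far less bandwidth than loading the full page
for post in response.json().get("items", []):
    print(post.get("id"), post.get("title"))
```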

Other Useful Methods to Improve Your Script

Cache HTTP Requests

When scraping multiple pages, you'll have to build a scraper with crawling logic that works through thousands of URLs. If you need to keep track of which pages have already been visited, or revisit the same pages to get more data, it pays to cache HTTP requests. This technique stores each response in a database so it can be reused for subsequent requests.

This method improves performance: your scraper doesn't have to parse or re-download the same resource each time, and the target server receives fewer repeated requests.
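A minimal sketch of the idea, using a plain dictionary as the response store. A real crawler would more likely persist responses to a database or use a dedicated caching library.

```python
import requests

cache = {}  # URL -> response body; swap for a database table in a real crawler

def fetch(url):
    # Reuse the stored response instead of hitting the server again
    if url in cache:
        return cache[url]
    response = requests.get(url)
    cache[url] = response.text
    return cache[url]

# The second call is served from the cache, so the page is downloaded only once
html_first = fetch("https://example.com")
html_second = fetch("https://example.com")
print(html_first == html_second)
```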

Canonical URLs

Some websites serve the same content under several URLs. For example, a site can have desktop and mobile versions with slightly different URLs, so your scraping bot ends up collecting the same data twice. A canonical URL is an HTML snippet that defines the main version among duplicates or near-duplicates.

Canonical tags (rel="canonical") let developers and crawlers specify which version of the same or similar content under different URLs is the main one. This way, you can avoid scraping duplicates. Web scraping frameworks like Scrapy filter out duplicate URLs by default. You can find canonical tags within a web page's <head> section.
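For example, you can read the canonical tag before deciding whether a page has already been scraped. A rough sketch with BeautifulSoup, using a placeholder mobile URL:

```python
import requests
from bs4 import BeautifulSoup

response = requests.get("https://m.example.com/product/123")  # placeholder mobile URL
soup = BeautifulSoup(response.text, "html.parser")

# <link rel="canonical" href="..."> in <head> points to the main version of the page
canonical = soup.find("link", rel="canonical")
if canonical and canonical.get("href"):
    print("Canonical version:", canonical["href"])  # skip scraping if already visited
```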

Handling Redirects

Redirection, or forwarding, is a method of sending users from one URL to another. Redirects can confuse a scraper and cause slowdowns. Redirect responses have status codes that start with 3, and sometimes a scraper gets trapped in what's called an infinite redirect loop.

Python scraping libraries like Requests follow redirects by default and return the final page. You can also disable redirects completely by passing the allow_redirects=False parameter with the request; this is useful, for example, to avoid being redirected to sign-up, login, or other unwanted pages. Web scraping frameworks like Scrapy have middleware to handle page redirects.
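A short sketch of both behaviours with Requests: following redirects by default, and disabling them to inspect the 3xx response yourself. The URL is a placeholder assumed to redirect.

```python
import requests

url = "http://example.com"  # placeholder URL assumed to redirect (e.g. to HTTPS)

# Default: Requests follows the redirect chain and returns the final page
followed = requests.get(url)
print(followed.status_code, [r.status_code for r in followed.history])

# Disable redirects to see the raw 3xx response and its Location header
raw = requests.get(url, allow_redirects=False)
if raw.is_redirect:
    print("Redirected to:", raw.headers.get("Location"))
```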

Isabel Rivera
Caffeine-powered sneaker enthusiast
