How Do Websites Detect Bots Using Bot Mitigation Tools

Many websites deploy anti-bot technologies to prevent bots from scraping. But how do websites detect bots and block them? What technologies are used for bot detection when web scraping?

This article will answer your questions regarding bot detection. It covers the most commonly adopted bot protection techniques and how you can bypass these bot detection methods.

How Do Websites Detect and Prevent Bots From Scraping

Bots and humans can be distinguished based on their characteristics or their behavior. Websites, or the anti-scraping services websites employ, analyze the characteristics and behavior of visitors to distinguish the type of visitor.

These tools and products construct basic or detailed digital fingerprints from the characteristics of each visitor and their interactions with the website. All of this data is compiled, each visitor is assigned a likelihood of being a human or a bot, and access to the website is allowed or denied accordingly.

This detection runs either as software installed on the website’s own infrastructure or as a service from providers who bundle it into a CDN-type offering or sell it as a pure cloud-based subscription, filtering bot traffic before anyone is allowed to access the website.

Where Can Websites Detect Scraper Bots? 

The detection can happen on the client side (i.e., your browser running on your computer), on the server side (i.e., the web server or inline anti-bot technologies that block bot traffic), or through a combination of the two.

Web servers use different methods to identify bots and prevent them from scraping. Some methods detect bots before they can reach the server, while others rely on cloud services. These cloud services work in two ways: they either filter out bots before they reach the website, or they work together with the web server, using external intelligence to detect scraper bots.

The problem is that this detection produces false positives, blocking regular people as bots, or adds so much processing overhead that it makes the site slow and unusable. These technologies come with financial and technical costs, and those trade-offs need to be considered.

Here are some of the areas where detection can occur:

  1. Server-side fingerprinting with behavior analysis
  2. Client-side or browser-side fingerprinting with behavior analysis
  3. A combination of both, spread across multiple domains and data centers

Server-Side Bot Detection

Server-side bot detection starts at the server level, either on the website’s own web server or on the devices of cloud-based services. A few types of fingerprinting methods are usually used in combination to detect scraper bots from the server side.

Note that fingerprinting has a detrimental impact on global privacy because it allows seamless tracking of individuals across the Internet.

Various server-side bot detection methods include:

  • HTTP Fingerprinting
  • TCP/IP stack fingerprinting
  • TLS Fingerprinting
  • Behavior analysis and pattern detection

HTTP Fingerprinting

HTTP fingerprinting is done by analyzing the traffic a visitor to a website sends to the web server. Almost all of this information is accessible to the web server, and some of it can also be seen in the web server logs. It can reveal basic information about a visitor to the site, such as the following:

  1. User-Agent

    User-Agent gives information about the kind of browser, i.e., whether it is Chrome, Firefox, Edge, Safari, etc., and its version.
  2. Request Headers

    Request headers such as Referer and Cookie, the encodings the client accepts (e.g., whether it accepts gzip compression), etc.
  3. The Order of the Headers

    The sequence in which the headers are sent can reveal information about the browser’s configuration or operating environment.
  4. The IP Address

    The IP address from which the visitor made the request, or the address that finally reached the web server (in case the visitor is behind an ISP-level NAT or a proxy server).

HTTP fingerprinting is important for understanding visitor behavior, detecting bots, and enhancing security. It identifies and mitigates potential threats based on the unique digital signatures of web traffic.
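
To make this concrete, here is a minimal sketch (in Python with Flask, and not any vendor’s actual logic) of a server-side HTTP fingerprinting check: it inspects a few request headers and flags visitors whose headers do not look like those of a normal browser. The header checks, User-Agent substrings, and threshold are illustrative assumptions.

    # Minimal sketch of server-side HTTP fingerprinting (illustrative only).
    from flask import Flask, request, abort

    app = Flask(__name__)

    # Substrings commonly seen in non-browser HTTP clients (illustrative list).
    SUSPICIOUS_AGENTS = ("python-requests", "curl", "wget", "scrapy", "go-http-client")

    @app.route("/")
    def index():
        user_agent = request.headers.get("User-Agent", "").lower()
        accept = request.headers.get("Accept", "")
        accept_language = request.headers.get("Accept-Language", "")

        score = 0
        if not user_agent or any(bot in user_agent for bot in SUSPICIOUS_AGENTS):
            score += 2          # missing or library-default User-Agent
        if "text/html" not in accept:
            score += 1          # browsers ask for HTML on normal page loads
        if not accept_language:
            score += 1          # browsers almost always send a language

        if score >= 2:          # arbitrary threshold for this sketch
            abort(403)          # treat the visitor as a likely bot
        return "Welcome, probably-human visitor."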

TCP/IP Stack Fingerprinting

TCP/IP stack fingerprinting analyzes the data a visitor sends to servers as it arrives in packets over TCP/IP. A TCP stack fingerprint includes details such as:

  1. Initial packet size (16 bits)
  2. Initial TTL (8 bits)
  3. Window size (16 bits)
  4. Max segment size (16 bits)
  5. Window scaling value (8 bits)
  6. “don’t fragment” flag (1 bit)
  7. “sackOK” flag (1 bit)
  8. “nop” flag (1 bit)

These variables are combined to form a digital signature of the visitor’s machine that has the potential to uniquely identify a visitor—bot or human.

Open-source tools such as p0f can tell if a User-Agent is being forged, for example by comparing the operating system implied by the TCP/IP signature with the one claimed in the User-Agent string. They can even identify whether a visitor is behind a NAT network or has a direct connection to the internet, and cross-check that against browser settings such as language preferences.

TCP/IP stack fingerprinting matters because it uses these low-level details to learn more about the computers trying to connect to or communicate with a system. It helps with security, managing network access, and detecting potential intruders.
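
As a toy illustration of the idea (and not how p0f itself works internally), the sketch below compares the operating system suggested by the TTL observed on incoming packets with the operating system claimed in the HTTP User-Agent; a mismatch hints that the User-Agent is forged. The TTL-to-OS mapping and the example values are simplified assumptions.

    # Toy TCP/IP fingerprint check: does the TTL agree with the User-Agent? (illustrative)

    def guess_os_from_ttl(observed_ttl: int) -> str:
        """Round the observed TTL up to a common initial value and map it to an OS family."""
        if observed_ttl <= 64:
            return "linux/macos"    # Linux and macOS usually start packets at TTL 64
        if observed_ttl <= 128:
            return "windows"        # Windows usually starts at TTL 128
        return "network-device"     # 255 is common for routers and some Unix systems

    def os_claimed_by_user_agent(user_agent: str) -> str:
        ua = user_agent.lower()
        if "windows" in ua:
            return "windows"
        if "mac os" in ua or "linux" in ua or "android" in ua:
            return "linux/macos"
        return "unknown"

    # Example: a packet arrives with TTL 117 (initial 128 minus ~11 hops),
    # while the HTTP User-Agent claims to be Chrome on Linux.
    observed_ttl = 117
    user_agent = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/120.0 Safari/537.36"

    if guess_os_from_ttl(observed_ttl) != os_claimed_by_user_agent(user_agent):
        print("TCP/IP fingerprint and User-Agent disagree: the User-Agent may be forged")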

TLS Fingerprinting

When a site is accessed securely over HTTPS, the web browser and the web server generate a TLS fingerprint during the SSL handshake. Most clients, whether browsers or applications such as Dropbox, Skype, etc., initiate an SSL handshake request in their own distinctive way, which allows that access to be fingerprinted.

The open-source TLS fingerprinting library JA3 gathers the decimal values of the bytes for the following fields in the Client Hello packet of an SSL handshake:

  1. SSL version
  2. Accepted ciphers
  3. List of extensions
  4. Elliptic curves
  5. Elliptic curve formats

It then concatenates those values in order, using a “,” to delimit each field and a “-” to delimit each value within a field. The resulting string is MD5 hashed to produce an easily consumable and shareable 32-character fingerprint: the JA3 SSL client fingerprint. MD5 hashes also have the benefit of being fast to generate and compare while producing few collisions.
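
The sketch below shows how such a JA3-style string can be assembled and hashed in Python; the field values are made-up examples rather than data from a real Client Hello.

    # Assemble and hash a JA3-style fingerprint string (example values only).
    import hashlib

    ssl_version = 771                        # TLS 1.2 expressed in decimal
    ciphers = [4865, 4866, 49195, 49199]     # accepted cipher suites
    extensions = [0, 10, 11, 13, 16]         # extensions offered in the Client Hello
    elliptic_curves = [29, 23, 24]           # supported groups
    ec_point_formats = [0]                   # uncompressed points

    # Fields are joined with "," and the values inside each field with "-".
    ja3_string = ",".join([
        str(ssl_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, elliptic_curves)),
        "-".join(map(str, ec_point_formats)),
    ])

    ja3_fingerprint = hashlib.md5(ja3_string.encode()).hexdigest()
    print(ja3_string)       # 771,4865-4866-49195-49199,0-10-11-13-16,29-23-24,0
    print(ja3_fingerprint)  # the 32-character hash that servers store and compare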

TLS fingerprinting is used for various purposes, such as improving security by identifying and blocking connections from known malicious software or tracking users across websites without using traditional cookies. However, like browser fingerprinting, it also raises privacy concerns because it can be done without users’ knowledge or consent.

Behavior Analysis and Pattern Detection

Once a unique fingerprint is constructed by combining all of the above, bot detection tools can trace a visitor’s behavior within a website, or across many websites if those sites use the same bot detection provider. They then perform behavioral analysis on the browsing activity, which usually covers:

  1. The pages visited
  2. The order of pages visited
  3. Cross-matching the HTTP referer with the previous page visited
  4. The number of requests made to the website
  5. The frequency of requests to the website

This allows the anti-bot products to decide whether a visitor is a bot or a human based on the data they have seen previously and, in some cases, to present a challenge, such as a CAPTCHA, for the visitor to solve.

If the visitor solves the CAPTCHA, they may be recognized as a human; if the CAPTCHA fails, they get flagged as a bot and blocked.

From then on, any request matching these fingerprints (HTTP, TCP, TLS, IP address, etc.) to any website that uses the same bot detection service will be challenged to prove it comes from a human.

The visitor or their IP address is usually kept on a blacklist for a certain period of time and removed from it if no further bot activity is seen. Sometimes, persistently abused IP addresses are permanently added to global IP blocklists and denied entry to many sites.
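
As a simplified example of the behavioral side, here is a sketch of a sliding-window rate check that a server-side tool might run per fingerprint; the thresholds and the allow/challenge/block actions are illustrative assumptions, and real products combine many more signals.

    # Simplified sliding-window rate check per fingerprint (illustrative thresholds).
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    CHALLENGE_THRESHOLD = 30    # more than ~1 request every 2 seconds looks suspicious
    BLOCK_THRESHOLD = 120

    request_log = defaultdict(deque)    # fingerprint -> timestamps of recent requests

    def classify(fingerprint, now=None):
        now = now if now is not None else time.time()
        window = request_log[fingerprint]
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()            # drop requests outside the one-minute window

        if len(window) > BLOCK_THRESHOLD:
            return "block"              # e.g. add the fingerprint/IP to a blacklist
        if len(window) > CHALLENGE_THRESHOLD:
            return "challenge"          # e.g. serve a CAPTCHA
        return "allow"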

It is relatively easy to bypass server-side bot detection if web scrapers are fine-tuned to work with the websites being scraped.

Note: The best way to understand every aspect of the data that moves between a client and a server as part of a web request is to use a man-in-the-middle (MITM) proxy server or to look at the Network tab of a web browser’s developer tools (opened with F12 in most browsers).

For deeper analysis beyond HTTP and lower down the TCP/IP stack, you can also use Wireshark to check the actual packets, headers, and all the bits that go back and forth between the browser and the website. Any or all of those bits can be used to identify a visitor to the website and consequently help fingerprint them.

Client-Side Bot Detection (Browser-Side Bot Detection)

Almost all of the bot detection services use a combination of browser-side detection and server-side detection to accurately block bots.

The first thing that happens when a site enables client-side detection is that all scrapers that are not real browsers get blocked immediately.

The simplest check is whether the client (web browser) can execute a block of JavaScript. If it cannot, the detection pretty much flags the visitor as a bot.

While it is possible to disable JavaScript in a browser, most websites become unusable without it, so the vast majority of real visitors keep JavaScript enabled.

Once this happens, a real browser is necessary in most cases to scrape the data. There are libraries to automatically control browsers, such as Selenium, Puppeteer, and Playwright.
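
For example, a minimal Selenium script that loads a page in a real Chrome browser, so that JavaScript-based checks can actually execute, might look like the sketch below (it assumes Chrome is installed; the URL is a placeholder).

    # Minimal sketch: drive a real browser with Selenium so JavaScript checks can run.
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    # options.add_argument("--headless=new")   # headless mode is itself detectable on some sites

    driver = webdriver.Chrome(options=options)
    driver.get("https://example.com")          # placeholder URL
    print(driver.title)                        # the page was rendered by a real browser engine
    driver.quit()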

Browser-side bot detection usually involves constructing a fingerprint by accessing a wide variety of system-level information through a browser. This is usually invoked through a tracker JavaScript file that executes the detection code in the browser and sends back information about the browser and the machine running the browser for further analysis.

As an example, the navigator object of a browser exposes a lot of information about the computer running the browser, such as the User-Agent, platform, language, and hardware details.


Below are some common features used to construct a browser’s fingerprint.

  1. User-Agent
  2. Current language
  3. Do Not Track status
  4. Supported HTML5 features
  5. Supported CSS rules
  6. Supported JavaScript features
  7. Plugins installed in the browser
  8. Screen resolution, color depth
  9. Time zone
  10. Operating system
  11. Number of CPU cores
  12. GPU vendor name and rendering engine
  13. Number of touch points
  14. Different types of storage support in browsers
  15. HTML5 canvas hash
  16. The list of fonts installed on the computer

Apart from these techniques, bot detection tools also look for any flags that can tell them that the browser is being controlled through an automation library.

  1. Presence of bot-specific signatures
  2. Support for non-standard browser features
  3. Presence of common automation tools such as Selenium, Puppeteer, Playwright, etc.
  4. The absence of human-generated events, such as randomized mouse movements, clicks, scrolls, tab changes, etc.

All this information is combined to construct a unique client-side fingerprint that can tag one as a bot or a human.
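
To get a feel for what such a tracker collects, the hedged sketch below uses Selenium to run a few of the same probes; the property set is a small illustrative subset. Note that navigator.webdriver being true is one of the best-known giveaways that a browser is automated.

    # Read a handful of fingerprinting signals through an automated browser (illustrative).
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://example.com")   # placeholder URL

    probes = driver.execute_script("""
        return {
          userAgent: navigator.userAgent,
          language: navigator.language,
          platform: navigator.platform,
          hardwareConcurrency: navigator.hardwareConcurrency,
          pluginCount: navigator.plugins.length,
          touchPoints: navigator.maxTouchPoints,
          webdriver: navigator.webdriver === true
        };
    """)

    print(probes)
    if probes["webdriver"]:
        print("navigator.webdriver is set: this browser would likely be flagged as automated")
    driver.quit()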

Bypassing the Bot Detection, Bot Mitigation, and Anti-Scraping Services

Developers have found many workarounds to fake their fingerprints and conceal the fact that they are bots, for example:

  • Puppeteer Extra
  • Patching Selenium/PhantomJS
  • Fingerprint Rotation

But bot detection companies have been improving their AI models and looking for variables, actions, events, etc., that can still give away the presence of an automation library. Most poorly built scrapers will get banned by these advanced (or “military-grade”) bot detection systems.
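
As an illustration of how basic the simplest of these workarounds can be, here is a sketch of HTTP-level fingerprint rotation using Python’s requests library; the User-Agent strings and header values are illustrative. On its own, this only varies the HTTP fingerprint and will not get past the advanced client-side checks described above.

    # Rotate browser-like headers per request (HTTP-level fingerprint rotation only).
    import random
    import requests

    USER_AGENTS = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/120.0 Safari/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 Version/17.0 Safari/605.1.15",
        "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
    ]

    def browser_like_headers():
        return {
            "User-Agent": random.choice(USER_AGENTS),
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
            "Accept-Language": "en-US,en;q=0.9",
            "Accept-Encoding": "gzip, deflate",
        }

    # In practice this would also be combined with proxy/IP rotation and request pacing.
    response = requests.get("https://example.com", headers=browser_like_headers())
    print(response.status_code, response.headers.get("Content-Type"))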

To bypass such military-grade systems and scrape websites without getting blocked, you need to analyze what each of their JavaScript trackers does on each website and then build a custom solution to bypass them. Each bot detection company works with a different set of variables and behavioral flags to find bots.

Wrapping Up

Websites use a variety of bot detection and mitigation tools at different levels to prevent web scraping. Overcoming such challenges requires knowledge of website and network administration. At ScrapeHero, we can provide you with web scraping products and services that prevent you from being blacklisted.

If simple data scraping is your need, then consider making use of ScrapeHero Cloud, which offers pre-built crawlers and APIs. Since it is hassle-free, affordable, fast, and reliable, you can use it for scraping even without extensive technical knowledge.

For enterprise-grade web scraping concerns, it is better to consult ScrapeHero web scraping services, which are bespoke and more advanced. We are experts in overcoming the challenges of anti-bot techniques and providing the data you need.

Frequently Asked Questions

1. How to detect web scraping?

Web scraping can be detected through various methods, such as identifying unusual traffic patterns, detecting non-standard headers, monitoring the rate of requests, identifying the use of headless browsers, etc.

2. How does Google detect bots?

Google employs techniques to detect and block bot activity across its services. Some methods include rate and pattern analysis, CAPTCHA challenges, User-Agent and header analysis, behavioral analysis, etc.

3. How to block bots from websites?

Websites detect bots using various techniques and can also deploy specific measures to block or prevent bot traffic. These methods range from simple checks, such as IP reputation, resource-loading checks, and honeypots, to sophisticated analyses that involve machine learning.

4. How do you bypass bot detection in web scraping?

To bypass bot detection in web scraping, mechanisms such as respecting robots.txt, limiting request rates, using browser-like headers, rotating User-Agents, and handling CAPTCHAs are used.
