
Me, You, and the GPU

Bad bots are prevalent on the internet, constantly becoming more sophisticated and nuanced, and hurting users and businesses alike. Anti-bot solutions must not only keep up but forge ahead to keep the bot problem at bay. Leading the way in anti-bot solutions, HUMAN seeks to use every technology at our disposal to detect and mitigate bots. This article introduces our latest GPU-based invisible challenge in the context of the threat landscape.

Threat Landscape Mapping: What vs. How

There are multiple approaches to categorizing bad bots, each with its own target audience. When we want to understand the threat landscape a particular customer is facing, we will usually classify bots according to their purpose: 

  • Scrapers that extract data from sites.
  • Transactional bots / scalpers that perform online purchases quickly and automatically.
  • Ad fraud bots that generate fake clicks and views, fraudulently inflating revenue in some cases, or manipulating the numbers to drive up their targets’ costs.
  • Denial bots that prevent their target site from operating consistently, either by denying service to users or by creating an artificial lack of inventory for in-demand products.
  • Data stuffing / cracking bots that mount brute-force-style attacks such as credential stuffing, credential cracking, account takeover (ATO), and carding.

A more in-depth breakdown is available here.

Another approach is to group bots by their technological abilities, using categories representing increasing sophistication.

1st Generation (Gen1): Contextless Scripts

Gen1 bots are scripts that send requests without context: the request doesn’t contain any previously obtained context like cookies, and there is no session correlation between requests. These types of bots include scrapers, denial-of-service (DoS) bots, and some types of data stuffing bots.

While this is one of the most rudimentary attack types, it is also one of the most pervasive in the wild, since the barrier to entry for attackers is very low. When used for scraping, it can also become one of the hardest attacks to block when done right: an attacker who manages to make the request appear to come from a browser is very likely to succeed with the single first request. Scaling up this type of attack is also possible, but requires more randomization in order to avoid detection by grouping similar requests.

The Achilles heel of these attacks is that they do not run any front-end logic, namely JavaScript. They do not store cookies sent in the response headers, nor do they load any secondary resources found on the page. Although it’s easy to identify these types of attacks, identification only happens after the response has been served, by which point the attacker has already obtained the data they were after.

These types of bots can be dealt with by grouping similar requests once the attack has been identified (i.e., after the first request has passed), or by requiring a front-end action or interaction to be completed before the data is supplied (e.g., an interstitial page).
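As a minimal sketch of the interstitial approach (the cookie name is hypothetical, and a production version would carry a server-issued token rather than a constant):

```javascript
// Served instead of the real content: only a client that actually
// executes JavaScript will set the cookie and trigger the reload.
document.cookie = "js_ran=1; path=/; SameSite=Lax"; // proof that front-end logic ran
location.reload(); // on the next request the server sees js_ran and serves the page
```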

2nd Generation (Gen2): Scripts with Context

Gen2 bots add context to Gen1 bots by including a cookie (fake, stolen, or obtained by other tools). They may also maintain a consistent session between requests, process data programmatically, and use it in subsequent requests. These include scrapers that scrape behind a login, simple transactional bots, and scalpers.

This type of bot is quite similar to the previous generation in that it runs as a script without a browser, but it distinguishes itself by saving session data between requests, such as cookies set via response headers (Set-Cookie), and by adding plausible Referer headers. It shares the same Achilles heel as the previous generation, but with some improvements, such as extracting resources found on the page and requesting them as well, to make the session seem more like a real browser.
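As a rough sketch of those mechanics (the URLs are hypothetical; this illustrates the behavior described above, not real tooling), a Gen2-style script only needs to replay cookies and add a plausible Referer:

```javascript
// Node 18+ sketch: carry session state across requests like a Gen2 bot would.
const base = "https://example.com"; // hypothetical target

async function run() {
  // First request: capture cookies set via response headers.
  const first = await fetch(`${base}/login`);
  const cookie = first.headers.get("set-cookie") ?? "";

  // Second request: replay the cookie and a plausible Referer,
  // making the session look more browser-like.
  const second = await fetch(`${base}/data`, {
    headers: { cookie, referer: `${base}/login` },
  });
  console.log(second.status);
}

run();
```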

The same detection tactics can be applied to this generation of bots.

3rd Generation (Gen3): Automated Browsers

Gen3 bots are automation frameworks that use real browsers, in both headless and headed modes, to interact with the targeted sites. This reduces the attackers’ need to spoof actual browser signals, but comes with higher overhead in both maintenance and resource costs. These types of bots can service all types of attacks.

The main challenge for attackers using this type of bot is removing traces of the automation framework from the session, and reducing overhead to allow scaling up the attacks. Simple interactions with the targeted site, such as clicking elements, inputting text, and running JS in the browser, are baked into the framework, but these open up another detection vector for defenders: the user interaction profile.

By analyzing mouse movements, keyboard interactions, and other user-controlled behaviors, along with frameworks’ tell-tale signs, scripted interactions can be reliably identified.
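As a sketch of what such front-end collection can look like (illustrative only, not HUMAN’s actual sensor):

```javascript
// Record coarse mouse telemetry for later anomaly analysis.
const samples = [];
window.addEventListener("mousemove", (e) => {
  samples.push({ t: performance.now(), x: e.clientX, y: e.clientY });
});
// Scripted mice tend to move in perfectly straight lines at fixed
// intervals; real hands produce jitter and variable timing.
```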

4th Generation (Gen4): Fake User Interaction and Captcha Solving

Gen4 bots are an extension of Gen3 bots, but also include behavioral elements that attempt to mimic a real user’s usage patterns across an entire session, from mouse movements to fully automated CAPTCHA solving.

How do these categories help us face the threat landscape?

From a technical standpoint, once a solution is developed to hinder an entire generation of bots, it will catch them regardless of purpose, and its effectiveness will carry across verticals.

One such solution is an invisible challenge.

Invisible Challenges

There is a well-known problem in applied security: usually, the more measures we implement, the safer a property gets (more or less), but more measures also create more user friction, which may result in users circumventing the restrictions themselves or abandoning the task altogether when it becomes too much.

When a computer locks itself after three minutes of inactivity, requires a 16-character password to unlock, and demands that the password be changed every other week, you’ll quickly find people keeping their password on a post-it note attached to their screen. 😅

Bot protection is in this same boat: we want to keep our customers, the site owners, safe from bots while allowing a smooth visit for end users without encumbering them unnecessarily. In some cases, depending on the business model and type of user interaction, site owners would prefer to dial back security mechanisms so as not to irritate their customers, leaving a gap that attackers can take advantage of.

So while the “challenge” in the invisible challenge is the security mechanism, the “invisible” part describes how the user will experience this challenge.

There are different approaches to an invisible challenge, each targeting a specific problem:

  • Run some simple JavaScript code and verify its output matches expectations (see the sketch after this list).
    This challenge targets Gen1 bots and forces threat actors to move up to Gen2 and beyond for an attack.
  • Collect user behavior metrics for a short period of time and verify there’s no anomalous behavior.
    This challenge targets Gen2 and Gen3 bots that are unable to fake session signals properly.
  • Run a resource-intensive operation (Proof-of-Work).
    This challenge targets all generations of bots, and if done properly, can raise the cost of scaling up the attacks, slowing them down and reducing their financial viability.
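As a sketch of the first approach (illustrative, not HUMAN’s actual challenge), the server embeds a small computation whose result must accompany the next request; a Gen1 bot that never runs JS cannot produce it:

```javascript
const seed = 73; // in practice, issued per-session by the server
// Any computation works, as long as the server can verify the result.
const answer = Array.from("challenge").reduce(
  (acc, ch) => ((acc * 31 + ch.charCodeAt(0)) ^ seed) >>> 0,
  7
);
document.cookie = `js_answer=${answer}; path=/; SameSite=Lax`;
```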

At HUMAN, we use all types of invisible challenges to mitigate attacks. Our current Proof of Work (PoW) challenge, which bears a strong resemblance to its cryptocurrency origins, definitely packs a punch (PoW…get it?) when slowing down attackers. And as the cost of scaling up machines with many strong CPUs drops, our latest advancement is a challenge that runs on the GPU, which is still expensive to scale up.
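For intuition, here is a hashcash-style CPU-bound PoW, a sketch of the general technique rather than HUMAN’s actual challenge: find a nonce whose SHA-256 digest starts with a given number of zero bytes, where that number sets the difficulty:

```javascript
async function solve(seed, difficulty) {
  const enc = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const digest = new Uint8Array(
      await crypto.subtle.digest("SHA-256", enc.encode(seed + nonce))
    );
    // The server can verify the claimed nonce with a single hash.
    if (digest.slice(0, difficulty).every((b) => b === 0)) return nonce;
  }
}
// solve("session-token", 2).then(console.log); // ~65k hashes on average
```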

So what is the GPU-based PoW?

Let Me Draw You a Picture

Before we dive into the challenge itself, let’s briefly introduce the browser APIs involved, namely canvas and WebGL. The canvas API provides drawing capabilities using JS and HTML’s <canvas> tag. It offers different contexts depending on the type of drawing we want: the default is direct 2D drawing, but it also exposes the WebGL API, which draws hardware-accelerated 2D and 3D graphics. Writing shaders in WebGL’s OpenGL Shading Language (GLSL) lets us utilize the GPU instead of the CPU.
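As a minimal sketch of the two context types (note that a given <canvas> element can hold only one context type, hence the two elements):

```javascript
const ctx2d = document.createElement("canvas").getContext("2d"); // CPU-side 2D drawing
const gl = document.createElement("canvas").getContext("webgl"); // GPU path via GLSL shaders
console.log(gl ? gl.getParameter(gl.VERSION) : "WebGL unavailable");
```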

Back to our challenge: the main functionality is animating an image on a WebGL canvas using GLSL. The process involves several parameters, and each animation iteration slightly varies them. At the end of each drawing iteration, the image is reduced to a hash (a series of numbers representing the image) and compared against a target hash.
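A highly simplified sketch of one such iteration (the draw function, parameters, and toy hash are placeholders; the real challenge and its hash are HUMAN-internal): draw with the current parameters, read the pixels back, reduce them to a number, and compare against the target:

```javascript
function iterate(gl, draw, params, targetHash) {
  draw(gl, params); // run the GLSL shaders with this iteration's parameters
  const { drawingBufferWidth: w, drawingBufferHeight: h } = gl;
  const px = new Uint8Array(w * h * 4);
  gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, px); // pull the image off the GPU
  let hash = 0;
  for (const b of px) hash = (hash * 31 + b) >>> 0; // toy 32-bit reduction
  return hash === targetHash;
}
```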

The number of iterations, the number of pixels involved, and the sophistication of the drawing together determine the challenge’s difficulty. Our initial evaluation of how likely the session is to be controlled by a bot automatically sets the difficulty of the challenge offered.

Working with GLSL in the browser can be quite tricky when a predictable state is required: different GPUs have different levels of precision, so some calculations end up rounded differently on different machines and browsers. The challenge in creating this GPU-based PoW was not only finding a way to get consistent results regardless of GPU and OS, but also designing it so that it is only consistent when running in a browser, as opposed to other JS-enabled environments.
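One common way to tame that drift, a general technique and not necessarily the one HUMAN ships, is to quantize the shader’s output so that values differing only in low-order bits collapse to the same byte when read back:

```javascript
const fragmentShader = `
  precision mediump float;
  uniform vec3 color;
  void main() {
    // Snap each channel to an 8-bit grid before output, so sub-step
    // rounding differences between GPUs disappear on readback.
    gl_FragColor = vec4(floor(color * 255.0 + 0.5) / 255.0, 1.0);
  }
`;
```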

The last part that completes the picture of how the challenge is deployed is setting the challenge’s difficulty according to how “suspicious” the session is: when a regular user browses the site, they’ll get a simple challenge that’s quickly solvable, which won’t burden their hardware. But when we identify suspicious signals coming from the session, the difficulty will rise accordingly. The end result will be that regular users will not be aware that any challenge has taken place in the background, while automated sessions will find themselves having to solve more difficult challenges that take more time and computing power.
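Purely for illustration, such a mapping could be as simple as the following; HUMAN’s actual scoring and mapping are internal and considerably more involved:

```javascript
// Map a bot-likelihood score in [0, 1] to a challenge difficulty.
function difficultyFor(botLikelihood) {
  return 1 + Math.round(botLikelihood * 7); // 1 (trivial) .. 8 (expensive)
}
```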

Conclusion

Bots present a threat to both site owners and end users, and each type of threat calls for appropriate mitigation techniques. By pivoting our focus from the type of threat to the type of technology involved, we can improve detection across many different types of attacks. An invisible challenge reduces friction between end users and sites while giving us an opportunity not only to block early generations of bots, but also to make it less financially viable for later-generation bots to attack at scale, reducing both the impact of the attacks and the attackers’ inclination to keep investing in a well-protected target.