Good Bot, Bad Bot: To Block or Not to Block. That is the Question—and the Answer

Bots are tools, neither good nor bad. It’s how they’re used that makes them beneficial, benign, or malicious. Because they’ve been associated so much with the latter, “bot” has become something of a dirty word.

Enterprise security systems tend to target bots across the board, even if their intended purpose is helpful to users or businesses. Unfortunately, this can materially damage business operations in the name of security. Enterprises must take their defenses up another notch and find an efficient way to discriminate between good bots and bad while maintaining a strong security posture.

Bad Bots on the Rise

The percentage of Internet traffic made up of bots is generally accepted to be about 40%, with estimates ranging from 25% to 50%. Whatever the actual percentage, the number of “bad bots”—those that carry out credential stuffing attacks, steal data, spread fake content via comment spam, or skew advertising metrics—continues to grow.

A recent study by Imperva found that, while overall bot traffic actually fell as a percentage of web traffic in 2019, bad bot activity jumped by more than 18% to 24% of all traffic.

Against this backdrop, it's not surprising that security tools target bots just for being bots. Remember, though, that "bots" are simply automated processes; put more literally, they are nothing more than software robots. Good bots are useful tools that perform critical business services and shouldn't be summarily thrown out with the bathwater. Consumers enjoy the benefits of bots that search for the lowest prices on goods and services, send reminders when a scheduled appointment is approaching, or answer spoken questions in the voice of Siri or Alexa.

Other mission-critical software robots monitor the health of websites, aggregate content, deliver market intelligence, and generally free up information technology staff for different tasks. As companies continue to raise their digital profile and employ more automation, “good bot” traffic will necessarily increase.

So, how can enterprises separate the good from the bad, allowing good bots to work their convenience magic while keeping bad bots at bay?

Know the Enemy

It starts with understanding how bad bots operate. ThreatX's research team recently analyzed 90 days of HTTP(S) traffic for several hundred production web apps spanning the Americas, Europe, and Asia. The most common malicious bots were those looking for quick entry, using various forms of scanning, injection, and remote command execution.

These bad bots attempted between one and five probing attacks within about three seconds, then vanished. This behavior suggests they were looking for vulnerable targets without expending much effort or expense.
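This hit-and-run pattern lends itself to straightforward log analysis: a client whose entire traffic history is a handful of requests inside a few seconds, never to return, is a probe candidate. The sketch below assumes access-log entries already parsed into (client IP, timestamp, path) tuples; the function name and thresholds are illustrative, not taken from the research.

```python
from collections import defaultdict

def find_hit_and_run_probes(entries, max_requests=5, window_seconds=3.0):
    """Flag clients whose entire observed traffic is one short burst.

    A client is suspicious if it made between 1 and `max_requests`
    requests, all within `window_seconds`, and was never seen again
    in the analyzed log window.
    """
    by_client = defaultdict(list)
    for ip, ts, _path in entries:
        by_client[ip].append(ts)

    suspects = []
    for ip, times in by_client.items():
        times.sort()
        if 1 <= len(times) <= max_requests and times[-1] - times[0] <= window_seconds:
            suspects.append(ip)
    return suspects
```

In practice the window analyzed must be long enough (hours or days) that "never seen again" is meaningful; a client near the end of the log will look like a probe regardless.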

The most persistent attacks found in this research effort came from people using semi-automated requests. They often involved the reuse of leaked or stolen usernames and passwords (a technique known as “credential stuffing”). Similar to the example above, automated injection of username/password pairs can be used to test stolen credentials and ultimately allow the attacker to take over the account(s).
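One behavioral signature of credential stuffing is a single client cycling through many distinct usernames in a short window, since the attacker is replaying a leaked list rather than retrying one forgotten password. A minimal sketch of that heuristic follows; the log format, thresholds, and function name are illustrative assumptions, not ThreatX's detection method.

```python
from collections import defaultdict

def find_credential_stuffers(login_attempts, max_distinct_users=5, window_seconds=60.0):
    """Flag clients that try many distinct usernames in a short window.

    login_attempts: iterable of (client_ip, timestamp_seconds, username)
    tuples, assumed to be pre-parsed from authentication logs.
    """
    by_client = defaultdict(list)
    for ip, ts, user in login_attempts:
        by_client[ip].append((ts, user))

    suspects = []
    for ip, attempts in by_client.items():
        attempts.sort()
        # Slide a window forward from each attempt and count distinct usernames.
        for i, (start, _) in enumerate(attempts):
            users = {u for t, u in attempts[i:] if t - start <= window_seconds}
            if len(users) > max_distinct_users:
                suspects.append(ip)
                break
    return suspects
```

A legitimate user retrying their own password repeatedly stays under the distinct-username threshold, which is what separates this from ordinary failed-login counting.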

A close second in prevalence was API attacks targeting a narrow subset of URLs within a site, with requests coming in bursts to a single URL at a rate of several per second, followed by a pause that mimics typical web traffic.
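That pattern, traffic concentrated on one URL with short high-rate bursts, can also be scored from parsed logs. The sketch below assumes (client IP, timestamp, path) tuples; all thresholds (minimum request count, concentration ratio, burst rate) are illustrative assumptions.

```python
from collections import Counter, defaultdict

def find_api_burst_clients(entries, min_requests=20, concentration=0.9, burst_rate=3):
    """Flag clients whose traffic concentrates on one URL in rapid bursts.

    A client is suspicious if at least `concentration` of its requests
    target a single path AND its peak one-second request rate against
    that path reaches `burst_rate`.
    """
    by_client = defaultdict(list)
    for ip, ts, path in entries:
        by_client[ip].append((ts, path))

    suspects = []
    for ip, reqs in by_client.items():
        if len(reqs) < min_requests:
            continue
        paths = Counter(p for _, p in reqs)
        top_path, top_count = paths.most_common(1)[0]
        if top_count / len(reqs) < concentration:
            continue
        # Peak number of requests to the dominant URL in any one-second window.
        times = sorted(t for t, p in reqs if p == top_path)
        peak = max(sum(1 for u in times if t <= u < t + 1.0) for t in times)
        if peak >= burst_rate:
            suspects.append(ip)
    return suspects
```

Because the bursts are separated by pauses, a simple average request rate would look normal; measuring the peak one-second rate is what exposes the burst behavior.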

Both of these attack patterns are clever and reinforce our founding belief that security is a never-ending cat-and-mouse game. Static, rules-based WAFs simply cannot keep up with these types of attacks. Aggregated monitoring over time and behavior analysis using sources like compromised credentials and known signatures are necessary to unmask these types of attack patterns.

While human-initiated attacks have not disappeared, they were conspicuously scarce in our sample and often seemed to mimic vulnerability scanners, claiming to originate from legitimate security research firms.

Traffic Control

The proliferation of bad bot activity is undeniable, but because a significant portion of legitimate web traffic is bot traffic, security teams need to adjust their approach. To prevent unintended outages of critical business systems, whether automated or not, security teams must drive false positives to near zero, allowing legitimate traffic to pass while blocking malicious actors. Best practices include the following:

  • Find and remediate misconfigurations, which are often a big contributor to web application firewall (WAF) false positives. 
  • Employ rate limiting to help prevent DoS attacks and protect API services from being overwhelmed, whether by malicious users or overzealous retry loops. 
  • Develop a positive security model that requires both humans and bots to identify themselves via metadata fields to explicitly differentiate between beneficial and unsanctioned traffic.
  • Regularly monitor the effects of security solutions on legitimate traffic—whether human or bot—for signs that security practices are hindering business operations.
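The rate-limiting practice above is commonly implemented with a token bucket, which permits short bursts while capping the sustained request rate. A minimal single-client sketch, assuming illustrative capacity and refill values (a real deployment would keep one bucket per client or API key):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch.

    capacity: maximum burst size (tokens the bucket can hold).
    refill_rate: tokens added per second of elapsed time.
    Each allowed request consumes one token.
    """

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, now=None):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic() if now is None else now
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The design choice worth noting is that the bucket tolerates legitimate bursts up to `capacity` without blocking, so well-behaved automation is not penalized, while a sustained flood is throttled to `refill_rate` requests per second.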

As automation becomes an essential element of running a successful business—from cybersecurity testing to automated software deployment, from social media likes to everyday shopping—bots will become an intrinsic element of the environment. Further complicating the situation, the lines between human and bot will continue to blur. Applications will become more sophisticated and semi-automated, giving way to bot-assisted human activity that looks more and more bot-like.

Organizations must make the most of their web-based operations. To do that, they need a new approach to this unique problem: triaging good bots from bad and treating each appropriately to shore up web defenses.
