FunCaptcha Takes on the Bots

Looking for a new way to block bots, SwipeAds has launched FunCaptcha for a global platform that aggregates and promotes online gaming servers. The platform relies on user votes to rank servers across a variety of categories. Following a series of rapid and suspicious spikes in rankings, site managers sought to curb fraudulent results from bot-generated voting performed by hackers in exchange for payment.

"The quality of our service depends on real votes by real people. Unfortunately, we discovered that bots were abusing our site, to the tune of tens of thousands of attacks getting through our system each day," said John Pålsson, the platform's creator. “Our team tested a number of spam filtering services, including reCAPTCHA, and found that none could stop the tide of bots—until we discovered FunCaptcha.”

Traditional CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) rely on alphanumeric tests to filter website spammers. Over 300 million CAPTCHAs are solved each day—equating to more than a million hours in lost browsing time—and they are increasingly hackable by internet bots. Today’s artificial intelligence technology can solve even the most difficult variants of distorted text with 99.8% accuracy, according to Google.

Thus, distorted text, on its own, is no longer a dependable test.

FunCaptcha replaces these standard tests with quick, interactive mini-games designed to more accurately detect real humans and stump bots, in what its creators say is less than 10 seconds. The company also says that in its tests, FunCaptcha improved human user conversions by 20.4% compared with conventional CAPTCHAs.

“The immediate results of incorporating FunCaptcha were staggering, and we’ve seen the program hold up against a number of new attacks by determined hackers,” continued Pålsson. “Today, we can confidently reward the servers players love most, rather than those who fork over the most cash to exploit the system through bot-based votes.”

SwipeAds is not the only company aiming to move to a post-CAPTCHA world. In December, Google announced that it had developed an advanced risk analysis back-end for CAPTCHA that actively considers a user’s entire engagement with the test—before, during, and after—to determine whether that user is a human.

In cases when the risk analysis engine can't confidently predict whether a user is a human or an abusive agent, it will prompt a CAPTCHA to elicit more cues, increasing the number of security checkpoints to confirm the user is valid.
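The tiered approach described above—pass confident humans through, block confident bots, and challenge everything in between—can be sketched in a few lines. This is purely illustrative: the function name, thresholds, and risk-score scale are hypothetical assumptions, not Google's actual API or internal logic.

```python
def gate_request(risk_score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Decide how to handle a request given a hypothetical risk score in [0, 1].

    Thresholds are illustrative placeholders, not real values from any vendor.
    """
    if risk_score <= low:
        return "allow"      # engine is confident the user is human
    if risk_score >= high:
        return "block"      # engine is confident the user is abusive
    return "challenge"      # uncertain: prompt a CAPTCHA to elicit more cues
```

The middle "challenge" branch is the key idea: rather than showing every visitor a test, the system reserves the CAPTCHA as an extra security checkpoint for ambiguous cases only.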
