It’s gotten increasingly harder — weirder — to prove you’re not a robot.
Why? AI models trained on the Captchas of yore can now pass them, per Insider, so it’s a constant battle to devise a Captcha a human can solve but a bot can’t.
In 1950, Alan Turing proposed the Turing test: a human evaluator sends the same questions to both a human and a machine, then guesses which answers came from the machine. A machine that tricks evaluators often enough “passes” the test; so far, none has.
In 2000, Luis von Ahn — now co-founder and CEO of Duolingo — was inspired by Yahoo’s problem with spammers using bots to sign up for millions of free accounts.
He and mentor Manuel Blum invented the Completely Automated Public Turing test to tell Computers and Humans Apart (Captcha), which required users to decipher distorted letters and numbers. Humans could, bots couldn’t.
But guess what?
Optical character recognition (OCR), the technology used to digitize printed text, taught bots to read wonky letters. By 2014, Google’s AI could solve distorted-text Captchas 99.8% of the time, while humans managed only 33%.
Captchas evolved, asking users to identify particular images or audio, but AI is getting good at that, too.
And as AI trains itself to generate Captchas, things have gotten weirder. Discord users noticed Captchas full of AI-generated, Cronenberg-esque objects asking them to ID stuff that isn’t even real — e.g., a snail-yoyo thing called a “Yoko,” per Motherboard.
The “I am not a robot” button uses behavior leading up to clicking the button — e.g., browser history, mouse movement — to gauge humanity. Future tech could build on that to create a state of constant surveillance.
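To make the idea concrete, here’s a toy sketch of one behavioral signal such a system might use: how straight the mouse path was on the way to the button. Humans tend to produce curved, jittery paths, while naive bots move in near-perfect lines. (This is an illustration only; the function names and the 0.98 cutoff are made up, and real systems like reCAPTCHA combine many signals.)

```python
import math

def path_straightness(points):
    """Ratio of straight-line distance to actual path length.
    Near 1.0 means a perfectly straight (bot-like) path; human
    mouse paths wobble, so the ratio drops below 1."""
    if len(points) < 2:
        return 1.0
    dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])
    path_len = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    direct = dist(points[0], points[-1])
    return direct / path_len if path_len else 1.0

def looks_human(points, threshold=0.98):
    # Hypothetical cutoff: flag near-perfectly-straight paths as suspicious.
    return path_straightness(points) < threshold

# A bot-like straight drag vs. a wobbly, human-ish path:
bot_path = [(x, x) for x in range(0, 100, 10)]
human_path = [(0, 0), (12, 5), (25, 3), (40, 12), (60, 8), (80, 15), (100, 10)]
```

A real risk engine would fold dozens of signals like this (timing, scroll behavior, browser history) into one score rather than relying on any single heuristic.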
But ultimately, as bots get smarter, Captchas will be lucky to stay even one step ahead.