AI can now solve reCAPTCHA tests as accurately as you can
The time has come: Artificial intelligence (AI) can now solve reCAPTCHAv2 tests — those image identification quizzes that pop up as checkpoints during your browsing journey to verify you’re not a bot — and it can solve them as accurately as you can.
Researchers from ETH Zurich in Switzerland have trained an AI model to solve Google’s reCAPTCHAv2 image challenge. The researchers trained the model — named YOLO for “You Only Look Once” — on images of the usual reCAPTCHA fodder, meaning mostly road vehicles, traffic lights, and other related environmental objects.
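The paper's exact training pipeline isn't reproduced here, but as a rough, hypothetical sketch of the approach, fine-tuning an off-the-shelf YOLO object detector on CAPTCHA-style imagery might look something like this, assuming the `ultralytics` Python package and a made-up `recaptcha_images.yaml` dataset config covering classes like traffic lights, buses, and crosswalks:

```python
# Illustrative sketch only -- not the ETH Zurich team's actual pipeline.
# Assumes: `pip install ultralytics` and a hypothetical dataset config
# (recaptcha_images.yaml) pointing at labeled images of traffic lights,
# buses, crosswalks, and similar reCAPTCHA categories.
from ultralytics import YOLO

# Start from a pretrained YOLO checkpoint rather than training from scratch.
model = YOLO("yolov8n.pt")

# Fine-tune on the CAPTCHA-style dataset.
model.train(data="recaptcha_images.yaml", epochs=50, imgsz=640)

# Run inference on one challenge tile; detections above a confidence
# threshold would decide whether that tile gets clicked.
results = model.predict("challenge_tile.png", conf=0.5)
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))
```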
The specific nature of the dataset allowed YOLO to learn the task quickly and ultimately pass the tests 100% of the time. For context, the researchers noted that previous attempts solved only 68% to 71% of CAPTCHAs.
That score doesn’t mean the AI got every individual challenge right, but rather that it passed the test every time, performing at a level of accuracy that looks convincingly human.
“Our findings suggest that there is no significant difference in the number of challenges humans and bots must solve to pass the captchas in reCAPTCHAv2,” the report concludes.
While CAPTCHA — which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart” — asks users to identify distorted letters and words, reCAPTCHA often asks users to identify and categorize images.
Other types of reCAPTCHA tests use pulled-from-life photos of text, which are harder than actual text for computers to decipher; single checkbox questions asking the user to confirm they aren’t a robot; and invisible behavioral activity trackers that can determine personhood through dynamic data like click speed and cursor movement.
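The precise behavioral signals Google weighs are proprietary, but a toy, purely illustrative scorer gives a feel for the idea: treat metronomic click timing and ruler-straight cursor paths as bot-like, and irregular, wobbly input as human-like. Everything in this sketch (the thresholds, the function, the sample inputs) is hypothetical.

```python
# Conceptual illustration only -- Google's actual behavioral signals are
# proprietary. This toy scorer treats overly uniform click timing and
# perfectly straight cursor movement as bot-like.
from statistics import pstdev

def looks_human(click_intervals_ms: list[float],
                cursor_points: list[tuple[float, float]]) -> bool:
    # Humans click with irregular rhythm; scripted clicks are often uniform.
    irregular_timing = len(click_intervals_ms) > 1 and pstdev(click_intervals_ms) > 15.0

    # Humans rarely move the cursor in a perfectly straight line.
    xs = [x for x, _ in cursor_points]
    ys = [y for _, y in cursor_points]
    wobbly_path = len(cursor_points) > 1 and pstdev(xs) > 2.0 and pstdev(ys) > 2.0

    return irregular_timing and wobbly_path

# Jittery, curved input scores as human; metronomic, linear input does not.
print(looks_human([180.0, 240.0, 205.0, 310.0], [(0, 0), (13, 7), (29, 18), (42, 35)]))  # True
print(looks_human([100.0, 100.0, 100.0], [(0, 0), (10, 0), (20, 0), (30, 0)]))           # False
```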
So what does this new AI research mean?
This is primarily a security concern for any site that relies on CAPTCHA and reCAPTCHA, which were created to stop spam, content scrapers, and other malicious actors. Although they were already fallible prior to YOLO’s benchmarks, CAPTCHAs are generally getting easier to crack given the sophistication of current AI models. Some think CAPTCHAs will simply have to get harder for people, which may exacerbate the tests’ existing accessibility concerns for the visually impaired.
There are still other methods of distinguishing bot and human activity, though. Google is thought to use device fingerprinting, which captures software and hardware data that tags devices with unique identifiers, alongside tools like CAPTCHA. Apple’s Private Access Tokens, released with iOS 16, were also launched as a CAPTCHA alternative.
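Fingerprinting implementations differ, but the core idea is easy to sketch: gather a handful of relatively stable software and hardware attributes and hash them into an opaque identifier. The attribute names below are illustrative examples, not any vendor's actual signal set.

```python
# Conceptual sketch of device fingerprinting -- the attributes and their
# names here are illustrative, not the signals any specific vendor uses.
import hashlib
import json

def device_fingerprint(attributes: dict[str, str]) -> str:
    # Serialize the attributes deterministically, then hash them so the
    # same device always maps to the same opaque identifier.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(device_fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440",
    "timezone": "Europe/Zurich",
    "gpu_renderer": "ANGLE (Intel, Mesa Intel(R) Xe Graphics)",
}))
```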
But those behind the security checks don’t seem too rattled by the development. “We have a very large focus on helping our customers protect their users without showing visual challenges, which is why we launched reCAPTCHA v3 in 2018,” a Google Cloud spokesperson told New Scientist. Referring to behavioral tracking methods like cursor movement, they added, “Today, the majority of reCAPTCHA’s protections across 7 [million] sites globally are now completely invisible.”
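For sites using the score-based, invisible model, the server-side check is a single call to Google's `siteverify` endpoint, which for reCAPTCHA v3 returns a score between 0.0 and 1.0. The sketch below assumes the `requests` package, placeholder credentials, and an arbitrary 0.5 cutoff; the actual threshold is up to the site operator.

```python
# Minimal sketch of a server-side reCAPTCHA v3 check. The secret key and
# token are placeholders; the 0.5 score threshold is a site-specific choice.
import requests

def verify_recaptcha_v3(secret_key: str, client_token: str, min_score: float = 0.5) -> bool:
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret_key, "response": client_token},
        timeout=10,
    )
    result = resp.json()
    # v3 responses include a 0.0-1.0 score; higher means more human-like.
    return result.get("success", False) and result.get("score", 0.0) >= min_score

# Usage (placeholder values):
# allowed = verify_recaptcha_v3("YOUR_SECRET_KEY", token_from_browser)
```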