Even after Google revised its mechanism in 2018, reCAPTCHA can still be exploited by bots to bypass authentication. A researcher published a proof-of-concept of this updated attack, using automation tools to extract reCAPTCHA's audio challenge and submit it to Google's own Speech-to-Text API to defeat the check.
Exploiting Legitimate Tools to Bypass Authentication
To block fake account creation and malicious traffic, an authentication system called CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) was developed in the early 2000s. As the name suggests, it differentiates humans from robots by posing a logical challenge that must be solved before accessing a site.
An upgraded version of this service, reCAPTCHA, was acquired by Google in 2009 and is now used by hundreds of thousands of websites. While it is meant for good use, researchers at the University of Maryland published research called “unCaptcha” in April 2017, in which they successfully fooled the authentication protocol.
reCAPTCHA has an audio mode, intended for visually impaired users, that reads the challenge aloud instead of displaying it. The researchers targeted exactly this: they used tools such as Selenium to download the audio sample from reCAPTCHA's audio challenge, fed it to Google's Speech-to-Text API, and submitted the resulting transcription as the answer.
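The core of the attack can be sketched in a few lines. This is a minimal illustration, not the researchers' actual code: the browser-automation step (clicking the audio button and grabbing the MP3 with Selenium) is summarized in comments, and the two helper functions below only show how a short audio clip could be packaged for Google's Speech-to-Text REST endpoint and how a transcript could be read back out of its response. Function names and the sample-rate parameter are illustrative assumptions.

```python
import base64

# Hypothetical sketch of the unCaptcha-style flow:
#   1. Drive the browser (e.g. with Selenium) to open the audio challenge.
#   2. Download the challenge audio clip that reCAPTCHA serves.
#   3. Send the clip to a speech-to-text service.
#   4. Type the returned transcription back into the challenge form.
# Steps 1, 2, and 4 need a live browser session; only step 3 is shown here.

SPEECH_API_URL = "https://speech.googleapis.com/v1/speech:recognize"


def build_recognize_request(audio_bytes: bytes, sample_rate: int = 16000) -> dict:
    """Build the JSON body for a Speech-to-Text `speech:recognize` call.

    Short clips (a few spoken digits, as in a reCAPTCHA challenge) can be
    sent inline as base64-encoded content rather than via Cloud Storage.
    """
    return {
        "config": {
            "encoding": "LINEAR16",        # raw 16-bit PCM; assumption for this sketch
            "sampleRateHertz": sample_rate,
            "languageCode": "en-US",
        },
        "audio": {"content": base64.b64encode(audio_bytes).decode("ascii")},
    }


def extract_transcript(response: dict) -> str:
    """Pull the top-ranked transcript out of a recognize response."""
    results = response.get("results", [])
    if not results:
        return ""
    return results[0]["alternatives"][0]["transcript"].strip()
```

In a full attack the request body would be POSTed to `SPEECH_API_URL` with an API key, and the string returned by `extract_transcript` would be typed into the challenge's answer field.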
The method was so successful, solving challenges with 85% accuracy, that it prompted Google to harden reCAPTCHA in June 2018 with better bot detection. Yet the same researchers returned to exploit the updated version, this time with an even better accuracy of 91%.
Now, a researcher named Nikolai Tschacher has disclosed a proof-of-concept showing that this exploitation still works. Even after three years, Google has failed to patch it properly, and the accuracy has climbed to 97%.