In mid-2012, in a blog post titled Are We Human?, I described some of the benefits of using technologies such as CAPTCHA, reCAPTCHA, nuCAPTCHA and Are You A Human to help enterprises figure out whether their online application is dealing with an actual human user or with another computer program. For example, they can help to:
- Block “comment spam” on blogs and other social media sites
- Protect the legitimacy of online polls and surveys
- Protect self-service password reset pages from automated attacks
- Prevent fraud and enforce business policies, such as blocking the automated ordering of large blocks of tickets to highly coveted events
- Maximize online revenue, by fast-tracking legitimate human users so they complete their transactions rather than dropping off the site in frustration
In addition, as a side benefit, these technologies are also designed to aggregate the human intelligence that is being used to solve millions of individual CAPTCHAs each day, to improve the accuracy of digitized text and images. Very clever.
At that time, I was especially interested in the emergence of adaptive, heuristic approaches that make use of behavioral analysis and a risk-based “scoring” of multiple contextual factors, to make a real-time determination of whether or not the application is dealing with an actual human – and to mount an appropriate, risk-based response. If you don’t have a high enough level of assurance that you’re dealing with a legitimate human user, for example, you can present an additional authentication challenge … or decide to block the transaction. Although they are not nearly as prevalent in the enterprise setting, approaches to authenticating users based on such risk-based / adaptive technologies are widely and effectively used in consumer-facing applications, and have been for nearly a decade.
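To make the risk-based idea concrete, here is a minimal sketch of the scoring-and-response pattern described above. The signal names, weights, and thresholds are hypothetical illustrations of mine, not taken from any vendor’s actual implementation:

```python
# Hypothetical sketch of risk-based adaptive authentication:
# combine contextual signals into a score, then allow, challenge,
# or block based on configurable thresholds.

# Illustrative signal weights -- invented for this example.
SIGNAL_WEIGHTS = {
    "new_device": 25,
    "unusual_geolocation": 30,
    "impossible_travel": 40,
    "headless_browser_fingerprint": 50,
    "rapid_form_submission": 35,
}

def risk_score(signals):
    """Sum the weights of the contextual signals observed for this request."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def adaptive_response(signals, challenge_at=30, block_at=80):
    """Map the aggregate risk score to one of three actions."""
    score = risk_score(signals)
    if score >= block_at:
        return "block"
    if score >= challenge_at:
        return "challenge"  # e.g., present a CAPTCHA or a step-up authentication
    return "allow"
```

The key property is the graduated response: a clean request sails through with no friction (`adaptive_response([])` returns `"allow"`), while only anomalous combinations of signals trigger a challenge or an outright block.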
Two and a half years later, Google has announced updates to its reCAPTCHA capabilities – which it refers to as “the No CAPTCHA reCAPTCHA experience”, in its video and blog – with pretty much these exact capabilities. I suppose this is exactly what should have been expected … i.e., the small companies develop new and innovative technologies, and the big companies acquire or duplicate the approaches that demonstrate results. My colleague, Jim Rapoza, commented on the new reCAPTCHA in his news article for InformationWeek.
To me, however, not much seems very different or newsworthy about this announcement – except for the following:
- The adaptive / risk-based technology is now endorsed, implemented and supported by Google – which means that we can expect it to be adopted much faster and more broadly than before.
- Google has flipped the primary question being asked on its head, from “are you a human?” to “are you a robot?” – and I have to say, I find this to be a brilliant blend of technology and psychology that results in a more positive user experience. Before: we had to prove that we were worthy of completing a transaction, while dealing with the frustration of the friction introduced by the technology. After: we are partners in protecting our transaction from attackers, and we are only inconvenienced if something is out of the ordinary. But at all times, we get the positive message that something proactive is being done, for our benefit. Again, very clever.
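For readers curious about what adopting the new reCAPTCHA looks like, the server-side half of the flow is a simple POST of the user’s token to Google’s verification endpoint. The endpoint URL and field names below are from Google’s published reCAPTCHA API; the helper function names are my own, and the network call of course requires a real site secret:

```python
# Sketch of server-side reCAPTCHA verification, assuming Google's
# published siteverify endpoint and field names.
import json
from urllib import request, parse

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_recaptcha(secret, token, remote_ip=None):
    """POST the user's reCAPTCHA token (the 'g-recaptcha-response'
    form field) to Google's verification endpoint and return the
    decoded JSON reply. Requires network access and a real secret."""
    fields = {"secret": secret, "response": token}
    if remote_ip:
        fields["remoteip"] = remote_ip
    data = parse.urlencode(fields).encode("utf-8")
    with request.urlopen(VERIFY_URL, data=data) as resp:
        return json.loads(resp.read().decode("utf-8"))

def is_human(verify_reply):
    """Interpret the verification reply: the 'success' field says
    whether the token was valid for this site."""
    return bool(verify_reply.get("success"))
```

Note that the risk analysis itself happens entirely on Google’s side; the application only sees the verdict, which is what makes the “No CAPTCHA” experience frictionless for low-risk users.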
As a postscript, I had always thought that Isaac Asimov’s Three Laws of Robotics – and their subsequently added prequel, the so-called Zeroth Law – were supposed to protect us humans from the robots. Unfortunately, the humans who build these particular robots haven’t subscribed to those standards.
For more on the topic of IT security, read the Aberdeen report Insider Threat: Three Activities to Worry About, Five Ways They’re Allowed to Happen – and What Enterprises Can Do About It.
Image credit: Chris Isherwood