I sometimes find myself slightly irritated by having to prove that I am a human online, most often by using the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA).
You've all had to use them at some point: those funny, distorted versions of a piece of text that only a human can decipher.
I content myself with the fact that these do provide a degree of security against web bots, and that maybe my irritation has more to do with my eyesight than anything else. And so I have lived with this sense of security for some time.
I then noticed that CAPTCHAs were being used by some financial institutions, in some cases as part of a transaction verification process, which I assumed was again for protection against automated attack. Although not common in the UK, various banks in countries such as the US, Germany, China and Switzerland make extensive use of CAPTCHAs.
And this is not just a few banks: these CAPTCHAs are potentially used by over a hundred million customers. Their use in eBanking made me wonder just how vulnerable they were to attack: is it really only a human that can decipher them? Would a computer not be just as good as a myopic Professor?
I was shocked when my friend Dr Shujun Li told me that not only were CAPTCHAs vulnerable but that he had demonstrated how you could successfully attack nearly 100% of those he had found in eBanking.
So, how does the attack work? By combining a series of image and text processing techniques that have been known for a long time:
- Segmenting objects from the CAPTCHA image
- Removing noise and decoy objects, and refining the shapes of the segmented objects
- Detecting the random grid lines used in some e-banking CAPTCHA schemes
- Inpainting to remove unwanted objects from the CAPTCHA image
- Segmenting individual characters out of the CAPTCHA image
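To give a flavour of the first two steps, here is a minimal sketch of binarisation, connected-component extraction and noise removal. This is not the paper's actual implementation; the function name, thresholds and toy image are invented for illustration, under the assumption of dark characters on a light background.

```python
from collections import deque
import numpy as np

def segment_characters(img, threshold=128, min_pixels=5):
    """Binarize a grayscale image, then extract connected components as
    candidate characters, discarding tiny blobs as noise.
    A simplified illustration, not the paper's pipeline."""
    binary = img < threshold                 # dark ink on light background
    visited = np.zeros_like(binary, dtype=bool)
    boxes = []
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not visited[y, x]:
                # BFS flood fill to collect one connected component
                queue = deque([(y, x)])
                visited[y, x] = True
                pixels = []
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                # Noise removal: drop specks too small to be a character
                if len(pixels) >= min_pixels:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    boxes.sort()  # left-to-right reading order
    return boxes

# Toy "CAPTCHA": two dark blobs plus a one-pixel speck of noise
img = np.full((10, 20), 255, dtype=np.uint8)
img[2:8, 2:5] = 0     # first "character"
img[2:8, 10:14] = 0   # second "character"
img[9, 19] = 0        # speck, filtered out by min_pixels
print(segment_characters(img))  # → [(2, 2, 4, 7), (10, 2, 13, 7)]
```

Each returned box could then be fed to a simple classifier; the paper's point is precisely that nothing in this chain requires novel research.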
Hence, having recovered the text from a CAPTCHA presented as, say, part of a transaction verification process, a piece of malware could simply submit that text to the required section of the bank's web page and thus appear to be a valid human.
Obviously this form of forging CAPTCHAs has to be used in particular attack scenarios, but these were tried and succeeded. A piece of malware would have to conduct a man-in-the-middle attack, which is still quite rare but gaining a foothold. For a full description of the attack scenarios, go and read the paper.
The attack was tested against a large range of eBanking systems, and the results show quite conclusively that effectively 100% could be compromised. When the attack was run on a standard laptop, one attack scenario took only 150 ms to complete successfully.
Hence, a computer can not only do what a myopic Professor can, it can do it a lot faster. Imagine this in an automated mass attack. Imagine a trojan on machines in, for example, China, stealing a few pennies from millions of customers.
Imagine what the bank's response might be. Might the bank not be tempted to say that it must have been a human that was involved, and hence that you must have revealed your password? After all, it can't have been a machine, because of the CAPTCHA process.
But the thing that disturbs me a lot more is that Dr Li did this research some considerable time ago, published it, and told those affected, yet those very same financial institutions continue to use this method.
I don't intend to say who those institutions are here, but if you are such an institution, please take note of this research and move to something like hardware tokens or two-factor authentication. Nothing is perfect, but it is clear that CAPTCHAs are not the answer.
Cross-posted from Professor Alan Woodward