Security by obscurity is bad enough, but security by unusability is worse. Not only does it frustrate customers and potentially drive them away to friendlier rivals, but if users do stay — perhaps because they are employees — it can make your systems less secure, not more. That’s because users will find ways to circumvent security simply to get their jobs done.
While security by unusability is a risk for any organization, it is a notable plague among banks and the like. In their attempts to allay fears around cybercrime and identity theft, many banks in Europe — where chip & PIN smartcards are commonplace — have gone to extremes. They require customers to install bloated and crash-prone security software, for instance, or to carry around a passcode-generator, or to enter multiple codes across multiple devices.
All of these things probably seem reasonable when you are a security developer whose PC is not used for much else, or when you only need to use them every now and then. But for the rest of us, who have more and better things to do, and whose PC might need to deal with several different banks, they are a sign that the bank — or perhaps its security team — is more concerned with evading liability than with the customer experience.
For example, my bank brought in passcode-generators. These resemble a calculator, with a slot for my smartcard and a pad for my PIN, and must be used not only to set up new payments, which would be fair enough, but also to pay existing, already-verified arrangements. I’m now safely protected against evil criminal schemes where a perp hacks into my account and pays my taxes and credit card bills for me. Security theater at its finest.
I no longer use that bank account. Also gone is a bank that required me to enter a long numeric user ID plus a numeric passcode, and blocked the web browser’s auto-fill feature, all of which pretty much guaranteed that I had to write those numbers down — and therefore make myself rather than the bank liable for any security breach. Similarly, a friend is considering a move after his bank changed its systems, bringing him a new login process:
- Enter Windows password on computer. Navigate to bank website
- Get prompted for answer to secret question
- Enter password manager and retrieve answer to secret question
- Get prompted for new Digital Secure Key
- Find iPhone. Enter iPhone PIN. Fire up banking app
- Get prompted for Digital Secure Key password
- Go back to my PC to remind myself what the Digital Secure Key password was
- Return to iPhone. Re-enter iPhone PIN because it has locked
- Enter Digital Secure Key password on iPhone
- Get an eight-digit number back
- Enter this on my PC
This is perhaps an extreme example — my friend, being a security geek, had generated a non-memorable 32-character string of random characters as his ‘secret answer’, for example. But is the system any safer if he now has to switch to weaker passwords just to make it usable?
Another friend is so fed up with his company’s bank pushing him to install its security software that he too is planning a change of bank. Sure, he could install it in a virtual machine — it is notorious for conflicting with other apps — but he wants to be able to bank from anywhere, not be tied to a particular PC.
Let’s be generous, and assume that all these devices really are intended for our benefit, and not merely the bank’s. In that case, all these developers seem to have forgotten a basic tenet: good security is easy security, or at least easy-to-use security. Ideally it should not even be visible to the user most of the time, although this might be asking a bit too much in banking.
So when you are discussing different security mechanisms, whatever kind of organization you work for, ask yourself a few questions: who and what are we really protecting here? What are the real risks? Is it likely to win or lose us customers? And if I had to do this 10, 20 or even 100 times a day, what security-breaching lengths would I go to in order to make it easier for myself?