What a lot of fuss RSA’s security breach has caused! And what a lot of fear and uncertainty and doubt still surrounds it!
In case you haven’t been following the story, it began in mid-March 2011, when RSA admitted that its security had been breached and that “certain information [was] extracted from RSA’s systems.” Some of that information was specifically related to RSA’s SecurID products; the CEO admitted that “this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation.”
The CEO, Arthur Coviello, also assured everybody that the company was “very actively communicating this situation to RSA customers.”
I thought this was a good start, even though it raised more questions than it answered.
An admission and an apology go a long way – provided that they are quickly followed by genuinely useful information which explains how the problem arose, what holes it introduced, how those holes can be closed, and what is being done to prevent anything like it from happening again.
But RSA’s version of “very actively communicating” with its customers didn’t go that way. We still don’t really know what happened. We don’t know what holes were opened up because of the attack. And RSA customers still can’t work out for themselves what sort of risk they’re up against. They have to assume the worst.
What we do know is that US engineering giant Lockheed Martin subsequently suffered an attempted break-in. Lockheed stated that the data stolen from RSA was a “contributing factor” to its own attack, and RSA’s Coviello agreed:
[O]n Thursday, June 2, 2011, we were able to confirm that information taken from RSA in March had been used as an element of an attempted broader attack on Lockheed Martin, a major U.S. government defense contractor. Lockheed Martin has stated that this attack was thwarted.
Additionally, as I reported yesterday, RSA is offering to replace SecurID tokens for at least some of its customers.
What’s fanning the flames in the technosphere is this: why would replacing your existing tokens with more of the same from RSA make any difference?
Because RSA has offered to replace tokens, speculation seems to be that the crooks who broke into RSA got away with a database linking tokens to customers in such a way that tokens for each company could be cloned. With that database, an attacker would only need to work out which employee had which token in order to produce the right “secret number” sequence.
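To see why a stolen seed database would be so damaging, consider how a time-synchronous token works. RSA’s actual SecurID algorithm is proprietary, so the sketch below is an analogy only, in the style of the public HOTP/TOTP standards (RFC 4226/6238), with an invented seed. The point it illustrates is real either way: the code displayed on the token is a pure function of the secret seed and the clock, so anyone holding a copy of the seed can reproduce the token’s entire output sequence.

```python
# Toy time-synchronous token in the style of RFC 4226/6238 (HOTP/TOTP).
# This is an analogy only: SecurID's real algorithm is proprietary and
# different. It shows why the seed is the crown jewel -- the displayed
# code is a pure function of (seed, time).
import hashlib
import hmac
import struct

def token_code(seed, unix_time, step=60, digits=6):
    """Derive the displayed code from the secret seed and the clock."""
    counter = unix_time // step                        # time-step counter
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%0*d" % (digits, value % 10 ** digits)

# Anyone holding the seed computes the same code the hardware displays:
print(token_code(b"stolen-seed-bytes", 1307500000))
```

Note that there is no challenge from the server and no state on the token beyond its clock: seed in, code out.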
That, the theory goes, lets you mount an effective attack. It goes something like this.
To tie a token to a user, use a keylogger to grab one or more of the user’s token codes, along with his username, network password, and token PIN. (The token PIN is essentially a password for the token itself.)
You can’t reuse the token code, of course – that’s why the person you’re attacking chose to use tokens in the first place – but you can use it to match the keylogged user with a token number sequence in your batch of cloned customer tokens.
So you now have a soft-clone of the user’s token. And, thanks to the keylogger, you have their username, password and PIN. Bingo. Remote login.
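The matching step in that theory can be sketched in a few lines. Everything here is hypothetical: the serial numbers and seeds are invented, and the TOTP-style derivation stands in for RSA’s proprietary scheme. The attacker simply tries every stolen seed against the keylogged code and its timestamp, allowing for clock drift, until one seed reproduces it.

```python
# Sketch of matching a keylogged one-time code against a stolen seed
# database. Entirely hypothetical: the seeds, serials and the TOTP-style
# derivation are invented stand-ins for RSA's proprietary scheme.
import hashlib
import hmac
import struct

def token_code(seed, unix_time, step=60, digits=6):
    """Toy time-synchronous code derivation (in the style of RFC 4226/6238)."""
    mac = hmac.new(seed, struct.pack(">Q", unix_time // step), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%0*d" % (digits, value % 10 ** digits)

def identify_token(stolen_seeds, logged_code, logged_time, step=60, window=2):
    """Return serials whose seed reproduces the keylogged code.

    window allows for drift between the token's clock and the keylogger's
    timestamp; each extra step tried raises the false-match rate slightly.
    """
    hits = []
    for serial, seed in stolen_seeds.items():
        for drift in range(-window, window + 1):
            if token_code(seed, logged_time + drift * step) == logged_code:
                hits.append(serial)
                break
    return hits

# Demo with three invented seed records; "000123" is the victim's token.
seeds = {"000123": b"seed-A", "000124": b"seed-B", "000125": b"seed-C"}
keylogged = token_code(seeds["000123"], 1307500000)  # what the keylogger saw
print(identify_token(seeds, keylogged, 1307500000))
```

Once a seed matches, the attacker has the soft-clone: combined with the keylogged username, password and PIN, that is everything the remote login requires.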
I don’t accept this speculation as complete.
Even if it were the method used in the Lockheed attack, why would I accept that it's a sufficient explanation? And even if it were, why would I accept – in the absence of any other information from RSA – that the same thing won't happen again? Is RSA now offering to stop retaining data which makes it possible for an intruder into its network to compromise mine? Why wouldn't it do that anyway?
More confusingly, if the only practicable attack requires an attacker to keylog the PIN of a user’s token, why is the entire SecurID product range considered at risk?
RSA sells tokens in which the PIN is entered on the token itself, which is equipped with a tiny keypad. Those PINs can’t be keylogged.
So why isn’t RSA stating that its more upmarket tokens are safe? Users of those devices could immediately relax. Or is RSA unwilling to make those claims because there are other potential attacks against its devices which might be mounted by attackers equipped with the stolen data?
Perhaps this token-to-customer mapping database theory is a red herring? After all, there might be other trade secrets the attackers made off with which would facilitate other sorts of attack.
For example, a cryptanalytical report might show how to clone tokens without any customer-specific data. Or confidential engineering information might suggest how to extract cryptographic secrets from tokens without triggering any tamper-protection, allowing them to be cloned with just brief physical access.
In short, the situation is confused because RSA hasn’t attempted to remove our confusion.
It’s no good having mandatory data breach disclosure laws if all they achieve is an admission that a breach took place. Disclosure also needs to convey information of obvious practical value to all affected parties. I’ll repeat my earlier list. When disclosing breaches, we need to explain:
* How the problem arose.
* What holes it introduced. (And what it did not.)
* How those holes can be closed.
* What is being done to prevent it from happening again.
Three words. Promptness. Clarity. Openness.
by Paul Ducklin on June 8, 2011