A recent article in The Economist, ‘If financial systems were hacked: Joker in the pack’, describes a plausible attack on the financial system. I liked this article, although I think it was a little naïve in two ways.
Firstly, it wasn’t clear enough that the ‘recover from a serious incident in two hours’ claim is fantasy. Of course everyone would like to be able to do that, and will state to regulators that they can, and perhaps some people in the organisations concerned really believe it. And there are mechanisms in place (DR systems, business continuity volumes and so on) which, for a suitably nice incident, will indeed allow very rapid recovery if everyone is on the ball. But for the sort of incidents described in the article — for instance an incident where you don’t trust your data and soon realise that all your backups for some unknown but long interval are also suspect — the recovery time is likely to be much longer than two hours. Indeed, the important question would be whether recovery is possible at all. There have been much smaller incidents, not caused by malice, where complete recovery was never achieved in the sense that some transactions were lost altogether: there is no reason to assume that full recovery is even possible from a really major attack.
Secondly, and more seriously, the article perpetuates the myth of ‘state-sponsored actors’: the assumption being that only with the resources of a state would such an attack be possible, and since even malignant states have no interest in this kind of chaos these attacks are not a real worry. This is a touchingly 1950s view: although everyone knows how to make, say, a fission weapon, to actually make one you need to be able to mine huge quantities of ore, run vast numbers of centrifuges and so on, and do this secretly and securely, and only states have that kind of ability. The argument seems to be that breaking into computer systems is somehow a similarly industrial enterprise: perhaps you need vast caverns with serried ranks of hacker drones, relentlessly typing billions of lines of code, or enormous super-powerful computers to brute-force encryption. Well, of course, you don’t: you need a small number (possibly one) of sufficiently motivated people with the right skills who can find and exploit a weakness — probably a human weakness — in the system, rather than launching the primitive industrial-scale brute-force attack the article seems to imagine. And while states may not be interested in chaos, these tiny groups may well be.
In summary: it’s a good article but it understates the consequences of such attacks, and misrepresents the likely attackers in a way which makes such attacks seem much less plausible.
I hope that these confusions exist only in the minds of journalists, but I fear that the people actually responsible for the security of financial infrastructure also believe them, or at least pretend to, since such beliefs are very convenient. I have certainly heard both myths repeated by people who ought to know better.
This is derived from a comment I made on an article in Bruce Schneier’s blog, in turn based on some personal experience in the financial services industry.