The incident is actually both serious and significant in its implications for the rest of us, but it's all too easy for an Australian news report to get lost in the hubbub that is international security news.
Which may explain why Microsoft's recent and highly publicized survey on cold-calling support desk scams, though extremely interesting, missed a couple of pertinent facts, namely that (a) Australia is a hotspot for that kind of scam – and I'm now starting to see reports of similar incidents in countries where English is not the native or primary language – and (b) it's unsafe to assume that all cold-calling about security problems has to be a scam, when legislative trends in Australia and elsewhere actually legitimize intervention when a system is identified as compromised. But that's a topic I'll undoubtedly return to.
While the Microsoft survey indicates some substantial financial losses to individuals as a result of the cold-calling scams (well into four figures, in some cases), it's still only money. Business data may be harder to put a dollar figure on, but its loss can mean the difference between viability and collapse for a business, large or small.
So for 4,800 customers of Distribute.IT, an Australian domain registrar and web hosting provider, the news that data, sites and emails on four of its servers, named Drought, Hurricane, Blizzard and Cyclone -- I guess they were doomed from the start -- are almost certainly “unrecoverable” may be very bad news indeed.
The damage was done in an attack the previous week, and prompted comments from customers such as “This new outage has probably killed my business” and “My business can't sustain any more downtime”, even before the company disclosed the full extent of the damage.
I sometimes cite, as a horrible example, the experience of a friend whose reputation was somewhat blotted by an incident in which every PC in the unit was backed up to a server, but the server itself wasn't backed up.
It seems that Distribute.IT has had to learn the same lesson the hard way, and it's now facing questions as to why it apparently wasn't using off-site backups.
A fair question, but there's a lot more to disaster recovery than off-site backups. If there's one thing this incident makes clear, it's that when you outsource business assets like your website and mail server to a relatively small provider, you can't automatically expect the same standards of hardware and network redundancy, data protection and backup that you might expect from a full-blown disaster recovery/business continuity provider with hot sites and contracted service level agreements (SLAs).
But you probably do expect some provision for backup.
Still, I don't think that's a sufficient reason not to bother with your own backups. I nearly said something snarky here about remote backup facilities that don't always take the best care of their own authentication mechanisms.
However, rather than looking for the nonexistent provider that never makes a mistake, there's something to be said for looking at how well a provider handles and learns from its public mistakes. Unless you're a large organization with the ability to tie down services beyond your own perimeter in contractual detail, you may never know how fluffy a provider's security is until there's a rainstorm.
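For what it's worth, here's a minimal sketch of the kind of independent, dated backup I have in mind. The paths and retention period are assumptions made purely for the sake of the example, not anything specific to Distribute.IT or its customers:

```python
#!/usr/bin/env python3
"""Illustrative sketch only: keep your own dated copies of critical data
somewhere your hosting provider's outage can't reach. All paths and the
retention period below are assumptions made for this example."""

import tarfile
from datetime import datetime, timedelta
from pathlib import Path

SOURCE = Path("/srv/critical-data")       # data you can't afford to lose (assumed path)
OFFSITE = Path("/mnt/offsite-backups")    # e.g. a mounted remote share (assumed path)
KEEP_DAYS = 30                            # arbitrary retention period for the sketch


def make_backup() -> Path:
    """Create a timestamped, compressed archive of SOURCE under OFFSITE."""
    OFFSITE.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = OFFSITE / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(SOURCE), arcname=SOURCE.name)
    return archive


def prune_old_backups() -> None:
    """Delete archives older than KEEP_DAYS so the off-site store doesn't grow forever."""
    cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
    for old in OFFSITE.glob("backup-*.tar.gz"):
        if datetime.fromtimestamp(old.stat().st_mtime) < cutoff:
            old.unlink()


if __name__ == "__main__":
    print(f"Wrote {make_backup()}")
    prune_old_backups()
```

The point isn't this particular script, of course; it's that copies of your critical data should exist somewhere that your provider's worst day can't reach.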