
A Taste of Disaster was Enough for Me

AUTHORED BY MARK JOHNSON, VICE PRESIDENT, MANAGED SERVICES @ GUIDEIT

In the IT services business, when we think about Disaster Recovery (DR) it’s almost always in the context of business infrastructure and applications, for obvious reasons. But recently I experienced one of the strangest coincidences involving a different form of DR, one that made it definitely time to add to the GuideIT blog again.

It started on my drive home when, right in the middle of a chat with my wife, the cell service cut out. Dropped calls are not unusual of course, but this was a full outage. Annoying, yes, but concerning? Not really. Then I arrived home to find no cable, no phone, no internet. Again, not all that unusual on its own, but paired with the cell outage it soon dawned on me that we were cut off from communicating with the outside world. That’s literally the first time that’s ever happened to us in the internet/cell phone age. It turns out there was a major regional carrier outage, one that even affected the ATMs in the area. No big deal all in all; grab a book and relax, right? But it does make one think…

The next day rolled around, and I was scrolling through my Twitter feed when I started to see angry Tweets directed at my bank/auto insurer. Now, this particular institution is truly renowned for its IT organization, frequently earning accolades for its innovation, processes, leadership, etc. But something was clearly amiss. People were Tweeting that they could not access their funds: no cash from ATMs, no ability to use their debit cards. The issues ranged from the serious, like people waiting on prescriptions at pharmacies with no way to pay, to stranded travelers, to the entertaining, most notably a college football fan who raged over and over on Twitter, “I just want my Papa Johns!” In my opinion, the institution in question handled the event poorly from a social media standpoint. Aside from not communicating nearly enough, they blamed the incident on a “system upgrade” that took longer than expected, something we all know simply can’t happen in a production banking environment, and some members called them out on it. Whether that’s the real reason, who knows? I have yet to receive any member communication acknowledging that this event even happened, explaining why, or describing what they’re doing to ensure it never happens again.

But this isn’t really about a terribly embarrassing system outage. For me, the lesson was that in the space of 24 hours I realized I am far too reliant on electronic communications with no family disaster plan, and that I am tied to a single financial institution with no ready access to cash. I’ll be addressing both. And then there is the business aspect of this “series of unfortunate events.” We often assume that in a disaster recovery scenario we simply have to execute our DR plan, restore connectivity and data, and things will return to some semblance of normal. These two incidents drove home something we know intuitively but often don’t emphasize: we simply can’t stop at DR. We have to put real thought and real effort into true Business Continuity (BC) as well, something that is too often glossed over.

In a strange way that I can’t quite put my finger on, disaster recovery without real business continuity seems an awful lot like that college football fan holding a working cell phone who simply wants his Papa Johns.
