Asynchronous vs. synchronous. Dark disaster recovery vs. active architecture. Active/active vs. active/passive. No setup is objectively better or worse than another. The best one for you depends primarily on your tolerance for what happens when a server goes down.
Security experts say how individual companies choose to save their data in anticipation of an outage depends on how long they can survive before the “lights” are turned back on. What level of availability does your company need? Is the face of your company an ecommerce site where even a few minutes offline can cost an astronomical sum? Will the cost of an active-active system outweigh the potential loss of business from an outage?
“It isn’t about one being more efficient than the other. More to the point of what needs are you trying to solve for. For example, buying a Ferrari to get groceries will get the job done, but is it really fit for purpose?” says Don Foster, senior director of solutions marketing and technical alliances at Commvault.
In an active/active architecture, a cluster of offsite servers is typically synchronized with the onsite server, so there is no downtime when a disaster knocks one server offline. The system can be configured to fail over automatically. This setup also requires less hardware, because all the systems across both sites are in use, versus only half the hardware in a dark disaster-recovery scenario. If you had 48 cores of dark disaster recovery, you’d have 96 total cores and use only 48. In active/active mode, you can scale back to 32 x 2, for 64 cores, and all 64 are active.
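The core-count comparison above can be sketched as a small calculation. This is an illustrative snippet only, using the hypothetical figures from the example (a 48-core dark-DR pair versus a 32 x 2 active/active pair), not a tool or formula from any vendor:

```python
# Illustrative arithmetic: provisioned vs. utilized cores in a
# dark-DR setup compared with an active/active setup.

def utilization(active_cores: int, total_cores: int) -> float:
    """Fraction of provisioned cores doing useful work."""
    return active_cores / total_cores

# Dark DR: a full 48-core replica sits idle alongside 48 live cores.
dark_total = 48 * 2   # 96 cores provisioned across both sites
dark_active = 48      # only the primary site serves traffic

# Active/active: two 32-core sites, both serving traffic.
aa_total = 32 * 2     # 64 cores provisioned
aa_active = 32 * 2    # all 64 are in use

print(f"Dark DR:       {dark_active}/{dark_total} cores used "
      f"({utilization(dark_active, dark_total):.0%})")
print(f"Active/active: {aa_active}/{aa_total} cores used "
      f"({utilization(aa_active, aa_total):.0%})")
# → Dark DR:       48/96 cores used (50%)
# → Active/active: 64/64 cores used (100%)
```

The trade-off the numbers capture: dark DR buys a fully idle safety net at 50 percent utilization, while active/active puts every provisioned core to work.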
In a dark disaster-recovery scenario, the capacity is an entirely redundant system – all the hardware and software ready to go – that sits completely idle. That capacity is not used at all until the primary site fails, though data is replicated to it at regular intervals.
Erin Swike, senior cloud solution architect at Bluelock, explains that “active/active disaster recovery is the unicorn of the DR world. The idea of being able to sleep at night knowing that, should your production site fail, your DR site will automatically start serving up applications to users without a single packet lost or moment of downtime, is the nirvana of any CIO or system engineer.
“For most, it remains the thing of fairytales and legends. Forget about the obvious factor of data center proximity and network latency; one of the most important factors is whether your applications are written to support this type of scenario. Unless an application was written with this in mind from the beginning, odds are that it can’t support it,” she said.