Over the years, I have done a lot of work related to backups and various other forms of disaster recovery. In doing so, I have found that there are a number of fundamental concepts that seem to hold true across organizations of nearly every size. This article discusses five such concepts.
1. You Need at Least Three Copies of Your Data
The first bit of advice that I give anyone who asks about backups is that in order to be completely protected, you need at least three copies of your data. The first copy is your live data. Live data is the data that is actively in play on your network, and is used on a day-to-day basis.
The second copy of your data should be a local backup. It is extremely important to have a backup that is easily accessible, and that resides within your own data center.
The third copy should be a remote backup. Ideally, this should be a backup either to a public cloud or to a remote data center. In a pinch, the requirement for a third copy of the data can be fulfilled by creating a tape-based backup and shipping those tapes to a safe location. Regardless of the method, however, a remote backup is the only way to protect your data against a data center-level disaster.
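To make the idea concrete, here is a minimal sketch in Python of what keeping three copies might look like in the simplest possible case. The paths, the remote host, and the use of rsync over SSH are all assumptions for illustration; in practice, this job belongs to purpose-built backup software.

```python
"""Minimal sketch of the "three copies" idea: live data, a local backup,
and a remote backup. Paths, hostnames, and the use of rsync are
illustrative assumptions, not any product's behavior."""

import shutil
import subprocess
from datetime import datetime
from pathlib import Path

LIVE_DATA = Path("/srv/app/data")          # copy 1: live data (assumed path)
LOCAL_BACKUP_ROOT = Path("/backup/local")  # copy 2: local, easily accessible
REMOTE_TARGET = "backup@dr-site.example.com:/backup/remote/"  # copy 3: off site (hypothetical host)


def local_backup() -> Path:
    """Create a timestamped local copy of the live data set."""
    dest = LOCAL_BACKUP_ROOT / datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copytree(LIVE_DATA, dest)
    return dest


def remote_backup(local_copy: Path) -> None:
    """Ship the local copy off site (assumes rsync over SSH is available)."""
    subprocess.run(["rsync", "-az", f"{local_copy}/", REMOTE_TARGET], check=True)


if __name__ == "__main__":
    remote_backup(local_backup())
```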
2. Backups Are Only as Good as Their Recoverability
Lesson number two is that a backup is only as good as its recoverability. We’ve all heard stories about organizations that, for whatever reason, were unable to restore a backup at the very moment it was desperately needed. These stories usually result from a technical glitch in the way the backup software was configured. For example, I was once called in to assist an organization that needed to restore a backup of its Exchange Server. The problem was that the organization had upgraded to a newer version of Exchange without updating its backup software in the process. Consequently, the backup software did not properly protect that version of Exchange Server.
Technical glitches are not the only reason why a backup might not be recoverable when it is needed. I have also seen organizations that have invested in high-end backup solutions only to discover that the software was complex and that nobody in the organization really knew how to perform a restoration.
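One way to catch both kinds of problems is to script a periodic test restore and verify the results. The sketch below restores into a scratch directory and compares checksums against the live copy; the restore command itself is a placeholder for whatever your backup product actually provides.

```python
"""Sketch of a scheduled restore test: restore into a scratch directory and
verify checksums against the live copy. The restore command is a placeholder;
substitute whatever your backup product actually uses."""

import hashlib
import subprocess
from pathlib import Path

SOURCE = Path("/srv/app/data")   # what the backup is supposed to protect
SCRATCH = Path("/restore-test")  # throwaway restore target


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def run_restore_test() -> bool:
    # Placeholder: invoke your backup tool's actual restore command here.
    subprocess.run(["my-backup-tool", "restore", "--target", str(SCRATCH)], check=True)

    # Compare every restored file against the live copy.
    mismatches = []
    for src_file in SOURCE.rglob("*"):
        if src_file.is_file():
            restored = SCRATCH / src_file.relative_to(SOURCE)
            if not restored.exists() or sha256(restored) != sha256(src_file):
                mismatches.append(src_file)
    return not mismatches


if __name__ == "__main__":
    print("restore test passed" if run_restore_test() else "restore test FAILED")
```

Just as important, a test like this forces someone on staff to actually walk through the restoration process before a crisis, which addresses the second failure mode described above.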
3. Even a Good Backup Can Result in Substantial Downtime
Lesson number three is something that is easy to overlook: even a best-case recovery can result in substantial downtime. Imagine, for example, that your data center is struck by lightning, resulting in severe hardware damage and complete data loss. How long is it going to take the organization to recover from that type of disaster?
Unless the organization has a closet full of spare hardware, acquiring new servers will likely take some time. Even if you take the hardware out of the equation, however, it can still take a considerable amount of time to restore all of the data, even if everything goes perfectly with the restoration. As such, mission-critical systems would remain offline until the restoration process completes.
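A quick back-of-the-envelope calculation shows why. The data size and throughput figures below are assumptions; plug in your own numbers.

```python
"""Back-of-the-envelope restore time estimate. The figures used in the
example are assumptions, not benchmarks."""


def restore_hours(data_tb: float, throughput_mb_per_s: float) -> float:
    """Hours needed to stream data_tb terabytes at the given sustained rate."""
    total_mb = data_tb * 1024 * 1024  # TB -> MB (binary units)
    return total_mb / throughput_mb_per_s / 3600


# Example: 20 TB restored at a sustained 500 MB/s is still roughly half a day.
print(f"{restore_hours(20, 500):.1f} hours")  # ~11.7 hours
```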
Several of the newer backup applications offer instant recovery capabilities based on differencing-disk snapshots. These applications allow mission-critical servers to be brought back online immediately and used while the recovery process runs in the background. It is worth noting, however, that although the data and applications might remain available throughout the recovery process, performance is often impacted. There is also a potential for such systems to fail if the data loss event were to impact the entire data center.
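Conceptually, instant recovery works something like the simplified sketch below: the recovered workload writes to a thin overlay while unrestored blocks are still read from, and gradually migrated out of, the backup image. This is an illustration of the general technique, not any particular vendor's implementation.

```python
"""Conceptual sketch of differencing-disk instant recovery. Reads fall through
to the backup image until each block has been migrated; new writes land in an
overlay. A simplification for illustration only."""


class InstantRecoveryDisk:
    def __init__(self, backup_image: dict[int, bytes]):
        self.backup_image = backup_image  # blocks still held by the backup store
        self.overlay = {}                 # blocks written since recovery began
        self.migrated = {}                # blocks already copied to production storage

    def read(self, block: int) -> bytes:
        # Overlay wins, then migrated blocks, then the backup image itself --
        # that last hop is why performance suffers during recovery.
        if block in self.overlay:
            return self.overlay[block]
        return self.migrated.get(block, self.backup_image.get(block, b""))

    def write(self, block: int, data: bytes) -> None:
        self.overlay[block] = data  # writes never touch the backup image

    def migrate_one(self, block: int) -> None:
        """Background restore: move one block onto production storage."""
        if block in self.backup_image and block not in self.overlay:
            self.migrated[block] = self.backup_image.pop(block)
```

The sketch also shows why such systems remain exposed during a data center-wide event: until the background migration finishes, the recovered workload still depends on the backup store being reachable.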
4. Backups Are Not Set It and Forget It
Lesson number four is that backups are not set it and forget it. Sure, backups need to be monitored and periodically tested, but there is more to it than that. Your data protection needs are almost certain to change over time. Backup systems must be periodically evaluated to make sure that they are providing the required level of protection.
To give you a more concrete example, imagine what would happen if you added a new virtual machine that ran a mission-critical line-of-business application. Would your backup software automatically detect the new virtual machine and back it up? Depending upon how your backups are configured, it might, but that is far from guaranteed. Reviewing the backup logs and periodically performing recovery tests are the only ways to guarantee that you are truly protecting the systems you think you are protecting.
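A simple coverage check can help here. The sketch below compares the hypervisor's virtual machine inventory against the machines that actually appear in backup jobs; both helper functions are placeholders for whatever your hypervisor and backup product expose.

```python
"""Sketch of a coverage check: compare the hypervisor's VM inventory against
the VMs the backup jobs actually protect. Both helpers are placeholders, and
the VM names are invented."""


def vms_in_inventory() -> set[str]:
    # Placeholder: query the hypervisor's management API or CLI.
    return {"dc01", "sql01", "app01", "new-lob-vm"}


def vms_in_backup_jobs() -> set[str]:
    # Placeholder: parse backup job definitions or recent job logs.
    return {"dc01", "sql01", "app01"}


unprotected = vms_in_inventory() - vms_in_backup_jobs()
if unprotected:
    print("VMs with no backup job:", ", ".join(sorted(unprotected)))
```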
5. Backup Reporting Is More Important than Ever Before
Lesson number five is that backup reporting is more important than it has ever been. I’ve already talked about the importance of periodically reviewing backup logs, but there is another reason why reporting matters more than it used to.
In the not-too-distant past, data tended to be centralized. Most of an organization’s important data existed inside of the data center, where it could be centrally backed up with relative ease. Today this is simply not the case. Although data still exists within the data center, it may also be scattered across various mobile devices, cloud services, and other locations.
Depending upon the locations and types of data that you need to protect, you may discover that no single backup application can adequately protect everything, and you may have to resort to using multiple backup applications. In these situations, it is a good idea to invest in a backup reporting tool that can monitor the various backup applications, compile a centralized report, and flag any problems.
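As a rough illustration, a consolidated report can be as simple as the sketch below, which rolls up job results from several backup applications into a single summary. The application and job names are invented; in practice, each product's API or log format would need its own parser.

```python
"""Sketch of a consolidated backup report built from several backup
applications' status exports. All names and statuses are made up."""

from dataclasses import dataclass


@dataclass
class JobResult:
    application: str
    job: str
    succeeded: bool


def consolidate(results: list[JobResult]) -> None:
    """Print a one-line summary plus any failures that need attention."""
    failures = [r for r in results if not r.succeeded]
    print(f"{len(results) - len(failures)}/{len(results)} jobs succeeded")
    for r in failures:
        print(f"  FAILED: {r.application} / {r.job}")


consolidate([
    JobResult("datacenter-backup", "exchange-nightly", True),
    JobResult("endpoint-backup", "laptop-sync", False),
    JobResult("saas-backup", "m365-mailboxes", True),
])
```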
VLCM works with VMware and other platinum partners to provide disaster recovery as a service (DRaaS) through the cloud. View the resources below to learn more about vCloud Air. Also download a whitepaper outlining the Top Five Reasons for Disaster Recovery in the Cloud.
About Brien Posey
Brien Posey is a freelance technical writer who has received Microsoft's MVP award six times for his work with Exchange Server, Windows Server, IIS, and File Systems Storage. Brien has written or contributed to about three dozen books, and has written well over 4,000 technical articles and white papers for a variety of printed publications and Web sites. In addition to his writing, Brien routinely speaks at IT conferences and is involved in a wide variety of other technology-related projects. Prior to going freelance, Brien served as CIO for a national chain of hospitals and healthcare companies. He has also served as a Network Administrator for the Department of Defense at Fort Knox, and for some of the nation's largest insurance companies.