The migration of corporate IT infrastructure to the cloud is accelerating. A survey by 451 Research projects that by 2020 about 60 percent of all enterprise workloads will run in the cloud. But migrating applications and data from a corporate data center to a cloud platform remains a significant challenge.

With the amount of data organizations generate rising exponentially, the question of how to migrate data from a corporation’s data center to its new home in the cloud has become particularly pressing. According to Kevin Liebl, vice president of marketing at Zadara Storage, “Published reports cite that it would take 120 days to migrate 100TB of data using a dedicated 100Mbps connection.” Unless a data migration is well planned and well executed, it can cause significant disruptions to an organization’s workflows. As Rebecca Hennessy, marketing head at Experian Data Quality, puts it, “Without a comprehensive data migration approach, any planned improvements for innovation, performance and growth, can be severely delayed, or worse, derailed completely.”
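The 120-day figure quoted above is easy to sanity-check with a back-of-envelope calculation. The sketch below assumes 100 TB means 100 × 10¹² bytes and that real-world throughput runs well below the link's rated speed (the 77 percent efficiency factor is an illustrative assumption, not a published number):

```python
def transfer_days(data_bytes: float, link_bps: float, efficiency: float = 1.0) -> float:
    """Days needed to move data_bytes over a link rated at link_bps,
    at a given effective utilization (0 < efficiency <= 1)."""
    seconds = (data_bytes * 8) / (link_bps * efficiency)  # bytes -> bits, then divide by rate
    return seconds / 86_400  # seconds per day

ideal = transfer_days(100e12, 100e6)            # theoretical best case
realistic = transfer_days(100e12, 100e6, 0.77)  # assumed ~77% effective throughput

print(f"{ideal:.0f} days ideal, {realistic:.0f} days at 77% efficiency")
# → 93 days ideal, 120 days at 77% efficiency
```

Even in the theoretical best case, a dedicated 100Mbps link needs roughly three months for 100TB; with ordinary protocol and network overhead, the quoted 120 days is entirely plausible.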

This is especially the case when attempting to migrate a production environment to the cloud. In such cases, extreme care must be taken to ensure that operations are not disrupted by the migration process. And that means choosing the right migration strategy for your company’s particular circumstances.

There are three major approaches to data migration: big bang, phased, and parallel. In planning a migration project, the first step is to determine which of these approaches will provide the best opportunity for a successful outcome. Let’s take a brief look at each one.


The big bang approach: The migration is done all at once, often over a single weekend. When users log in at the beginning of the next week, they log into the new system, and the old one is completely offline. This avoids having to run both the old and new systems simultaneously. Since no production operations take place in the interval between shutting down the legacy system and bringing up the target system, the necessity of synchronizing the two systems is eliminated.

However, because both the old and new systems are necessarily offline during that interval, the big bang approach is only suitable for businesses that don’t require their systems to be online 24/7. And since the changeover must be accomplished within a specific, limited time window, any glitches during the migration process could severely impact the company’s operations if that window is exceeded.

Because of these exposures, the big bang approach is considered relatively high risk, and works best when the amount and complexity of data to be migrated are small.

The parallel (or parallel run) approach: The new system is installed alongside the old one, and both operate in tandem during the transition. Updates are posted to both systems until the migration is complete. Once it has been validated that the new system is functioning correctly, the old one is turned off.

The advantage of this approach is that current production is not disrupted, and migration issues can be fully dealt with before the target system takes over. This is the least risky of the three strategies because, in the event of problems with the new system, you can switch back to the legacy system.
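Posting updates to both systems during a parallel run is essentially a dual-write pattern. The sketch below illustrates the idea with hypothetical in-memory stores standing in for the legacy and target systems; it is a minimal illustration of the pattern, not any vendor's implementation:

```python
class InMemoryStore:
    """Hypothetical stand-in for a legacy or target storage backend."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

def dual_write(stores, key, value):
    """Post each update to every system so old and new stay in sync."""
    for store in stores:
        store.put(key, value)

legacy, target = InMemoryStore(), InMemoryStore()
dual_write([legacy, target], "order-1001", {"status": "shipped"})

# Validation step: cutover happens only once both systems agree.
assert legacy.get("order-1001") == target.get("order-1001")
```

In a production system the validation step would compare the two systems continuously over the transition period; only after the new system has proven itself is the legacy side retired.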

The phased approach: Data is migrated in small increments over time, on a per-module, per-volume, or per-subsystem basis. As each increment is transferred to the target system, bugs can be worked out and any required user retraining accomplished in small chunks, rather than having to be done for the entire system all at once. The result is less risk than with a big bang migration, but with a much extended changeover time frame. Because of the longer time required to complete the migration, costs can be greater.

Good planning is at a premium with a phased migration, since dependencies between modules must be thoroughly mapped out in advance so that modules don’t become “orphans” in either the legacy or target systems.


How the Zadara Storage Cloud Facilitates Data Migration

Zadara is a STaaS (storage as a service) provider. Its VPSA Storage Array technology is connected to the facilities of all major public cloud platforms, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and can also be deployed on-premises at customer sites. In line with the STaaS model, customers pay for storage services (not hardware) on a pay-as-you-go basis.

Zadara Storage has designed technology and processes that can make data migration easier, safer, and more cost effective. Although Zadara supports any of the three migration strategies, it is especially suited for phased migrations, which is the option most frequently used today. In fact, Dylan Jones, editor of Data Migration Pro, notes that their recent Data Migration Research Study indicates that 62 percent of migration projects use the phased approach.

Zadara inherently accommodates phased migrations because any two Zadara VPSA Storage Arrays can talk to one another, regardless of where in the world each is located. That allows transparent remote mirroring between the two locations. With per-volume mirroring, you can easily set up a process to migrate one or two volumes at a time without disrupting ongoing production.
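The control flow of such a per-volume phased migration can be sketched as a simple loop. The `mirror_volume`, `verify_volume`, and `cutover_volume` callables below are hypothetical placeholders for whatever mirroring and validation tooling a given environment provides, not Zadara's actual API:

```python
def migrate_in_phases(volumes, mirror_volume, verify_volume, cutover_volume):
    """Migrate one volume at a time; stop at the first failed verification,
    leaving the remaining volumes untouched on the legacy side."""
    migrated = []
    for vol in volumes:
        mirror_volume(vol)          # replicate while production continues
        if not verify_volume(vol):  # validate the copy before switching over
            return migrated         # halt: later volumes stay on the legacy system
        cutover_volume(vol)         # redirect workloads to the target copy
        migrated.append(vol)
    return migrated

# Illustrative run: the third volume fails verification, so only two cut over.
done = migrate_in_phases(
    ["vol-a", "vol-b", "vol-c"],
    mirror_volume=lambda v: None,
    verify_volume=lambda v: v != "vol-c",
    cutover_volume=lambda v: None,
)
print(done)  # → ['vol-a', 'vol-b']
```

The key property is that a failure mid-migration leaves the system in a known, recoverable state: completed volumes run on the target, and everything else keeps running on the legacy system.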

If your company is considering moving workloads to the cloud, and you’d like to know more about how Zadara can facilitate your migration project, please download the ‘Zadara Storage Cloud’ whitepaper.