Data Migration & Cutover

Data migration and cutover is the activity that moves data, configuration, and users from a legacy system to the new one within a defined window. It is distinct from infrastructure provisioning (which stands the new system up) and from deployment execution (which ships the code) — cutover is the choreographed transition where the legacy system is retired and the new system takes ownership of live data and live users at a named moment.

A useful framing has four phases:

  1. Migration design. What data needs to move, in what shape, by what mechanism, with what reconciliation. The artefact is a migration plan: source-to-target field mapping, transformation rules, data-quality assertions, the order in which entities are migrated to satisfy referential integrity, and the explicit list of data that is not migrating (archived, recreated, deferred to a later phase).
  2. Cutover rehearsal. One or more end-to-end dry runs of the cutover, executed against staging environments populated with anonymised production data. Rehearsals surface the issues the plan does not anticipate — slow migration scripts, blocking foreign-key constraints, integration endpoints that need re-pointing, post-migration jobs that depend on a state the migration left incomplete. Mature engagements run two or three rehearsals before going live; one is the minimum.
  3. Cutover execution. The named window during which the legacy system is frozen, data is migrated, integrations are re-pointed, the new system takes over, and smoke tests confirm the cutover succeeded. Run by a named cutover commander against a documented runbook with explicit go/no-go checkpoints. Either the cutover completes against its criteria, or the documented rollback fires.
  4. Post-cutover reconciliation and stabilisation. The hours-to-days window after cutover where data integrity is reconciled (record counts match, business-critical aggregates match, edge-case entities are spot-checked), residual issues are triaged, and the legacy system is decommissioned (or kept on read-only for a defined period as fallback evidence).
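
The migration plan described in phase 1 can be sketched as a data structure. This is a minimal illustration, not a real plan: the entity names, field names, transformation rules, and assertions below are all hypothetical examples of the source-to-target mapping, transformation rules, and data-quality assertions the plan should document.

```python
# One hypothetical entry in a source-to-target migration mapping.

def normalise_country(value):
    """Transformation rule: the (invented) legacy system stored free-text countries."""
    aliases = {"UK": "GB", "United Kingdom": "GB", "USA": "US"}
    return aliases.get(value.strip(), value.strip().upper())

CUSTOMER_MAPPING = {
    "source_entity": "tbl_cust",       # legacy table (hypothetical)
    "target_entity": "customers",      # new-system table (hypothetical)
    "fields": {
        # target field: (source field, transformation rule or None)
        "id":           ("cust_id", None),
        "email":        ("email_addr", str.lower),
        "country_code": ("country", normalise_country),
    },
    # Data-quality assertions checked per row before load.
    "assertions": [
        lambda row: "@" in row["email"],
        lambda row: len(row["country_code"]) == 2,
    ],
}

def transform_row(mapping, source_row):
    """Apply the field mapping and rules, then enforce the assertions."""
    row = {}
    for target, (source, rule) in mapping["fields"].items():
        value = source_row[source]
        row[target] = rule(value) if rule else value
    failed = [i for i, check in enumerate(mapping["assertions"]) if not check(row)]
    if failed:
        raise ValueError(f"row failed assertions {failed}: {row}")
    return row
```

Expressing the mapping as data rather than prose means the same artefact drives the migration script, the rehearsals, and the reconciliation checks.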

The output of cutover is a new system holding the truth, a legacy system that is either retired or in named read-only mode, a documented reconciliation showing data integrity, and a documented set of residual issues with named owners. Forward, the engagement enters hypercare under elevated response. Backward, cutover depends on the infrastructure plan having modelled the migration explicitly and the risk register having tracked migration risks throughout build.

Run two or three end-to-end cutover rehearsals before live. The single most reliable predictor of a clean cutover is the number of full rehearsals that preceded it. The first rehearsal almost always reveals issues — a script that times out at production scale, a foreign-key constraint that blocks the documented order, an integration endpoint that needs re-pointing in a place nobody documented. The second rehearsal validates the fixes. The third rehearsal builds the team’s confidence in the runbook timings. Engagements that rehearse once and “expect to fix forward on the day” produce cutovers that overrun the window and stress the rollback decision under time pressure.

Define the cutover window with explicit start, freeze, target-completion, and hard-rollback times. A cutover window has four named timestamps: when the legacy freeze starts, when the migration begins, when the new system goes live, and when the cutover must be complete or rollback fires. The hard-rollback time is documented and pre-agreed with the client — typically 4–8 hours after migration start for mid-sized engagements, longer for platform replacements with terabyte-scale data. The cutover commander makes the rollback decision against the clock, not against stakeholder optimism. Engagements without a hard-rollback time discover at hour 12 that nobody is empowered to call the rollback and the engagement is now committed to a partially-migrated state.
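
The four named timestamps and the mechanical rollback decision can be made concrete. A minimal sketch, with invented times and an invented six-hour hard-rollback offset; the point is that the decision function consults the clock, not the room:

```python
from datetime import datetime, timedelta

# The four named cutover timestamps. All times here are invented examples.
FREEZE_START    = datetime(2024, 6, 1, 22, 0)    # legacy writes stop
MIGRATION_START = datetime(2024, 6, 1, 22, 30)   # migration scripts begin
GO_LIVE_TARGET  = datetime(2024, 6, 2, 2, 0)     # new system expected live
HARD_ROLLBACK   = MIGRATION_START + timedelta(hours=6)  # pre-agreed with the client

def rollback_decision(now, smoke_tests_green):
    """The commander's call is mechanical: against the clock, not optimism."""
    if smoke_tests_green:
        return "go-live"
    if now >= HARD_ROLLBACK:
        return "rollback"   # no debate: the deadline fires the rollback
    return "continue"       # still inside the window: keep fixing forward
```

Writing the deadline into the runbook this way removes the hour-12 ambiguity about who is empowered to call the rollback: nobody is, because the clock already did.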

Author the runbook as an executable script, not as prose. The cutover runbook is a sequence of named steps with: the exact command or click, the expected outcome, the expected duration, the verification check, the named owner of the step, and the documented action if the step fails. Prose runbooks (“first migrate the customers, then the orders…”) produce cutovers where the team improvises specifics under pressure; executable runbooks let the cutover commander track progress against an external reference and catch deviations early. Where possible, the runbook is literal scripts the team runs, not steps the team types.
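
One way to give each step the structure listed above is a typed record per step. The steps, commands, and owners below are hypothetical; the shape is what matters: exact command, expected duration, named owner, verification check, and documented failure action.

```python
from dataclasses import dataclass

@dataclass
class RunbookStep:
    name: str
    command: str            # the exact command or script, not prose
    expected_minutes: int   # lets the commander track drift against the plan
    owner: str
    verify: str             # the check that proves the step worked
    on_failure: str         # the action, decided before cutover night

# Two hypothetical steps, for illustration.
RUNBOOK = [
    RunbookStep(
        name="freeze legacy writes",
        command="./scripts/freeze_legacy.sh",
        expected_minutes=5,
        owner="dba",
        verify="no rows in write_log after the freeze timestamp",
        on_failure="abort before migration start; unfreeze and reschedule",
    ),
    RunbookStep(
        name="migrate reference data",
        command="python migrate.py --entities countries,currencies",
        expected_minutes=10,
        owner="data-eng",
        verify="reconcile record counts for countries and currencies",
        on_failure="retry once; if still failing, invoke rollback plan step R1",
    ),
]

def total_expected_minutes(runbook):
    """Summing expected durations gives the commander a baseline timeline."""
    return sum(step.expected_minutes for step in runbook)
```

The same records can drive a live cutover dashboard, so deviation from expected durations is visible while there is still time to act on it.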

Migrate in dependency order, not in alphabetical or “easiest first” order. Reference data (countries, currencies, taxonomies) before transactional data (orders, invoices) before user-state data (sessions, preferences). The order is dictated by foreign-key dependencies, not by team comfort. Engagements that migrate “the easy tables first” discover they need to undo the easy migrations to migrate the harder ones in the right order — and the rehearsal is when this discovery should happen, not the live cutover.
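
Dependency order does not need to be worked out by hand: if the foreign-key dependencies are written down, a topological sort produces a valid order. A sketch using Python's standard-library `graphlib` (3.9+), with illustrative entity names:

```python
from graphlib import TopologicalSorter

# Each entity maps to the entities it depends on via foreign keys.
# Entity names and dependencies here are illustrative.
fk_depends_on = {
    "countries":   set(),
    "currencies":  set(),
    "customers":   {"countries"},
    "orders":      {"customers", "currencies"},
    "invoices":    {"orders"},
    "preferences": {"customers"},   # user-state data comes last
}

# Reference data surfaces first, transactional data after, user-state last —
# regardless of which tables looked easiest.
migration_order = list(TopologicalSorter(fk_depends_on).static_order())
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is exactly the kind of finding a rehearsal should surface rather than the live cutover.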

Reconcile data integrity post-cutover with named, automated checks. The post-cutover reconciliation has named verifications: record counts for every migrated entity match between source and target (within documented tolerance for in-flight transactions), business-critical aggregates match (total invoiced amounts, total user counts by status), spot-check sampling on edge cases (long strings, special characters, nullable fields, time-zone-sensitive data). The reconciliation is automated where possible — a reconciliation report runs at cutover-end and produces a named pass/fail. Manual eyeball reconciliation produces “looks fine to me” sign-offs that come unstuck when a data-quality issue surfaces a week later.
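
The record-count check with a documented tolerance is straightforward to automate. A minimal sketch; the counts and tolerance below are invented, and a real reconciliation report would add the aggregate and spot-check verifications alongside it:

```python
def reconcile_counts(source_counts, target_counts, tolerance=0):
    """Compare per-entity record counts between source and target.
    tolerance > 0 allows for in-flight transactions at freeze time."""
    report = {}
    for entity, src in source_counts.items():
        tgt = target_counts.get(entity, 0)
        report[entity] = {
            "source": src,
            "target": tgt,
            "pass": abs(src - tgt) <= tolerance,
        }
    return report

# Invented example counts.
source = {"customers": 120_431, "orders": 1_998_202}
target = {"customers": 120_431, "orders": 1_998_200}

report = reconcile_counts(source, target, tolerance=5)
overall_pass = all(check["pass"] for check in report.values())
```

Because the output is a named pass/fail per entity, the sign-off is a report artefact rather than a "looks fine to me" eyeball.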

Communicate the cutover window to every affected stakeholder. Internal team, client team, named integration partners, end users (where applicable), customer support, marketing, anyone who runs scheduled jobs against the legacy system. The communication includes start time, expected duration, expected user impact, and contacts for issues. Surprise cutovers that take down a system users are actively working in produce damage no engagement can fully recover from. The communication pattern mirrors the launch communication discipline.

Pick the migration topology explicitly at design time, not under launch-day pressure. Three named topologies cover most engagements. A parallel-run cutover keeps the legacy system writable for a defined period after the new system goes live, with both systems running and the team reconciling differences — cost: higher complexity and dual-write infrastructure; benefit: rollback is trivially “switch users back to legacy.” A big-bang cutover freezes the legacy system at the cutover moment and commits all users to the new system — cost: a botched migration is a crisis; benefit: simpler architecture, faster total cutover. A phased cutover migrates users in named cohorts (region 1, then region 2; small clients, then large clients), validating each cohort before the next runs — cost: longest calendar duration; benefit: lowest per-cohort risk and per-cohort rollback. The choice depends on the cost of being wrong, the underlying systems’ capability for dual-write or cohort-routing, and the engagement’s calendar tolerance. Financial systems and engagements with low risk tolerance run parallel; greenfield rebuilds and engagements with appetite for clean-cut launches run big-bang; B2B platform replacements and multi-region rollouts run phased.
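
For the phased topology in particular, the cohort-routing mechanism can be sketched simply. The cohort names and routing rule below are invented for illustration; the essential property is that adding a cohort to the migrated set is the only change needed to advance a phase, and removing it is the per-cohort rollback:

```python
# Cohorts whose users are routed to the new system. Advancing a phase means
# adding a cohort here; per-cohort rollback means removing one.
MIGRATED_COHORTS = {"region-1", "small-clients"}

def cohort_for(region, is_small_client):
    """Assign a user to a named cohort (illustrative rule)."""
    if region == "region-1":
        return "region-1"
    return "small-clients" if is_small_client else "large-clients"

def route(region, is_small_client):
    """Route a request to the new system or legacy based on cohort membership."""
    cohort = cohort_for(region, is_small_client)
    return "new-system" if cohort in MIGRATED_COHORTS else "legacy"
```

The same membership set is what the validation step checks before the next cohort runs, so the go/no-go per phase has a single, auditable control point.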

By the end of cutover, the engagement has:

  • A documented migration plan covering source-to-target field mapping, transformation rules, dependency order, data-quality assertions, and the explicit list of data not migrating
  • A signed cutover runbook with named timestamps (freeze, migration start, target completion, hard rollback), named commander, and named step owners
  • At least one completed cutover rehearsal (preferably two or three), with documented findings and runbook revisions
  • A documented rollback plan tested in at least one rehearsal, with the criteria that trigger rollback explicitly named
  • A live production system holding the truth, with the legacy system retired or in named read-only mode for a defined post-cutover period
  • A signed reconciliation report showing record counts, business-critical aggregates, and edge-case spot checks within tolerance — or documented exceptions with named owners and resolution plans
  • A handoff record for residual cutover issues into hypercare — what is open, who owns it, what the resolution path is
  • Stakeholder communication archived: who was notified when, what they were told, what acknowledgements came back

Big-bang vs. parallel-run vs. phased cutover cultures. Big-bang agencies cut over all users at one named moment, retire the legacy system, and rely on rollback if things go wrong. Trade-off: simplest architecture, fastest total cutover, highest risk if migration fails — common in greenfield rebuilds, marketing-site relaunches, and engagements where the legacy system has minimal active state. Parallel-run agencies keep both systems live for a defined period (days to weeks) after cutover, dual-writing or reconciling, with rollback trivial because users can be flipped back to legacy. Trade-off: highest complexity, longest total cutover duration, but safest — common in financial, healthcare, and other regulated engagements where being wrong is unacceptably expensive. Phased-cutover agencies migrate users in named cohorts (region 1, then region 2; small clients, then large clients) with each cohort validated before the next runs. Trade-off: middle-ground complexity, longest calendar duration, lowest per-cohort risk — common in B2B platform replacements and multi-region rollouts. Most agencies pick per-engagement based on the cost of being wrong and the underlying systems’ capability.

Cutover-as-event vs. cutover-as-deployment cultures. Cutover-as-event agencies treat cutover as a discrete, ceremonial activity with a named war room, dedicated calendars, blocked engineer time, and a debrief afterwards. Trade-off: high coordination cost, strong execution discipline, clear team and client artefact. Cutover-as-deployment agencies fold cutover into normal continuous-deployment infrastructure — feature-flag the new system, gradually shift traffic, monitor, complete. Trade-off: lower ceremony, requires excellent feature-flag and traffic-routing infrastructure, often invisible to the client. Cutover-as-event dominates in agencies serving traditional enterprise clients with named launch dates and in regulated work where the cutover moment is itself a compliance artefact; cutover-as-deployment dominates in modern product-engineering and SaaS rebuilds where the underlying platform supports it.

In-house migration vs. specialist-migration-vendor cultures. In-house agencies run the migration themselves — agency engineers author the migration scripts, the agency runs the rehearsals, the agency commands the cutover. Trade-off: complete control, full understanding of the migrated data, the cost is on the agency’s plate. Specialist-vendor agencies bring in a data-migration specialist for the migration design and execution, with the agency’s engineers in support. Trade-off: faster execution on novel migrations (especially platform-specific data shapes — Salesforce migrations, SAP migrations, Shopify-to-X migrations), expensive on rates, weaker handoff because the migration knowledge sits with the vendor not the agency. In-house dominates in modern web and SaaS engagements; specialist-vendor survives in legacy-platform replacements where the source system’s quirks would consume more agency time than the vendor’s fees.