Lessons in Testing Same-Same, Just Different Projects

Three key lessons for testing enterprise lift & shift programs. It's same-same as classic testing, yet very different too.


One type of project delivery I keep coming back to is IT transition projects. A system is running full stack in Operations Center A, but for business reasons (usually a tender), the owner C has selected to transition it to Vendor B. Some refer to this exercise as "Lift & Shift", as the solution is lifted (ideally) without any changes. Similarly tricky projects deal with carve-outs and mergers/acquisitions.

For transition, though, it's the same-same system with no code changes, just located in a new data center and with a new delivery team. This includes Vendor A handing over source code, server specs and system snapshots for Vendor B to build, maintain and confirm.

What I usually experience is a regime of testing activities in the new setup B is building - this requires testers to broaden the scope of the system under test. It includes all environments, usually at least five (Prod, Pre-prod, Dev and the test environments). Some of the typical testing activities are:

  • Infrastructure construction testing including OS, network, etc.
  • Middleware, webservers, databases, brokers and integration solutions
  • Front end applications: can we deploy the actual webapps and UI
  • Regression and functional testing within the environment
  • Integration towards third parties and other internal systems at C
  • Penetration and security testing
  • Stress, Load and performance-testing
  • Data migration testing

While the classic testing themes appear during regression and functional testing, there are no new features or functionality. The viewpoint is not whether the stuff works - it probably did when Vendor A built it - but how the stuff works in the new location.

Same-Same, Just-Different testing: This is mainly on an enterprise or organizational level. Previously, this testing theme would have been about data migration, but it has become a more significant theme. It's where a structural change happens in the company. An example could be migrating from on-premise Outlook to Office 365 in the cloud. It could also be migrating from an old homegrown application with heavy maintenance to a newer standard solution. Same-Same, just different.
Leading Testing Activities
Be able to take the first steps to lead testing. Know about when testing happens. Know about what kind of testing could happen. Understand the deliverables in test leadership roles.

Multiply the activities above across the environments and you get roughly 25+ individually complex testing activities, each needing scope alignment, actual testing work (by some experts) and relevant reporting. This requires an experienced, staff-level testing professional who can be the testing activity lead - or, as we usually call it in EU consultancy shops, a Test Manager.

I have done this type of project six times since 2016; it's not your usual run-of-the-mill agile software delivery. One challenge is that the test regime is established in the contract by the owner C. While it's their system and they probably know the system parts best, the migration strategy and test regime might have flaws that an experienced transition test manager can mitigate. In a recent project I would have loved to change the approach, even in the tender phase - but with it codified in a tender and a contract, the organizational challenge is uphill. As usual, these are anonymized generalizations from active projects of mine.

So this nugget is for you, my reader - I hope you get to apply these key learnings.

Lesson 1: Do a Cut-over per Environment

Many contracts stipulate that the running of the solution should be taken over from A to B by one set date - the cut-over date. This includes taking a full-system snapshot at A, spooling it to B, spinning it up, moving all the integrations and testing it. An exercise that usually takes the full weekend for Prod. Often we can do non-prod the weekend before.

But all the environments are live systems, each in its own state. Prod has the most recently deployed release R, but the next release R+1 might already be staged on Pre-prod. Or at least release R+1 is the state of the test environment at cut-over. Dev is where the R+2 release is being built. There is added complexity in that each of these environments has its own set of active integrations - Dev might not have any, while Test and Pre-prod might - and the data on the environments points in all directions. We cannot spool one snapshot of Prod into all the coming Vendor B environments.

We also need to consider that Vendor B developers are the ones building the R+2 release, as it lands after the cut-over date. The testers of Vendor B are testing R+1. While Prod and the customer are important, so is the coming release cadence.

I have succeeded in one transition project in doing a cut-over per environment, which besides cutting over at the agreed time reduced the risk, built testing into the migration and supported ongoing releases. So I know it can work, and that it is a viable option.

Two weeks before the Prod cut-over, and during daytime, the Dev environment in site B became the "real"/active Dev. When that was completed we shifted the Test environment: the A testers wrapped up their testing and the B team started theirs, working on the release the B developers had in the B Dev environment. Same story for Pre-prod, though this was during a weekend to simulate Prod. And then finally Prod, where the master migration plan had built in learnings from all the previous cut-over activities.
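A staggered plan like this can be sketched as plain data. The dates, offsets and release states below are illustrative assumptions, not from a real project:

```python
from datetime import date, timedelta

# Hypothetical staggered cut-over plan, relative to the Prod cut-over date.
# Negative offsets are days before Prod; the release column illustrates
# which release state each environment carries at its cut-over.
PROD_CUTOVER = date(2025, 6, 14)  # assumed date (a Saturday)

CUTOVER_PLAN = [
    # (environment, offset in days, window, release state at cut-over)
    ("Dev",      -14, "daytime", "R+2 in progress"),
    ("Test",     -10, "daytime", "R+1 under test"),
    ("Pre-prod",  -7, "weekend", "R+1 staged"),
    ("Prod",       0, "weekend", "R live"),
]

for env, offset, window, release in CUTOVER_PLAN:
    when = PROD_CUTOVER + timedelta(days=offset)
    print(f"{when:%Y-%m-%d} ({window}): cut over {env} - {release}")
```

The design point is that every non-prod cut-over happens before Prod, so each one feeds its learnings into the master migration plan for the next.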

Lesson 2: Test step-wise from the ground up

I mentioned the various test steps above. Another pitfall is to be too ambitious in the scope of the functional and integration testing. Often the customer C wants to see end-to-end business flows working in the new test environment. Vendor A should have secured that the release being migrated is in a functional, working business state. While I understand the quest for testing full business flows, this is not classic functional testing. The key change here is not the functionality, but everything around it - the data center, the build tools, the team.

To go fast - go slow. The testing that is key to Vendor B is first of all that the system can be moved, built and deployed in the B test environment, and is coherent on its own. This is where stubs are a strategic choice.

Test Stubs as a Strategic Choice
A test stub is a simulator for an external system that you are not in charge of. It still happens in 2025 that systems are designed with a too big dependency on externally controlled systems. Building stubs enables a better strategic choice for the system owner.

Only when the system works stubbed should external integrations, and even other integrations inside the Owner C, be considered. One part of my current project is stuck right here: they cannot build the damn thing they were handed over. And thus all the functional test cases just sit there, blocked.
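A stub doesn't have to be fancy. As a minimal sketch - the endpoint and response shape are made up for illustration - a payment-gateway stub can be a few lines on top of Python's standard `http.server`:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaymentStub(BaseHTTPRequestHandler):
    """Stands in for an external payment gateway we don't control."""

    def do_POST(self):
        # Always approve - the point is to unblock functional testing
        # in the new environment, not to emulate the real gateway.
        body = json.dumps({"status": "APPROVED", "ref": "STUB-0001"})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep the console quiet
        pass

if __name__ == "__main__":
    # Hypothetical local port for the test environment's config to point at.
    HTTPServer(("127.0.0.1", 8099), PaymentStub).serve_forever()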

Lesson 3: Integrations in all environments

Production has a range of active integrations to third parties outside A, B and C alike: data sources, fulfillment systems, payment gateways etc. Some integrations might not have changed in years and are being decommissioned, others are being established. Keep a clear list of which integrations are in scope for the Prod cut-over - and, as mentioned, for each additional environment.

The Dev environment probably doesn't have any external integrations, but Test and Pre-prod might. Once I was part of a transition that had a dedicated "external integration environment", from where all the changes to the interfaces were usually tested. So we had yet another environment in the mix.

One challenge is that Vendor B would want to test from their test environment to an external party E's test environment, while that E-test is being used as "active" by Vendor A. And it's rarely a success, from E's perspective, to have two active hosts for the same purpose. I elaborate on this in the Test Stubs post linked above.

There might not even be an active test environment on E's side of things any longer. There might only be a prod version. A test system request to E might cost thousands of our local currency. The externals, especially banks, might only have one active endpoint for security purposes. So while there might be an intention from C to see all integrations in all environments, it's simply not feasible - also considering that (test) data needs to be aligned out towards every integration point.

What we can do is make an analysis where we map all environments and all integrations, and score how each is tested: N/A, STUB or LIVE. Then base the testing on that. This might leave some integrations for special handling during the production cut-over, but that is usually more manageable than dotting all the i's in advance.
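A minimal sketch of such an analysis - environment and integration names are made up, the point is the N/A / STUB / LIVE score per cell and what falls out of it:

```python
# Hypothetical environment-by-integration scoring matrix.
#   "N/A"  - integration not present / out of scope in this environment
#   "STUB" - exercised against a stub we control
#   "LIVE" - exercised against the external party's real counterpart
MATRIX = {
    "Dev":      {"payments": "N/A",  "data-feed": "STUB", "fulfilment": "N/A"},
    "Test":     {"payments": "STUB", "data-feed": "STUB", "fulfilment": "STUB"},
    "Pre-prod": {"payments": "STUB", "data-feed": "LIVE", "fulfilment": "STUB"},
    "Prod":     {"payments": "LIVE", "data-feed": "LIVE", "fulfilment": "LIVE"},
}

def special_handling(matrix):
    """Integrations that are only ever tested LIVE in Prod - these need
    explicit attention during the production cut-over itself."""
    only_prod = []
    for integration in next(iter(matrix.values())):
        live_before_prod = [env for env, row in matrix.items()
                            if env != "Prod" and row[integration] == "LIVE"]
        if not live_before_prod and matrix["Prod"][integration] == "LIVE":
            only_prod.append(integration)
    return only_prod

# Here payments and fulfilment never go live before Prod, so they land
# on the special-handling list for the production cut-over.
print(special_handling(MATRIX))
```

The matrix doubles as a reporting artifact: every cell is an agreed test approach, and the `special_handling` list is exactly what the master migration plan has to cover on the cut-over weekend.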

There needs to be an active dialog with all the integration parties to keep them in the loop and get their input on how their integrations to C's system can be moved from A to B: whitelisting IP ranges, updating certificates and ports.

My preferred test approach is to have a "connectivity" test first, which establishes basic network, authentication and port connections. A ping test - one ping only, if you know what I mean. Only after that can we step up to more functional testing and API-based interaction confirmation.
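Such a connectivity check can be as simple as a TCP connect per endpoint. The hostnames and ports below are placeholders for the real integration list:

```python
import socket

def one_ping_only(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection can be opened - nothing more.
    This confirms network routing, firewall/whitelisting and the port
    before any functional or API-level testing is attempted."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints - replace with the agreed integration list per environment.
ENDPOINTS = [("partner-e.example", 443), ("broker.internal.example", 5672)]

for host, port in ENDPOINTS:
    status = "OK" if one_ping_only(host, port) else "FAILED"
    print(f"{host}:{port} {status}")
```

Running this list per environment after each cut-over gives a fast, unambiguous go/no-go signal on the whitelisting, certificate and port work mentioned above.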