Cloud Migration Regrets: What I Wish I'd Known
We finished our major cloud migration in late 2021. Four years on, I’m still dealing with decisions we made in those frantic planning sessions. Some worked brilliantly. Others haunt me every budget cycle.
Here’s what I wish someone had told me before we started.
We Underestimated Data Egress Costs
This one kills me. We modelled compute, storage, and some network transfer. But we completely missed how much it would cost to move data out of the cloud for analytics, backup validation, and partner integrations.
Our AWS bill has a line item for data transfer that runs $18K-$22K monthly. It wasn’t in the original business case. Finance still asks me about it.
The lesson: map every data flow that crosses cloud boundaries. Model it at 3x your current volume. You’ll grow faster than you think.
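Even a back-of-the-envelope model would have caught our gap. Here's the kind of calculation we skipped, as a rough Python sketch; the flows and the per-GB rate are illustrative assumptions, not our actual numbers.

```python
# Rough egress cost model. The flows and rate below are illustrative only.
# AWS internet egress is tiered; roughly $0.09/GB is a reasonable ballpark
# for the first tier, but check current pricing for your region.

EGRESS_RATE_PER_GB = 0.09   # assumed blended rate, USD
GROWTH_MULTIPLIER = 3       # model at 3x current volume

# Hypothetical monthly data flows that cross the cloud boundary, in GB.
monthly_flows_gb = {
    "analytics exports": 40_000,
    "backup validation pulls": 15_000,
    "partner integrations": 10_000,
}

def monthly_egress_cost(flows_gb: dict[str, float], rate: float, growth: float) -> float:
    """Estimated monthly egress bill at projected volume."""
    return sum(flows_gb.values()) * growth * rate

cost = monthly_egress_cost(monthly_flows_gb, EGRESS_RATE_PER_GB, GROWTH_MULTIPLIER)
print(f"Projected egress: ${cost:,.0f}/month")  # about $17,550 with these assumptions
```

Crude, but it would have put a five-figure monthly number in front of finance before go-live instead of after.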
Lift-and-Shift Wasn’t the Quick Win We Expected
We moved 40% of our applications with minimal changes. “Get to cloud fast, optimise later” was the strategy. Four years later, those apps are still running on oversized VMs because no one has time to re-architect them.
They’re costing us 60-70% more than cloud-native equivalents would. But the business case to fix them never quite makes it through prioritisation. There’s always something more urgent.
If I could do it again: take longer up front. Re-architect properly or don’t move it at all. Lift-and-shift creates technical debt that’s incredibly hard to pay down later.
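And if you're already sitting on a pile of lifted-and-shifted VMs, at least quantify the waste. Here's a rough boto3 sketch that flags low-utilisation instances; the tag filter and the 20% threshold are assumptions about how you'd mark and judge these workloads, not something we actually ran.

```python
# Flag lifted-and-shifted instances with low average CPU, as a starting point
# for a rightsizing or re-architecture case. The "Migration: lift-and-shift"
# tag and the 20% threshold are illustrative assumptions.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Migration", "Values": ["lift-and-shift"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg_cpu = sum(p["Average"] for p in points) / len(points) if points else 0.0
        if avg_cpu < 20:
            print(f"{instance['InstanceId']} ({instance['InstanceType']}): "
                  f"averaging {avg_cpu:.1f}% CPU over 14 days, candidate for rightsizing")
```

It won't re-architect anything for you, but a list like this makes the prioritisation conversation harder to dodge.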
We Locked Ourselves Into One Vendor
“Multi-cloud is too complex,” we said. “Let’s standardise on AWS and get good at it.”
That was the right call for about 18 months. Then we acquired a company running entirely on Azure. Then our largest customer mandated GCP for a major integration project.
Now we’re multi-cloud by accident, not design. We don’t have proper governance across platforms. We’re paying for duplicate tooling. And we definitely don’t have the expertise we need.
I’m not saying you should architect for multi-cloud from day one. But have a strategy for it. Know which workloads could move if needed. Keep your infrastructure-as-code portable.
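One cheap habit that helps with "know which workloads could move": keep provider-specific calls behind thin interfaces in application code, so the blast radius of a switch is visible. A minimal Python sketch of the idea; the class names and bucket setup are illustrative, not our actual code.

```python
# A thin storage interface so workloads aren't welded to one provider's SDK.
# Names here are illustrative; this is a sketch of the pattern, not real code.
from typing import Protocol


class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...


class S3Store:
    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


class GCSStore:
    def __init__(self, bucket: str):
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)


def archive_report(store: ObjectStore, report_id: str, payload: bytes) -> None:
    """Application code depends on the interface, not the provider."""
    store.put(f"reports/{report_id}.json", payload)
```

You still have to do the migration work if the day comes, but at least you can see where it lives.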
Shared Responsibility Wasn’t Well Understood
Two security incidents taught us this lesson the hard way. In one case, we assumed AWS was encrypting something they weren’t. In another, we’d misconfigured an S3 bucket because we didn’t really understand the permission model.
No data was exposed, thankfully. But the post-mortems were uncomfortable.
The cloud provider handles infrastructure security. You handle everything from the OS up, plus configuration, access management, and data protection. That boundary isn’t always obvious, especially for platform services.
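The S3 incident came down to configuration that was entirely ours to own. This is roughly the kind of check we run now, sketched with boto3; the bucket name is a placeholder, and it only covers two of the settings that matter.

```python
# Check two settings that sit squarely on the customer side of the shared
# responsibility line: public access blocking and default encryption.
# The bucket name is a placeholder.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-bucket"  # placeholder

# 1. Is every public-access block flag enabled?
try:
    pab = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    if not all(pab.values()):
        print(f"{bucket}: public access block is not fully enabled: {pab}")
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print(f"{bucket}: no public access block configured at all")
    else:
        raise

# 2. Is default encryption configured?
try:
    s3.get_bucket_encryption(Bucket=bucket)
except ClientError as err:
    if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        print(f"{bucket}: no default encryption configured")
    else:
        raise
```

None of this is sophisticated. That's rather the point: the provider won't do it for you.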
We should have invested more in cloud security training before migration. We tried to learn as we went. That was a mistake.
FinOps Should Have Started Earlier
We waited nine months after go-live to build proper cost monitoring and allocation. By then, we had orphaned resources all over the place, development environments that never shut down, and no clear way to charge back costs to business units.
Getting control of that sprawl took a dedicated engineer six months. We probably wasted about $200K during that period.
Start FinOps on day one. Tag everything. Build cost dashboards before you build applications. Make budget alerts mandatory, not optional.
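On the tagging point, even a simple report of resources missing a cost-allocation tag would have saved us months. A boto3 sketch along those lines; the CostCentre tag key is an assumption about your scheme, and note that this API only sees resources that have (or once had) at least one tag.

```python
# List AWS resources missing a cost-allocation tag, via the Resource Groups
# Tagging API. The "CostCentre" tag key is an assumed convention. Caveat:
# GetResources only returns resources that are or were tagged, so completely
# untagged resources need something like AWS Config instead.
import boto3

REQUIRED_TAG = "CostCentre"  # assumption: your cost-allocation tag key

tagging = boto3.client("resourcegroupstaggingapi")
paginator = tagging.get_paginator("get_resources")

untagged = []
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        tag_keys = {tag["Key"] for tag in resource.get("Tags", [])}
        if REQUIRED_TAG not in tag_keys:
            untagged.append(resource["ResourceARN"])

print(f"{len(untagged)} resources missing the {REQUIRED_TAG} tag")
for arn in untagged[:20]:
    print(f"  {arn}")
```

Wire something like this into a scheduled job and the sprawl never gets nine months to build up.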
The Skills Gap Was Real
We trained our infrastructure team on cloud fundamentals. We sent people to AWS certification courses. We thought we were ready.
We weren’t. Cloud operations are genuinely different from running on-premises. The troubleshooting mindset is different. The security model is different. The cost model creates entirely new pressures.
It took us two years to really build cloud-native expertise. We should have hired experienced cloud engineers earlier, even if just as contractors to guide the team.
What Actually Worked
Not everything was a mistake. We got a few things right.
We mandated infrastructure-as-code from the start. Every resource in Terraform. That discipline has paid off repeatedly. We automated our disaster recovery testing. We built strong CI/CD pipelines early. We established clear environment segregation.
And despite my complaints about vendor lock-in, our deep AWS knowledge has enabled some sophisticated solutions we couldn’t have built otherwise.
Would I Do It Again?
Absolutely. Cloud has been the right move. Our infrastructure is more reliable. We ship faster. We scale better.
But I’d do the migration itself very differently. More planning. More training. More conservative timelines. And a proper business case that included all the hidden costs.
If you’re planning a cloud migration now, learn from our mistakes. Take your time. Model the real costs. Build the skills before you need them.
Your future self will thank you.