The Year We Finally Ditched Our Data Centre


We shut down our last on-premises data centre three months ago. The facility that had been our infrastructure backbone for fifteen years is now empty. All our workloads are running in cloud infrastructure. It was the right decision, but it wasn’t simple or cheap.

Here’s what the data centre exit actually involved and whether I’d do it again.

Why We Kept the Data Centre So Long

We’d been talking about moving to cloud for years. Every budget cycle, someone would propose the migration. Every time, we’d defer it for another year.

The existing data centre was paid for. We owned the equipment. We had staff who knew how to maintain it. Moving to cloud meant taking on new operational expenses and learning new tools. The perceived risk of change outweighed the benefits of moving.

The decision point came when we faced a hardware refresh cycle. Our storage arrays were end-of-life. Our servers needed replacement. We could spend significant capital on new equipment, or we could use that forcing function to finally move to cloud.

The economics of buying new hardware versus cloud operating costs made the decision easier. Once we factored in power, cooling, facilities costs, and staff time, cloud was cost-neutral or better. Without the capital outlay for new equipment, cloud actually reduced our total spend.

The Migration Was Harder Than Expected

We classified our applications into three categories for migration: simple web applications and file services that could move easily; complex enterprise applications that needed careful planning; and legacy applications that might need refactoring or replacement.

The simple migrations went smoothly. We moved file shares to cloud storage, migrated web applications to cloud VMs, and shifted some databases to managed database services. These moves took weeks and caused few problems.

The enterprise applications were harder. Our ERP system required coordination with the vendor to ensure cloud hosting was supported. Our CRM needed performance testing in the cloud environment because user experience is sensitive to latency. Our business intelligence platform needed substantial architectural changes to work efficiently in cloud.

Each of these migrations was a project unto itself with planning, testing, migration, and validation phases. We couldn’t just lift and shift. We needed to rearchitect for cloud.
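The triage described above can be sketched as a simple decision rule. The criteria and category names below are illustrative assumptions, not the exact rubric we used:

```python
# Illustrative triage of applications into migration paths.
# Field names and thresholds are hypothetical, for sketching only.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    vendor_supported_in_cloud: bool
    hardcoded_network_deps: bool
    end_of_life: bool

def classify(app: App) -> str:
    """Rough bucketing into the three categories from the text."""
    if app.end_of_life or app.hardcoded_network_deps:
        return "refactor-or-replace"   # legacy: needs real work first
    if not app.vendor_supported_in_cloud:
        return "replatform"            # enterprise: coordinate with vendor
    return "rehost"                    # simple: lift to cloud VMs/storage

apps = [
    App("file-shares", True, False, False),
    App("erp", False, False, False),
    App("old-billing", False, True, True),
]
plan = {a.name: classify(a) for a in apps}
```

Even a crude rule like this forces the conversation about which applications genuinely can lift and shift and which cannot.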

Legacy Applications Created the Real Complexity

We had a dozen applications that were end-of-life but still business-critical. The vendors no longer supported them, documentation was sparse, and they'd been running in our data centre for years with minimal maintenance.

These applications couldn’t move to cloud without significant work. Some required specific operating system versions that weren’t available in cloud. Others had hard-coded IP addresses and network dependencies. A few had licensing restrictions that prohibited cloud deployment.
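Hard-coded network dependencies like these are at least easy to surface with a quick scan. A minimal sketch, using a made-up config snippet rather than anything from our actual systems:

```python
import re

# Quick-and-dirty scan for hard-coded IPv4 addresses in config text.
# The sample config below is invented for illustration.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_hardcoded_ips(text: str) -> list[str]:
    """Return the distinct IPv4-shaped strings found in the text."""
    return sorted(set(IPV4.findall(text)))

sample = "db.host=10.0.4.17\ncache=redis://10.0.4.22:6379\ntimeout=30"
hits = find_hardcoded_ips(sample)
```

A scan like this won't catch dependencies buried in compiled binaries or vendor databases, which is exactly where the legacy applications hurt us.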

We had to make hard decisions. Some applications got refactored to work in cloud, which required development time we didn’t really have. Others got replaced with modern alternatives, which required budget and change management. A couple stayed on-premises longer in a small colocation facility while we figured out long-term solutions.

The legacy applications added months to our timeline and significant cost to the project.

The Hidden Costs of Cloud Migration

The migration project had a budget, but we underestimated several cost categories. Cloud architecture design required external expertise we didn’t have internally. We brought in consultants to design our landing zone, network architecture, and security model.

Staff training was another underestimated cost. Our infrastructure team knew on-premises systems inside and out. They needed substantial training in cloud platforms, infrastructure as code, and cloud-native architecture patterns. We sent people to certification courses and brought in trainers for team workshops.

The biggest hidden cost was maintaining both environments during migration. We ran parallel infrastructure for six months while migrating workloads. We paid for on-premises data centre costs and cloud costs simultaneously. The double-running cost was substantial but unavoidable.
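The double-running arithmetic is worth modelling up front, even roughly. A sketch with hypothetical figures (not our actual numbers): on-premises costs stay flat while cloud spend ramps up as workloads move.

```python
# Hypothetical figures: a rough model of the parallel-running period.
def double_running_cost(onprem_monthly: int, cloud_monthly_by_month: list[int]) -> int:
    """Total spend while both environments run in parallel."""
    return sum(onprem_monthly + cloud for cloud in cloud_monthly_by_month)

# Six months of overlap: cloud spend grows as workloads migrate across.
cloud_ramp = [10_000, 20_000, 35_000, 50_000, 65_000, 80_000]
total = double_running_cost(onprem_monthly=70_000, cloud_monthly_by_month=cloud_ramp)

# What the overlap added on top of running on-prem alone.
overlap_premium = total - 6 * 70_000
```

Whatever the real numbers are, the overlap premium scales with the length of the migration, which is one more argument for a dedicated team that finishes quickly.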

Operations Changed Fundamentally

Moving to cloud wasn’t just a technical migration. It changed how we operate infrastructure.

On-premises, we managed physical hardware, dealt with facilities issues, and planned capacity years in advance. Our skills were hardware-centric. In cloud, we manage infrastructure through code, monitor services through cloud-native tools, and scale capacity dynamically. The skills required are completely different.

We had to retrain staff, hire people with cloud expertise, and change our operational processes. Incident response works differently in cloud. Change management needs different workflows. Cost management becomes a daily operational concern rather than an annual capital planning exercise.

The transition disrupted our team. Some people thrived with the new technologies. Others struggled with the pace of change. We lost a couple of senior staff who preferred the on-premises world and didn’t want to retrain for cloud.

What We Got Right

We formed a dedicated migration team rather than trying to do it alongside business-as-usual work. This team owned the project full-time for six months. Having dedicated resources made the difference between a six-month migration and a multi-year drag.

We also piloted the migration with non-critical applications first. This let us learn cloud operations, identify issues with our approach, and refine our processes before migrating business-critical systems. The early mistakes happened in low-risk environments.

Finally, we involved application teams early rather than treating migration as purely an infrastructure project. Getting developers and application owners engaged from the beginning meant applications were migrated in ways that made sense for how they worked, not just lifted and shifted blindly.

What I’d Do Differently

I’d push harder to replace legacy applications before migration rather than trying to move them to cloud. We wasted time trying to make old software work in new environments. Replacement would have been faster and given us better long-term outcomes.

I’d also invest more in cost optimisation from day one. We migrated workloads first and optimised later. This meant we ran several months with inefficient cloud architectures that cost more than necessary. Building optimisation into the migration would have controlled costs better.

Finally, I’d allocate more time for staff learning and adjustment. We pushed hard to meet timeline commitments, which stressed the team. A slightly longer timeline with more training would have been better for morale and probably would have delivered a better technical outcome.

Was It Worth It?

Absolutely. We’re no longer managing physical infrastructure, planning hardware refresh cycles, or dealing with data centre facilities issues. We’ve eliminated capital expenditure for hardware. Our ability to scale infrastructure matches business needs rather than being constrained by physical capacity.

The operational cost is higher than our on-premises cost was, but not dramatically so once you factor in everything honestly. We’re getting better value for that cost through flexibility and reduced management overhead.

Most importantly, we’re no longer running a data centre. That frees up IT resources to focus on things that actually differentiate our business rather than keeping the lights on in a server room. That shift in focus is worth far more than the direct cost savings.