Why Our Zero-Trust Rollout Took Twice as Long as Planned


We kicked off our zero-trust architecture project in January with a six-month timeline. We’re now in December, and we’re still not quite done. If you’re planning a similar rollout, here’s what actually happened versus what the consultants promised.

The Vendor Timeline Was Fantasy

Our security vendor painted a rosy picture during the sales cycle. Three months for planning and architecture, three months for implementation. Clean, simple, done by mid-year. The reality was far messier.

What they didn’t account for was the archaeological dig through our network topology. We discovered applications we’d forgotten about, service accounts nobody could identify, and interdependencies that existed nowhere in our documentation. Mapping these took months, not weeks.

The vendor’s implementation methodology assumed we had perfect knowledge of our environment. We didn’t. Most organisations don’t. Any timeline that doesn’t include substantial discovery time is fiction.

Legacy Applications Became the Long Pole

Zero-trust sounds great until you have to apply it to a fourteen-year-old financial system that authenticates via IP address ranges. We have dozens of these legacy applications, each requiring creative workarounds.

Some applications couldn’t support modern authentication protocols without significant rework. Others had been patched so many times that nobody on staff understood their authentication flow. We ended up creating exception policies that partly defeated the purpose of zero-trust.
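To make the exception-policy idea concrete, here's a minimal sketch of the pattern we landed on, with every name and range invented for illustration: legacy apps that still authenticate by source IP get a narrow, documented carve-out with an expiry date, rather than silently bypassing enforcement.

```python
import ipaddress

# Hypothetical exception register: each legacy app gets a narrowly scoped
# IP range, a documented reason, and a review date. All values here are
# illustrative, not our actual configuration.
LEGACY_EXCEPTIONS = {
    "finance-legacy": {
        "allowed_network": ipaddress.ip_network("10.20.0.0/24"),
        "reason": "14-year-old financial system; modern auth needs rework",
        "review_by": "next quarterly audit",
    },
}

def legacy_auth_allowed(app: str, source_ip: str) -> bool:
    """Allow legacy IP-based auth only if the app has a documented
    exception AND the request comes from the scoped range."""
    policy = LEGACY_EXCEPTIONS.get(app)
    if policy is None:
        return False  # no exception on file: full zero-trust path applies
    return ipaddress.ip_address(source_ip) in policy["allowed_network"]
```

The point of the expiry and reason fields is that each exception stays visible and time-boxed instead of becoming permanent, invisible debt.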

The project became less about implementing zero-trust and more about application modernisation. That’s important work, but it’s a different project with different resource requirements.

User Experience Friction Caused Delays

We rolled out multifactor authentication organisation-wide as part of the project. The complaints started immediately. Sales teams working from client sites had connectivity issues. Remote workers struggled with the authenticator app. Executives wanted exceptions because the extra steps were “inconvenient.”

We had to slow the rollout and run extensive education sessions. We adjusted policies to reduce prompt frequency for trusted devices. We created better documentation and help desk scripts. All of this took time we hadn’t allocated.
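The trusted-device adjustment boils down to a simple rule: known, compliant devices re-prompt for MFA far less often than unknown ones. A sketch of that logic, with thresholds and the device model assumed rather than taken from our actual policy:

```python
from datetime import datetime, timedelta

# Illustrative thresholds, not our production values: trusted devices
# re-prompt every two weeks, untrusted ones every working day.
TRUSTED_REPROMPT = timedelta(days=14)
UNTRUSTED_REPROMPT = timedelta(hours=8)

def mfa_prompt_needed(device_trusted: bool, last_mfa: datetime,
                      now: datetime) -> bool:
    """Decide whether to re-prompt, based on device trust and the time
    elapsed since the last successful MFA challenge."""
    window = TRUSTED_REPROMPT if device_trusted else UNTRUSTED_REPROMPT
    return now - last_mfa >= window
```

Tuning these two numbers was most of the friction reduction: long enough that trusted users stop noticing the prompts, short enough that a stolen device doesn't coast for months.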

The technical implementation was straightforward. The people side was the hard part. We should have spent more time upfront on change management and communication.

Third-Party Integrations Were a Nightmare

Our environment includes dozens of SaaS applications and several partner integrations. Getting zero-trust working across all of these boundaries required coordination with multiple external parties.

Some vendors supported the protocols we needed. Others required custom integration work. A few couldn’t support zero-trust principles at all, forcing us to maintain legacy authentication methods or find alternative solutions.

Each integration became a mini-project with its own timeline, stakeholders, and technical challenges. The vendor’s timeline assumed we controlled our entire technology stack. We don’t. Nobody does anymore.

Network Segmentation Took Months

Implementing proper network segmentation meant understanding traffic flows across hundreds of applications and thousands of endpoints. Every segment we created risked breaking something critical.

We took a careful, incremental approach. Segment a subset of the network, monitor for issues, fix what broke, then move to the next segment. This was the right approach for risk management, but it was slow.

We also discovered that our network documentation was outdated. Firewall rules had accumulated over years without proper cleanup. We spent significant time just understanding what traffic was legitimate versus legacy noise.
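One heuristic that helped separate legitimate traffic from legacy noise: a firewall rule whose hit counter hasn't moved in months is a strong candidate for removal. A hedged sketch, with the rule format invented for illustration:

```python
from datetime import datetime, timedelta

# Assumed staleness threshold; the export format (name + last observed
# hit) is hypothetical, not any particular firewall's schema.
STALE_AFTER = timedelta(days=90)

def classify_rules(rules, now):
    """Split exported rules into (active, stale) names based on when
    each rule last matched traffic. Rules with no recorded hit at all
    are treated as stale."""
    active, stale = [], []
    for rule in rules:
        last_hit = rule.get("last_hit")
        if last_hit is None or now - last_hit >= STALE_AFTER:
            stale.append(rule["name"])
        else:
            active.append(rule["name"])
    return active, stale
```

Stale rules still went through human review before deletion; the script only shortened the list a person had to look at.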

What I’d Do Differently

If I were starting this project again, I’d double the timeline estimate from day one. Zero-trust is a multi-year journey, not a six-month sprint. Pretending otherwise just sets unrealistic expectations.

I’d also allocate more resources to application discovery and dependency mapping upfront. The time spent here pays dividends throughout the implementation. We lost weeks troubleshooting issues that better discovery would have prevented.

Finally, I’d invest heavily in change management from the beginning. The technology is solvable. Getting users on board is harder. Better communication, training, and support would have accelerated adoption.

Was It Worth It?

Despite the timeline overruns, yes. Our security posture improved significantly. We have better visibility into our environment. We eliminated some risky legacy authentication methods. The project forced us to modernise applications that needed it anyway.

But anyone selling you a quick zero-trust implementation is lying. This is complex, disruptive work that touches every corner of your IT environment. Plan accordingly, set realistic expectations, and prepare for the long haul. Your board and your users will thank you for the honesty.