Building an Internal AI Centre of Excellence
Every large Australian organisation seems to be building an AI Centre of Excellence right now. Or talking about building one. Or hiring a consultant to write a proposal for one.
I’ve helped set up three of these in the past eighteen months, and I’ve watched two others collapse within a year of launch. The difference between success and expensive failure comes down to a few decisions made in the first sixty days.
What an AI CoE Actually Does
Before you build anything, be honest about what problem you’re solving. Most organisations want an AI CoE for one of four reasons:
- Coordination. Multiple teams are experimenting with AI independently, duplicating effort and creating governance headaches.
- Capability building. The organisation lacks AI skills and needs a central group to develop them.
- Strategic direction. Leadership wants someone to answer “what should we be doing with AI?” in a way that connects to business outcomes.
- Political signalling. The board wants to see an AI strategy, and a CoE sounds impressive in the annual report.
Reason four is more common than anyone admits. If that’s your primary driver, save the money. Publish a strategy document, appoint someone with “AI” in their title, and move on. An actual CoE requires actual commitment.
The Staffing Question
Most failed CoEs share the same root cause: they couldn’t attract or retain the right people.
You need a mix of skills that’s genuinely hard to assemble. Data scientists who understand business problems, not just algorithms. Engineers who can build production systems, not just notebooks. Business analysts who can translate between technical and executive audiences. And a leader who has credibility with both the C-suite and the technical team.
In Australia’s current market, experienced AI practitioners command $200,000-$300,000 packages. The Technology Council of Australia has documented the talent shortage extensively. You’re competing with consultancies, tech companies, and every other enterprise building their own CoE.
My advice: don’t try to hire a complete team on day one. Start with two or three experienced people and grow. Partner with external specialists for the first 6-12 months while you build internal capability. One firm I’ve seen make this embedded advisory model work is Team400: its consultants work alongside internal teams rather than delivering reports from a distance.
Governance From Day One
The AI CoEs that survive their first year all have one thing in common: clear governance frameworks established before the first project launches.
This means:
- A project intake process. How do business units request AI support? Who prioritises?
- Data access policies. What data can the CoE access? Under what conditions? With what approvals?
- Model deployment standards. What testing, validation, and monitoring requirements must be met before anything goes to production?
- Ethical guidelines. Not vague principles—specific decision frameworks for common dilemmas.
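To make “model deployment standards” concrete rather than aspirational, some CoEs encode the gates as a machine-checkable release checklist. A minimal sketch of the idea (the class, field names, and gate descriptions here are illustrative, not from any particular framework):

```python
from dataclasses import dataclass

@dataclass
class DeploymentCheck:
    """One gate a model must clear before production."""
    name: str
    passed: bool
    evidence: str  # link to the test report, runbook, or approval record

def ready_for_production(checks: list[DeploymentCheck]) -> bool:
    """A model ships only when every gate passed AND has evidence attached."""
    return all(check.passed and check.evidence for check in checks)

checks = [
    DeploymentCheck("Validation metrics meet the agreed threshold", True, "report-001"),
    DeploymentCheck("Monitoring and alerting configured", True, "runbook-17"),
    DeploymentCheck("Data access approval recorded", False, ""),
]
print(ready_for_production(checks))  # → False: one open gate blocks release
```

The point is not the code itself but the discipline it enforces: every gate is explicit, every pass needs evidence, and a single miss blocks release rather than triggering a debate.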
Without these, your CoE will spend its first year arguing about process rather than delivering value. I’ve watched it happen. It’s painful.
The First Project Matters Enormously
Pick your first project carefully. It should be:
- Achievable within 8-12 weeks
- Connected to a measurable business outcome
- Visible enough that success will be noticed
- Simple enough that failure won’t be catastrophic
Don’t start with your most complex, highest-value use case. Start with something you’re confident you can deliver. Build credibility, then take on harder problems.
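The four criteria above work as a hard screen, not a weighted score: a candidate project that misses any one of them is a poor first project. A sketch of that screen (the function name and the 12-week cutoff are my own, for illustration):

```python
def passes_first_project_screen(duration_weeks: int,
                                has_measurable_outcome: bool,
                                is_visible: bool,
                                failure_is_recoverable: bool) -> bool:
    """All four criteria must hold; any single miss disqualifies the candidate."""
    return (duration_weeks <= 12
            and has_measurable_outcome
            and is_visible
            and failure_is_recoverable)

# Invoice processing automation: short, measurable, visible, low-risk.
print(passes_first_project_screen(10, True, True, True))  # → True
```

Run your shortlist through a filter like this before anyone falls in love with the most technically interesting option.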
One organisation I worked with chose invoice processing automation as their first project. Not glamorous. But it saved 400 hours per month of manual work within three months, and suddenly the CoE had internal champions who’d experienced real results.
Another chose predictive customer churn modelling. Technically interesting, but the results took eight months to validate, and by then the executive sponsor had moved to a different role. The CoE lost its political protection before it could demonstrate value.
Embedding vs Centralising
The biggest structural decision: should the CoE be a central team that other business units come to, or should it embed people within business units?
I’ve seen both work. The hybrid model tends to be most effective—a small central team that maintains standards, tools, and governance, with individual practitioners embedded in business units for specific projects.
The central-only model struggles with relevance. A team that sits apart from the business tends to pursue technically interesting projects that don’t align with commercial priorities. The embedded-only model struggles with consistency. Without central standards, you get fragmented approaches and duplicated infrastructure.
What Failure Looks Like
Failed AI CoEs share recognisable patterns. The team builds impressive demos that never reach production. Projects take too long and cost too much. Business units lose patience and start hiring their own data scientists, recreating the coordination problem the CoE was supposed to solve.
If you’re nine months in and your CoE hasn’t put anything into production, that’s a warning sign. If business units are bypassing the CoE to work with external vendors directly, that’s a bigger warning sign.
The fix is usually not more resources. It’s better focus. Narrow the scope. Pick fewer projects. Deliver something real. Then expand.
An AI Centre of Excellence is worth building if you’re willing to invest properly, staff it well, and give it both the authority and accountability to succeed. Half-measures produce expensive failures. I’ve seen enough of those to know.