The AI Governance Problem Is Getting Worse, Not Better
A year ago, I wrote about the need for AI governance frameworks in enterprise IT. Since then, a lot of organisations have actually built them. Policies were drafted, approval workflows were created, risk assessment templates were circulated. On paper, AI governance exists in most mid-to-large Australian organisations.
On the ground? It’s a mess.
The problem isn’t that companies ignored governance. It’s that the landscape moved faster than anyone’s policy documents could keep up with. The governance frameworks built in 2024 and early 2025 were designed for a world where AI adoption meant selecting a vendor, running a pilot, and rolling out a controlled deployment. That world no longer exists.
The Governance Gap Has Widened
Here’s what changed. In 2024, AI tool adoption in most organisations was centralised enough that IT could track it. You had a handful of approved tools — maybe an enterprise ChatGPT licence, a customer service chatbot, a couple of analytics platforms with AI features. Governance meant reviewing those tools, setting usage policies, and monitoring compliance.
By early 2026, the average enterprise has dozens of AI-powered tools in active use across departments. Many were adopted without IT involvement. Marketing runs AI content generation tools. Finance uses AI for forecasting and anomaly detection. HR has AI screening in recruitment. Product teams build with AI-assisted coding tools. Operations uses predictive maintenance models. Each department selected its own tools, often signing up with a credit card and a terms-of-service checkbox.
Gartner’s latest survey found that 68 percent of enterprise AI tools in use were adopted without formal IT approval. That’s not a governance gap. That’s a governance chasm.
Three Specific Problems Nobody Solved
Data sovereignty remains unclear. When an employee pastes customer data into an AI tool hosted overseas, where does that data go? Most AI vendor terms of service are vague about data residency, retention, and usage for model training. The Australian Privacy Act, through APP 8, holds organisations accountable for cross-border disclosures of personal information, but most companies have no visibility into which AI tools are sending data offshore.
Model drift accountability is undefined. AI models degrade over time as production data drifts away from the data they were trained on. Who's responsible for monitoring model performance in production? In most organisations, the answer is nobody. The team that built the model has moved on. IT wasn't involved in the first place. And the business unit using the model doesn't have the technical capability to assess whether its outputs are still reliable.
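To make that concrete, here's a minimal sketch of what "somebody monitors the model" can look like in practice. It uses the population stability index (PSI) to compare live model outputs against a deployment-time baseline; the data, thresholds, and names are illustrative, not taken from any particular organisation's setup.

```python
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index between deployment-time and live score
    distributions. Rule of thumb: < 0.1 stable, 0.1-0.2 watch, > 0.2 drift."""
    edges = np.histogram_bin_edges(np.concatenate([baseline, live]), bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    b_pct = np.clip(b_pct, 1e-6, None)  # avoid log(0) on empty bins
    l_pct = np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

# Synthetic stand-ins: scores captured at deployment vs. scores this week.
rng = np.random.default_rng(42)
baseline = rng.normal(0.40, 0.10, 5000)
live = rng.normal(0.55, 0.12, 5000)

score = psi(baseline, live)
if score > 0.2:
    print(f"PSI {score:.2f}: outputs have shifted, trigger a model review")
```

Twenty lines of scheduled monitoring like this, owned by a named team, is the difference between "nobody is responsible" and an early warning that the model needs review.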
Shadow AI creates security blind spots. Every unapproved AI tool is an unapproved data pipeline. It’s another authentication credential to manage, another attack surface, another vendor with access to organisational data. Security teams can’t protect systems they don’t know exist.
Why Existing Frameworks Don’t Work
Most AI governance frameworks I’ve seen are structured like traditional IT governance: centralised review, approval workflows, annual audits. They assume a linear adoption process where IT evaluates tools before they enter the organisation.
That assumption is broken. AI tools enter organisations from the bottom up, adopted by individual employees and teams long before IT is aware. By the time governance catches up, the tool is embedded in workflows, integrated with other systems, and generating outputs that inform business decisions. Removing it is disruptive. But governing it retroactively is extraordinarily difficult.
The other problem is speed. Traditional governance cycles — quarterly reviews, annual risk assessments — are too slow for a technology category that releases significant new capabilities monthly. By the time a governance committee reviews an AI tool, it’s been updated three times and its capabilities have changed.
What Actually Needs to Happen
I’ve been working with specialists in this space to think through what modern AI governance looks like, and the consensus is that the old model needs fundamental redesign, not incremental improvement.
Continuous monitoring, not periodic review. Governance has to be automated and ongoing. Tools exist that scan network traffic for AI service connections, flag new SaaS signups, and monitor data flows to AI endpoints. If you’re relying on annual audits to catch ungoverned AI usage, you’re a year behind.
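As an illustration of the kind of automation involved, here is a small sketch that scans exported proxy logs for connections to known AI endpoints. The log format, column names, and domain watchlist are assumptions for the example; a real deployment would feed from your proxy or DNS telemetry and a maintained domain inventory.

```python
import csv
import io
from collections import Counter

# Hypothetical watchlist of AI service domains; extend it from your own inventory.
AI_DOMAINS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def flag_ai_traffic(log_lines) -> Counter:
    """Count requests per (user, AI domain) from proxy log rows with the
    assumed columns: timestamp,user,destination_host,bytes_out."""
    hits = Counter()
    for row in csv.DictReader(log_lines):
        host = row["destination_host"].lower()
        if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
            hits[(row["user"], host)] += 1
    return hits

# Toy log standing in for a real proxy export.
sample = io.StringIO(
    "timestamp,user,destination_host,bytes_out\n"
    "2026-02-03T09:12:00,jsmith,api.openai.com,18234\n"
    "2026-02-03T09:14:10,jsmith,intranet.example.com,922\n"
    "2026-02-03T09:15:42,mlee,api.anthropic.com,40211\n"
)
for (user, host), n in flag_ai_traffic(sample).most_common():
    print(f"{user} -> {host}: {n} request(s)")
```

Run daily against real telemetry, this surfaces shadow AI usage within hours of adoption rather than at the next audit.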
Departmental accountability with central standards. IT can’t be the sole gatekeeper for AI adoption across the entire organisation. It’s too slow and creates too much friction. Instead, set organisation-wide standards — data handling requirements, approved vendors, risk thresholds — and give departments the authority to adopt tools that meet those standards without requiring case-by-case IT approval.
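One way to make "standards, not gatekeeping" operational is to encode the central standards as a checklist departments can run themselves. The standard names below are hypothetical; the point is that a candidate tool either clears the published bar or it doesn't, with no committee in the loop.

```python
# Hypothetical central standards; each department checks a candidate tool
# against them instead of queueing for case-by-case IT approval.
REQUIRED = {
    "dpa_signed",                 # negotiated data processing agreement in place
    "data_residency_documented",  # vendor states where data is stored
    "sso_supported",              # integrates with corporate identity
    "on_approved_vendor_list",    # vendor has passed central review
}

def unmet_standards(tool_facts: dict) -> list:
    """Return the central standards a candidate tool does not yet meet."""
    return sorted(REQUIRED - {k for k, v in tool_facts.items() if v})

gaps = unmet_standards({
    "dpa_signed": True,
    "data_residency_documented": True,
    "sso_supported": False,
    "on_approved_vendor_list": True,
})
print(gaps if gaps else "cleared for departmental adoption")
# -> ['sso_supported']
```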
Mandatory data classification. Before any AI tool receives organisational data, that data needs to be classified. Public, internal, confidential, regulated — each classification maps to different rules about which tools can process it and where. Most organisations haven’t done this work, and it’s a prerequisite for meaningful AI governance.
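Here's a sketch of what that mapping can look like in code, using the four tiers named above. The vendor IDs and the onshore flag are invented for illustration; the useful part is that every classification resolves to an explicit, checkable rule.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandlingRule:
    allowed_tools: frozenset  # vendor IDs cleared for this tier ("*" = any)
    onshore_only: bool        # data must stay in Australian regions

# Illustrative policy: the tier names are from this article, the vendors are made up.
POLICY = {
    "public":       HandlingRule(frozenset({"*"}), onshore_only=False),
    "internal":     HandlingRule(frozenset({"vendor-a", "vendor-b"}), onshore_only=False),
    "confidential": HandlingRule(frozenset({"vendor-a"}), onshore_only=True),
    "regulated":    HandlingRule(frozenset(), onshore_only=True),  # no AI tools at all
}

def may_process(classification: str, tool: str, tool_is_onshore: bool) -> bool:
    """Can this tool receive data at this classification level?"""
    rule = POLICY[classification]
    if rule.onshore_only and not tool_is_onshore:
        return False
    return "*" in rule.allowed_tools or tool in rule.allowed_tools

assert may_process("internal", "vendor-b", tool_is_onshore=False)
assert not may_process("regulated", "vendor-a", tool_is_onshore=True)
```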
Vendor management discipline. Every AI vendor should have a reviewed and approved data processing agreement. Not just the terms of service — an actual negotiated agreement that addresses data residency, retention, model training exclusions, and breach notification. If a vendor won’t sign one, they shouldn’t process your data.
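For completeness, the same idea applied to DPAs: record the negotiated terms as structured data and flag gaps automatically. The field names and the 72-hour breach notification benchmark are assumptions for the example, not requirements drawn from the Privacy Act.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical DPA record; the fields mirror the four terms discussed above.
@dataclass
class DataProcessingAgreement:
    vendor: str
    data_residency: Optional[str]     # e.g. "au-southeast", None if unspecified
    retention_days: Optional[int]     # None if the vendor won't commit
    training_on_customer_data: bool   # True if the vendor may train on your data
    breach_notification_hours: Optional[int]

def dpa_gaps(dpa: DataProcessingAgreement) -> list:
    """Flag missing commitments on residency, retention, training, and breaches."""
    gaps = []
    if dpa.data_residency is None:
        gaps.append("no data residency commitment")
    if dpa.retention_days is None:
        gaps.append("no retention limit")
    if dpa.training_on_customer_data:
        gaps.append("no model-training exclusion")
    if dpa.breach_notification_hours is None or dpa.breach_notification_hours > 72:
        gaps.append("breach notification slower than 72 hours")
    return gaps

agreement = DataProcessingAgreement(
    vendor="vendor-a", data_residency="au-southeast",
    retention_days=30, training_on_customer_data=True,
    breach_notification_hours=None,
)
print(dpa_gaps(agreement))
# -> ['no model-training exclusion', 'breach notification slower than 72 hours']
```

An empty gap list becomes the entry ticket to the approved vendor list; anything else goes back to negotiation.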
The Board Dimension
One more thing. AI governance isn’t just an IT problem — it’s a board-level risk management concern. The reputational, regulatory, and operational risks of ungoverned AI use are significant enough that boards need visibility and reporting.
I’ve seen a few organisations appoint a Chief AI Officer or elevate AI governance to a board committee. Whether that specific structure makes sense depends on the organisation, but the principle is right: someone with authority and accountability needs to own this at the executive level.
The organisations that treat AI governance as a compliance checkbox will get caught out. The ones that treat it as an evolving, strategic function will be much better positioned. This problem isn’t going away. It’s accelerating.