Mar 29, 2026

AI Is Moving Fast — But Governance Is Falling Behind

Insights from our latest webinar

AI is showing up across organisations faster than most leadership teams expected.

Tools are being used. Pilots are running. Subscriptions are being purchased.

On paper, it can look like progress.

But as discussed in our recent Govn365 webinar with Gerry Lynch and Mark Laurence:

Most organisations are busy with AI — but far fewer are actually changing how they work.

And that gap is where AI governance risk begins.

As AI adoption accelerates, AI governance is quickly becoming a priority for boards and leadership teams — not just from a risk perspective, but from a value perspective.

While this discussion focused on AI, the underlying governance challenge is not unique to AI.

The same patterns are now showing up across other emerging risk areas — particularly cyber.

The uncomfortable question for boards on AI governance

Many boards are starting to ask:

Are we seeing real value from AI — or just activity?

Because there’s a difference.

Running pilots, appointing an “AI lead”, or trialling tools can create momentum.
But none of it matters unless it leads to:

  • changes in how people make decisions
  • improvements in how work gets done
  • measurable outcomes at an organisational level

Otherwise, AI becomes an accessory — not a capability.

Start with value — not technology

One of the clearest themes from the session was this:

AI is not a technology challenge. It’s a leadership and culture challenge.

Boards should not be asking:

  • What tools are we using?

They should be asking:

  • What problem is this solving?
  • Where is value actually being created?

The organisations seeing traction are focusing on three areas:

  • improving internal knowledge and communication
  • increasing customer-facing efficiency
  • lifting decision quality at leadership level

Without that clarity, AI initiatives tend to drift — or stall.

The hidden risk: shadow AI and lack of governance

A consistent theme across organisations is this:

AI adoption is happening — whether leadership is ready or not.

When organisations don’t provide:

  • clear guidance
  • sanctioned tools
  • practical guardrails

people find their own way.

This is where shadow AI risk emerges — and with it:

  • data risks
  • inconsistent usage
  • lack of oversight

In many cases, the biggest risk isn’t misuse.

It’s pretending AI isn’t already in use.

What good governance looks like in practice (AI and cyber)

A common instinct is to respond to AI with policy first.

But the session made this clear:

Governance should follow understanding — not lead it.

When policies are introduced too early, they tend to:

  • be overly restrictive
  • create fear
  • slow adoption

Effective AI governance frameworks are simple and enabling.

At a minimum, organisations should have:

  • clear data boundaries (what can and can’t be used in AI tools)
  • a tiered view of tools and risk levels
  • a clear escalation path

And importantly:

The best policies explain what people can do — not just what they can’t.

What boards should track for effective AI governance

Another key insight: boards often receive updates on AI activity — but not on progress.

A more useful lens for AI governance and oversight is tracking movement across three stages:

Access → Habit → Value

That includes visibility on:

  • how widely tools are available
  • how consistently they’re being used
  • whether they’re delivering measurable outcomes

Without this, it’s difficult to distinguish momentum from noise.

The real governance challenge: accountability in AI

AI doesn’t remove accountability.

Decisions are still made by people — AI simply informs them.

Which raises a critical question for boards:

  • Are the right human checkpoints in place?
  • Are people equipped to challenge AI outputs?
  • Do processes support oversight — or bypass it?

Failures in this space are rarely technical.

They are:

  • capability issues
  • process design gaps
  • or governance blind spots

Why this matters for cyber governance

Many of the same governance gaps are now showing up in cyber.

Boards are receiving:

  • regular reporting
  • technical updates
  • assurance from management

But still asking:

  • Do we have real visibility into risk?
  • Are we relying too heavily on reporting without challenge?
  • Where might false confidence be creeping in?

The pattern is consistent:

Information is increasing — but confidence in oversight isn’t always keeping pace.

Which is exactly where governance needs to evolve.

Final thought

The organisations that are getting AI right are not the ones moving fastest.

They are the ones aligning:

  • people
  • process
  • and purpose

Because that’s where value shows up.

And until those three move together, AI will continue to look more advanced on paper than it is in practice.

Want to explore this further?
Watch the full webinar discussion here.

If you’d like to explore how this applies in practice — particularly in a cyber context — we’re covering this in our upcoming session: Cyber Governance in Practice: What Boards & CEOs Need to Get Right Now.


If you’re starting to think about how this applies in your organisation, we’re always open to a conversation.

Book in a chat with us here.