Mar 29, 2026
Insights from our latest webinar
AI is showing up across organisations faster than most leadership teams expected.
Tools are being used. Pilots are running. Subscriptions are being purchased.
On paper, it can look like progress.
But as discussed in our recent Govn365 webinar with Gerry Lynch and Mark Laurence:
Most organisations are busy with AI — but far fewer are actually changing how they work.
And that gap is where AI governance risk begins.
As AI adoption accelerates, AI governance is quickly becoming a priority for boards and leadership teams — not just from a risk perspective, but from a value perspective.
While this discussion focused on AI, the underlying governance challenge is not unique to AI.
The same patterns are now showing up across other emerging risk areas — particularly cyber.
Many boards are starting to ask:
Are we seeing real value from AI — or just activity?
Because there’s a difference.
Running pilots, appointing an “AI lead”, or trialling tools can create momentum. But none of it matters unless it leads to real change in how work gets done.
Otherwise, AI becomes an accessory — not a capability.
One of the clearest themes from the session was this:
AI is not a technology challenge. It’s a leadership and culture challenge.
Boards should not be asking which tools the organisation has adopted.
They should be asking whether AI is actually changing how the organisation works.
The organisations seeing traction are focusing on a small number of areas with real clarity.
Without that clarity, AI initiatives tend to drift — or stall.
A consistent theme across organisations is this:
AI adoption is happening — whether leadership is ready or not.
When organisations don’t provide clear guidance and approved tools, people find their own way.
This is where shadow AI risk emerges: AI use that leadership can neither see nor manage.
In many cases, the biggest risk isn’t misuse.
It’s pretending AI isn’t already in use.
A common instinct is to respond to AI with policy first.
But the session made this clear:
Governance should follow understanding — not lead it.
When policies are introduced too early, they tend to constrain rather than enable.
Effective AI governance frameworks are simple and enabling.
At a minimum, organisations should have a clear, simple policy on acceptable AI use.
And importantly:
The best policies explain what people can do — not just what they can’t.
Another key insight: boards often receive updates on AI activity — but not on progress.
A more useful lens for AI governance and oversight is tracking movement across three stages:
Access → Habit → Value
That includes visibility on where the organisation currently sits across those stages, and how quickly it is moving between them.
Without this, it’s difficult to distinguish momentum from noise.
AI doesn’t remove accountability.
Decisions are still made by people — AI simply informs them.
Which raises a critical question for boards: who is accountable when an AI-informed decision goes wrong?
Failures in this space are rarely technical. They are failures of leadership, culture, and accountability.
Many of the same governance gaps are now showing up in cyber.
Boards are receiving more cyber reporting than ever, but are still asking whether they genuinely understand their exposure.
The pattern is consistent:
Information is increasing — but confidence in oversight isn’t always keeping pace.
Which is exactly where governance needs to evolve.
The organisations that are getting AI right are not the ones moving fastest.
They are the ones aligning leadership, culture, and governance.
Because that’s where value shows up.
And until those three move together, AI will continue to look more advanced on paper than it is in practice.
Want to explore this further? Watch the full webinar discussion here.
If you’d like to explore how this applies in practice — particularly in a cyber context — we’re covering this in our upcoming session: Cyber Governance in Practice: What Boards & CEOs Need to Get Right Now
If you’re starting to think about how this applies in your organisation, we’re always open to a conversation.
Book in a chat with us here.