
AI governance overview: stop panicking and fix the basics


Introduction

Everywhere I look, organisations are talking about AI governance like it’s an emergency.

Some are rushing to be seen as AI-first. Others are trying to chase every new setting, every new feature and every new portal toggle that shows up in their tenant. Both reactions are understandable. Neither one is especially healthy.

I get it. AI is loud right now. It’s exciting, messy, commercially attractive and full of risk at the same time. Boards want a strategy. Leaders want quick wins. Security and compliance teams are being told to make the environment “AI-ready”, often without the time, clarity or foundations to do that properly.

That’s where I think a lot of organisations are getting this wrong.

This post is my AI governance overview for people who feel like they’re stuck between hype and panic. My view is pretty simple. You do need AI governance. You do need controls. You do need to understand the risks. But if your fundamentals are weak, piling on AI-specific controls won’t magically fix the real problem.



Why AI governance feels so messy right now

AI governance has become one of those phrases that means everything and nothing at the same time.

For some people, it means controlling which tools users can access. For others, it means dealing with data exposure, risky prompts, insider risk, oversharing, compliance, retention, agent sprawl, third-party apps, licensing, procurement, or all of the above.

That’s part of the problem. The conversation gets so broad that people lose sight of what actually matters first.

A lot of organisations are treating AI as if it exists in a separate universe. It doesn’t. It sits on top of your existing estate. Your users. Your identities. Your devices. Your access model. Your permissions. Your data. Your mess.

If those things are weak, your AI risk is already higher than it should be.


The first trap: buying AI to look AI-first

The first type of organisation I keep seeing is the one that wants to be seen as ahead of the curve.

They want to say they’re a frontier company. They want to talk about agents, AI apps and assistants in every meeting, every marketing deck and every PR update. So they buy Copilot licences and other AI tooling at pace, often before they’ve worked out where the value is, who actually needs access and what controls should exist around it.

That creates a few obvious problems.

First, they don’t always stop to ask whether everyone actually needs the licence they’re being given. A shiny rollout is not the same thing as a useful rollout.

Second, they allow a free-for-all culture around AI use. People start experimenting with tools, plugging things into workflows, uploading content, building small apps and creating processes that nobody is really watching.

Third, governance turns into theatre. It looks modern from the outside, but under the surface there’s no real clarity around risk ownership, data handling, acceptable use, access control or monitoring.

I’m not against adopting AI quickly. I am against buying your way into the appearance of maturity.


The second trap: chasing every AI setting

The second type of organisation is almost the opposite.

They’re worried about AI, which is fair enough, and they’re trying to be “AI-ready”. So they keep scanning for every new feature, every new recommendation and every new portal setting Microsoft releases. It becomes a constant whack-a-mole exercise.

A new AI control appears. Everyone panics. A new setting lands in a different portal. Everyone panics again. A new product gets announced. Someone adds it to a risk tracker before anyone has worked out whether it even applies.

Yes, those things matter. You should care about AI-specific controls. You should understand things like prompt injection risk, agent visibility and data exposure. You should know what’s enabled, what’s blocked and what’s being monitored.

But I think a lot of teams are focusing on the wrong layer first.

I still see organisations talking about AI governance while their Conditional Access is weak. They allow BYOD with very little control. They accept weak authentication methods. They don’t follow least privilege. They don’t use just-in-time access. Their users are over-permissioned. Their admins are over-permissioned. Their SharePoint sites, Teams, groups and apps are over-permissioned. Anonymous or overly broad sharing is still common. Public groups still exist where they shouldn’t. Visibility is poor. Data protection is inconsistent. Sensitivity labels aren’t properly in place. DLP is patchy. Insider risk is missing or immature. Stale data keeps piling up and nobody wants to deal with it.

That isn’t an AI governance problem. That’s a fundamentals problem.

AI just makes it more obvious.


AI governance overview starts with the basics

If I had to reduce this whole conversation to one point, it would be this.

AI governance starts with good security, good identity hygiene and good data governance. Not the other way around.

At Threatscape, we keep coming back to the same order for a reason. Entra, Defender and Intune are your first line of defence. Then you look hard at permissions and access across Teams, Exchange, SharePoint and OneDrive. Then you use Purview and the rest of the stack to protect, classify, monitor and govern the data properly.

That sequence matters.

Who is using the AI tools? Humans. Who is creating AI tools in most organisations? Humans. Who can expose data, build risky workflows, over-share content or misuse access? Humans.

So if the user who can access or build AI tooling gets compromised, you have a much bigger issue than whether a niche AI setting was switched on in some admin portal.

This is the bit I wish more organisations understood. AI governance is not about collecting every AI control like Pokémon cards. It’s about reducing risk in the right order.


What should sit at the top of the risk register

If you want a calmer and more useful approach, I’d prioritise it like this.


1. Identity and access

Start with authentication strength, Conditional Access, privileged access, break-glass design, least privilege and just-in-time access. If identity is weak, everything built on top of it is shakier than it looks.
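To make "start with authentication strength" concrete, here is a minimal sketch of the kind of triage you might run over an exported user list: find accounts with no phishing-resistant method registered, privileged accounts first. The field names and method labels are illustrative assumptions, not a real Entra export schema.

```python
# Hypothetical sketch: triage exported users by authentication strength.
# Field names ("upn", "privileged", "methods") are illustrative only.

STRONG_METHODS = {"fido2", "windowsHelloForBusiness", "passkey"}

def weakest_links(users):
    """Return users with no phishing-resistant method registered,
    privileged accounts listed first."""
    flagged = [u for u in users if not (set(u["methods"]) & STRONG_METHODS)]
    return sorted(flagged, key=lambda u: not u["privileged"])

users = [
    {"upn": "admin@contoso.example", "privileged": True,  "methods": ["sms"]},
    {"upn": "dev@contoso.example",   "privileged": False, "methods": ["fido2"]},
    {"upn": "hr@contoso.example",    "privileged": False, "methods": ["phoneAppNotify"]},
]

for u in weakest_links(users):
    print(u["upn"], "privileged" if u["privileged"] else "standard")
```

Even a rough list like this tells you where to spend your next hour, which is the whole point of putting identity first.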


2. Device trust

Be honest about endpoint hygiene. Are corporate devices actually healthy? Are they managed properly? Are risky devices blocked from sensitive access? Are you allowing unmanaged or lightly managed devices into places they shouldn’t be?


3. Permissions and sharing

This is where so much silent risk lives. Review permissions across SharePoint, Teams, Exchange, groups, apps and sites. Reduce oversharing. Cut back public access. Get rid of “share with anyone” habits where they aren’t justified.
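A sharing review doesn't have to start with a big tooling project. As a sketch, assuming you can export sharing links with a site name and a link scope (the scope values below echo common SharePoint link types, but the rows and field names are invented for illustration), you can rank sites by how many risky links they carry:

```python
# Hypothetical sketch: flag risky sharing links from an exported report.
# Rows and field names are illustrative, not a real export schema.

RISKY_SCOPES = {"anonymous", "organization"}  # tighten to taste

def oversharing_report(links):
    """Group risky links by site so owners can review the worst first."""
    by_site = {}
    for link in links:
        if link["scope"] in RISKY_SCOPES:
            by_site.setdefault(link["site"], []).append(link["url"])
    # sites with the most risky links first
    return sorted(by_site.items(), key=lambda kv: -len(kv[1]))

links = [
    {"site": "HR", "scope": "anonymous",
     "url": "https://contoso.example/sites/hr/payroll.xlsx"},
    {"site": "HR", "scope": "organization",
     "url": "https://contoso.example/sites/hr/handbook.docx"},
    {"site": "DevTeam", "scope": "specificPeople",
     "url": "https://contoso.example/sites/dev/spec.docx"},
]

for site, urls in oversharing_report(links):
    print(site, len(urls))
```

Ranking by volume keeps the exercise focused: the site with the most "anyone" links gets the first conversation.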


4. Data protection

Then move into labels, DLP, retention, insider risk, audit and visibility. If you don’t understand where your sensitive data is and how it moves, AI will expose that weakness faster.
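If DLP still feels abstract, here is the core idea in a few lines: scan text for sensitive patterns before it crosses a trust boundary. Real engines such as Purview use far richer classifiers and confidence scoring; this toy version only illustrates the mechanism, and the patterns are simplified.

```python
# Toy sketch of the pattern matching at the heart of a DLP policy.
# Simplified patterns for illustration; real classifiers are richer.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def classify(text):
    """Return the names of sensitive patterns found in the text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

print(classify("Card 4111 1111 1111 1111 on file"))  # ['credit_card']
print(classify("Nothing sensitive here"))            # []
```

Knowing what the engine is actually doing makes it much easier to reason about where your labels and policies are patchy.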


5. App and licence sprawl

Control who can self-purchase, self-enable or self-build. Make sure there’s a process. AI adoption without software governance is just another version of shadow IT.
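One cheap governance check here is cross-referencing licence assignments against recent activity to spot seats that were bought at pace and never used. The sketch below assumes a simple export with a user and a last-active date; the field names are invented for illustration, not a real billing or usage schema.

```python
# Hypothetical sketch: find licensed seats with no recent activity.
# Field names are illustrative, not a real usage-report schema.
from datetime import date, timedelta

def unused_seats(assignments, today, idle_after_days=30):
    """Return UPNs holding a licence with no activity in the window."""
    cutoff = today - timedelta(days=idle_after_days)
    return [a["upn"] for a in assignments
            if a["last_active"] is None or a["last_active"] < cutoff]

assignments = [
    {"upn": "alice@contoso.example", "last_active": date(2025, 6, 1)},
    {"upn": "bob@contoso.example",   "last_active": None},
]
print(unused_seats(assignments, today=date(2025, 6, 10)))
```

A list of idle seats turns "did everyone actually need that licence?" from a rhetorical question into a reclaim-and-reassign task.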


6. Stale data

Nobody likes talking about stale data because it’s boring and difficult. It still matters. Keeping mountains of unnecessary content around creates more exposure, more noise and more governance pain.
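Stale data gets easier to discuss once it's a number rather than a feeling. As a rough sketch, assuming a storage inventory with paths and last-modified dates (the rows below are made up), you can split the estate into active and stale counts against a cutoff:

```python
# Hypothetical sketch: bucket a storage inventory by last-modified age.
# Inventory rows are illustrative, not a real export schema.
from datetime import date

def stale_summary(files, today, stale_after_days=365 * 2):
    """Split inventory into (active, stale) counts by last-modified age."""
    stale = sum((today - f["modified"]).days > stale_after_days for f in files)
    return len(files) - stale, stale

files = [
    {"path": "/sites/hr/policy-2015.docx", "modified": date(2015, 3, 1)},
    {"path": "/sites/hr/policy-2025.docx", "modified": date(2025, 2, 1)},
]
active, stale = stale_summary(files, today=date(2025, 6, 1))
print(active, stale)  # 1 1
```

Once the stale pile has a size, you can attach retention policies, owners and deadlines to it instead of just dreading it.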


Conclusion

I’m not saying organisations should ignore AI-specific security and governance controls. They matter, and Microsoft does keep expanding them across Copilot, agents and the wider compliance stack.

What I am saying is this: a lot of organisations are panicking over AI governance when the real issue is that their foundations were already weak. AI did not create that problem. It just made it harder to ignore.

So yes, care about AI. Learn the controls. Understand the features. Keep up where you can.

But don’t confuse activity with maturity.

If your identities are weak, your devices are loosely managed, your permissions are messy and your data is barely governed, you are not behind because you missed one AI setting. You are behind because the basics still need work.

That’s actually good news.

It means the path forward is clearer than people think. Calm down. Fix the fundamentals. Prioritise the risks that genuinely matter. Then build your AI governance on top of something solid.

That’s a much better story than panic, and a much safer one too.

