The UK Ministry of Defence is running more than 400 AI projects, each one watched over by a Responsible AI Senior Officer. These are the people meant to keep things ethical. The rules are all there: fairness, accountability, human oversight. The idea is to stop things like accidental escalations, messy procurement, and those awkward moments when nobody knows who is responsible for what the AI just did.
But here’s the catch: the 2025 report says there just aren’t enough people who can actually do this job. The problem isn’t a lack of tech skills. It’s finding leaders who can handle the messy, grey areas of ethics when the pressure is on.
It turns out that watching over AI isn't just about knowing the rules. It takes emotional intelligence, good judgment, and the guts to make tough calls. The old way of leading, where you just had to know the right answer, doesn't work here. With AI, the hard questions aren't just theory. They're real, and they show up every day.
So why does this matter? If even the Ministry of Defence, with all its money and rules, can't find enough people to keep AI in check, what hope do the rest of us have? It's a warning: writing up frameworks and handing out job titles is easy. Actually being ready to handle real-world AI is something else entirely.
Before you set up another AI committee, ask yourself: do you have anyone who can actually tell when an AI has gone too far? Can they explain why it matters, and are they willing to pull the plug if they have to? The Ministry’s experience says most of us don’t—and you can’t just teach these skills overnight.