AI policy leaders’ series: Mark Bailey, Department Chair for Cyber Intelligence and Data Science at the U.S. National Intelligence University
Mark Bailey on the debate surrounding autonomous weapons and the interpretability of AI.
The National Intelligence University is a federal research university within the United States government that educates the U.S. intelligence community. Mark chairs the university’s Cyber Intelligence and Data Science department, and is also the author of Unknowable Minds: Philosophical Insights on AI and Autonomous Weapons, a book that explores how far we are able to understand modern AI, and contends that the limits of interpretability have implications for accountability in critical situations.
The views expressed here are Mark’s own and do not necessarily represent the views of the National Intelligence University or the US government.
There are inherent limits to the extent that we can understand the AI “mind.”
I am interested in complexity, and in particular in the behaviour of very complex systems. Typically, the properties that emerge from these systems aren’t predictable. This is a big problem in philosophy and in maths, and it leads to what I call algorithmic incompressibility: there is no shorter description, no “shortcut”, for the system’s behaviour than the behaviour itself. The only way to know what the system will do is to run it step by step and watch.
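To make the idea concrete, here is a minimal Python sketch (my illustration, not an example from Mark or his book) using the Rule 30 cellular automaton, a standard textbook example of this kind of irreducibility: as far as anyone knows, the only way to learn what the pattern looks like at step N is to actually compute all N steps.

```python
# Minimal sketch of algorithmic incompressibility: the Rule 30 cellular
# automaton. Each cell becomes left XOR (centre OR right). There is no
# known closed-form shortcut to its state at step N; you must simulate.

def rule30_step(cells: list[int]) -> list[int]:
    """Apply one Rule 30 update to a row of 0/1 cells (wrap-around edges)."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

# Start from a single live cell and run the automaton forward, printing
# each step. The only way to reach step 15 is via steps 0 through 14.
row = [0] * 31
row[15] = 1
for step in range(16):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

The rule itself fits in one line, yet the pattern it produces is, as far as we know, unpredictable except by running it, which is exactly the gap between knowing a system’s mechanism and knowing its behaviour.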
Take Newton’s laws. These were approximations of natural laws, and they were good enough for a long time; they allowed us to send people into orbit. Then came Einstein’s special and general relativity, an even better approximation, which allowed us to make better predictions still and enabled things like satellite navigation. Now compare that to markets. These are too complex for accurate prediction; we don’t have the tools. We can only model behaviours. And that’s what we’re looking at with AI.
AI should not be given the licence to make life-or-death decisions.
I argue in my book that the AI “mind” is fundamentally unknowable. Machine systems solve problems through statistical optimization and emergent dynamics rather than human-style deliberation. So we will often be unable to reconstruct or predict their choices. That knowledge gap creates unacceptable risk when life-and-death decisions are delegated to machines.
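To see what “statistical optimization rather than human-style deliberation” looks like at the smallest possible scale, here is a toy sketch (again my illustration, not drawn from the book): a tiny neural network trained on the XOR function behaves correctly, yet its “reasoning” is nothing more than a handful of real-valued weights, with no human-readable rule anywhere inside.

```python
import numpy as np

# Toy illustration: train a 2-4-1 sigmoid network on XOR with plain
# full-batch gradient descent, then inspect what it has "learned".
# The behaviour comes out right, but the explanation for any output
# is just these real numbers; no rule like "1 when inputs differ".

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # 2 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # 4 hidden -> 1 output

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20_000):  # gradient descent on squared error
    h = sig(X @ W1 + b1)
    y = sig(h @ W2 + b2)
    dy = (y - t) * y * (1 - y)       # output-layer error signal
    dh = (dy @ W2.T) * h * (1 - h)   # backpropagated hidden-layer signal
    W2 -= lr * (h.T @ dy); b2 -= lr * dy.sum(axis=0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

h = sig(X @ W1 + b1)
print(np.round(sig(h @ W2 + b2), 3).ravel())  # should be close to [0, 1, 1, 0]
print(W1)  # the network's "mind": opaque real-valued weights
```

If even four parameters feeding four more resist a human-readable account of “why”, the problem only compounds for systems with billions of weights making time-critical decisions.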
We’ve been talking about different autonomy modes: fully autonomous, “human-in-the-loop,” “human-on-the-loop.” But these guardrails become brittle as speed and complexity rise. Supervisory humans can be outpaced, or reduced to rubber stamps, and that reopens the very accountability gaps these frameworks are meant to close. Ultimately, the question I am asking is: does allowing an AI to decide whether humans live or die respect human dignity?
Autonomous weapons make war more likely, not less.
For proponents of AI, autonomous weapons are appealing: they are seen as a way to limit the human cost of conflict. If it’s robots killing robots, it’s easy to argue that we’re reducing the loss of life, and that this is therefore a good thing.
But we need to be careful about what we unleash. Every war has a political cost, part of which is the loss of life, and that exerts downward pressure on the appetite for conflict. Eliminate that cost and you lower the barrier to entering wars and to prolonging them. In the end, autonomous weapons make protracted wars more likely, not less.
The conversation is happening not only at the intergovernmental level but also at the departmental level.
In the US, the political debate on autonomous weapons has slowed. Certain elements of the Department of War are very concerned about autonomous weapons, though some political leaders don’t seem as concerned. But I think the US can, and should, shape the debate internationally.
I think the current administration could facilitate these conversations, and we need to have this debate to avoid a race to the bottom. Take nuclear weapons: if we hadn’t had those global conversations, things would have spiralled. I don’t see that changing given the current political climate, and of course we have to square it with the fact that we want to remain competitive with other powers like China and Russia. But I remain hopeful, because the US military in general is concerned about these issues.
And I think the UK also has a role to play. The US and the UK are both well positioned to influence this debate going forward. We have a strong diplomatic relationship, and we ought to leverage it to shape the conversation before our adversaries beat us to it.
This is as much a philosophical question as a policy question.
Policy is the pragmatic application of ideas to a particular problem, and the philosophical debate ought to inform the policy debate. But the normative question of whether we should build a technology is significantly outpaced by the question of whether we can.
Our experience with social media tells us that our initial perceptions of a technology’s benefits can be wrong. Social media was going to connect us, bring us together, and give a voice to those who never had one. Instead, it compromised our agreement on facts, promoted authoritarian tendencies, and undermined democratic government.
At least this time we’re thinking ahead, not just about artificial general intelligence or superintelligence, but also about the impact of disinformation, biased datasets, and the like. But we desperately need to make these philosophical conversations relevant to policymakers.
These conversations give me hope.
How optimistic should we be about our ability to curb the human instinct to build things because we can, rather than because we should? I’m optimistic. History is littered with examples where we’ve flown close to the line but averted disaster.
I hope we can do it again this time, and that our continued quest for technological development is not where we come unstuck. AI is a paradigm shift: it is the first time we are unable to explain what we have created. We need to have more of these conversations to make sure it doesn’t get out of hand.


