AI policy leaders’ series: Mark Brakel, Global Director of Policy at the Future of Life Institute
The Future of Life Institute works to steer transformative technology towards benefiting humanity and away from large-scale risks. It is the originator of the landmark Asilomar AI Principles, and is best known to the public for its open letters endorsed by high-profile public figures.
We caught up with Mark Brakel, who leads FLI’s global policy work across US, European, multilateral and military AI. He is a former policy advisor and diplomat for the Dutch government.
How would you characterise the AI policy debate at the moment?
We’ve seen similar dynamics with past technologies that brought benefits but also had societal implications. Take oil, cigarettes, nuclear. People were initially excited. And then we had to have some serious conversations.
I view AI through the same lens. On one hand, there is a growing body of evidence on how we might benefit from it. In parallel, there is a growing body of evidence on the risks. And there is a growing industry lobby whose purpose is to manage the risk to business.
It’s no different to the 1970s with climate change. You couldn’t find a single scientist at the time who would deny it, and still we ran into trouble. The same goes for social media: the companies were aware of the downsides for women and girls long before the public was.
The corporates are bringing in all this talent. Ben Buchanan, who was President Biden’s Special Advisor on AI. Elizabeth Kelly, the former Director of the US AI Safety Institute, housed within the National Institute of Standards and Technology. And Rishi Sunak of course.
Let’s learn the lessons for AI, and let’s not fall into the same traps as we did with social media. It’s useful that we had that experience – but let’s be clear, the scale of change AI brings will be far more seismic.
Do you worry that the lack of sophisticated technical knowledge we witnessed among legislators and policymakers on social media is inevitably going to be an issue with AI as well?
Not necessarily. I’ve been impressed by the New York Assemblymember Alex Bores [now standing for Congress], by conversations with the MEP Dragos Tudorache [now part of the European Commission] in the context of the EU AI Act, and by Josephine Teo, the Digital Minister in Singapore.
These are some of the most knowledgeable people I’ve had the opportunity to speak to. I’m not saying I always agree with them, but you can have a sophisticated policy conversation with them.
How do you navigate the tension in AI advocacy between philosophical considerations and relevance in the policy sphere?
I would not frame it as a tension. If you’re worried about loss of control, or if you think there should be international red lines in certain areas, then those are not just abstract concerns.
They lead directly into policy questions. If there is a moratorium on superintelligence, for instance, what would you actually do? How would you govern automated R&D? What rules would you set around the use of biological data? Those are practical policy problems, even if they originate in deeper questions about risk and responsibility.
There is also a media conversation here, which can make things seem more abstract or polarised than they are. But in practice, many organisations are trying to bridge that gap.
What does that look like in practice?
The Future of Life Institute is probably best known for its open letters – the most recent being our pro-human declaration, and before that our calls for a moratorium on superintelligence and for a ban on “slaughterbots”. That’s the public side of what we do, but it’s not the only thing we do. My team brings policy thinking into these debates, but the point is not to separate policy from philosophy. It is to connect them.
And we’re not alone. If you look, for example, at India’s new AI guidelines, there is an effort to link big-picture principles to concrete issues like making sure education works in different languages. That is an example of a document that talks about both values and implementation.
You see something similar in the EU AI Act. I’m not saying these documents are perfect, by the way; in many ways I am not supportive of these specific approaches. But whatever the flaws, the Act’s code of practice in particular is an impressive example of how to address extreme risks with practical measures.
Why do you think political decision-makers seem so reluctant to make big decisions on AI safety, in a way their forebears were not for, say, aviation or nuclear power?
Tech sometimes behaves as though it should be exempt from the kinds of constraints every other high-impact industry accepts. Silicon Valley has carved out a niche in which regulation is often treated as uniquely suspect, but compared to every other industry, tech is the outlier. In pharmaceuticals or aviation, for example, regulation is not assumed to inhibit innovation. It is part of how trust, legitimacy, and safety are built. The same should be true in AI.
And history shows why this matters. Think about the fall in nuclear demand after Chernobyl and Fukushima. When risks are not governed credibly, public trust collapses and the whole sector can suffer. So good advocacy is not about floating above policy in the philosophical realm. It is about taking the philosophical concerns seriously enough to turn them into workable governance.
You’re pretty unique in the AI policy space in using video on LinkedIn to convey your messages – where did that come from?
Ha, it’s not that complicated really. The overwhelming majority of people talking about this issue work for companies or governments and can’t speak openly. I wanted to take the conversation to a broader policy audience.
I used to be a diplomat in Baghdad, and regularly used video as a medium for communication. I felt this might be the time to revive it!