AI policy leaders’ series: Alexandru Voica, head of corporate affairs and policy, Synthesia
We speak to Alexandru Voica of AI video platform Synthesia about his route into AI policy, current approaches to AI regulation, and why we need to begin prioritising opportunity over risk.
My route into AI policy was somewhat unusual
I originally studied engineering and computer hardware, and I began my career as an engineer. At the same time, I always had a strong interest in the humanities and political debate.
That interest grew beyond a hobby. Noticing how often engineers criticised policy and communications work, I decided to take a calculated risk and try it myself. My plan was to test this path for a year, knowing I could return to engineering if it didn’t work out.
That was about 15 years ago now. I initially worked in communications, but over time, my role expanded into policy.
At Synthesia, I help people understand AI developments. While much attention is given to large language models, generative AI technology is evolving toward video, world models and agentic systems. We develop audio and video technologies, and now work with many of the Fortune 500.
Two approaches to regulation
There are broadly two ways to develop regulation. One is a rules-based approach, and the other is an outcomes-based approach.
The UK has succeeded in some ways by favouring outcomes-based regulation: it defines goals, such as preventing deepfake harms, but lets companies find the best solutions themselves.
By contrast, the EU has adopted a more rules-based, prescriptive approach. It is a large horizontal framework that tries to cover product safety and many other aspects of AI. The difficulty is that such a framework can become very complex and difficult to manage in practice.
In addition, AI technology evolves rapidly, often changing within weeks. This speed makes it difficult for prescriptive regulation to keep up, since such rules can quickly become outdated as technology progresses beyond their initial scope.
History shows that what is available today may not be relevant tomorrow. So, a flexible approach focused on containing misuse is often more practical than trying to anticipate every future development.
The current system is broadly fit for purpose. Legal updates, such as the Crime and Policing Bill, show progress, but ongoing caution is needed as technology evolves.
Risk over opportunity
Across Europe, including the UK, policy discussions in both public and private sectors often prioritise risks over opportunities.
There is sometimes a tendency to move toward extremes, with some denying risks and others focusing solely on them.
In the public sector, processes can become lengthy and bureaucratic. Procurement often involves dozens of pages of risk assessments and fragmented steps across departments that do not communicate with one another. This can lead to situations where the social impact of a startup is treated the same as that of a large business, which is not always sensible.
In the private sector, companies recognise AI’s opportunities but disconnects often arise between board-level intent and actual implementation. Boards may advocate for AI use, yet strategic adoption rarely follows. Instead, smaller experiments typically occur at lower organisational levels.
Many of these experiments fail, and AI is sometimes used only for small tasks such as drafting emails rather than being adopted more deeply. As a result, companies do not always fully leverage the technology.
Rather than running demonstrations simply to prove that AI can be used, organisations should prioritise outcomes, focusing on utility and value over impressive but ineffective showcases.
Key regulatory concerns
An example often discussed in this context is Romania’s industrial policy approach in the 1990s. At the time, Romania was facing a deep recession following the collapse of the communist system. The country had been heavily industrialised, and many industries collapsed, leading to high unemployment.
In response, the government introduced measures designed to encourage technical talent. For example, people who had studied engineering and worked in engineering roles at companies paid almost no income tax. This motivated young people to study engineering and contributed to the development of a startup ecosystem.
Measures like this can change behaviour by creating incentives for people to enter technical fields. Similar approaches have been used in countries such as Denmark and France.
In the UK, there can also be a perception problem. London is sometimes portrayed internationally as unsafe or declining, with narratives about crime and social problems. In many cases, this perception does not align with reality, yet it can still shape how people view the UK as a place to live and work.
To support AI companies, the government should focus on talent and skills. Incentives attract workers and help graduates launch technology startups. Technical founders are especially important in this ecosystem.
UK startups often have strong technical teams but struggle to scale, particularly because they cannot match US salaries when hiring senior executives with relevant scaling experience.
Effective financial incentives can strengthen the ecosystem by encouraging desired behaviours, and the policies adopted in Romania, Denmark and France offer potential lessons for the UK.