In conversation with Isabela Parisio, postdoctoral research associate at Responsible AI UK
Isabela Parisio on regulatory sandboxes, asymmetries of information in the regulatory process and increasing participation in policy
Isabela is a postdoctoral research associate at King’s College London and Responsible AI UK, working at the intersection of law, policy, and emerging technologies. Originally trained as a lawyer, she has a background in administrative law and government. Here, we speak to Isabela about her entry into AI policy and her work at RAI.
From administrative law to AI policy
Isabela’s interest in artificial intelligence began with initial exposure to automation and digital advertising tools in professional environments, long before she recognised them as AI systems. This led her to the Center for AI and Digital Policy (CAIDP) in the US, where she worked as a research assistant and policy analyst.
At the time, discussions around the EU AI Act were gaining momentum, while regulatory approaches in the United States were shifting under different political administrations. These experiences prompted her move into AI governance and policy, a field Isabela describes as fast-paced and collaborative.
“Now a large part of my work involves converting complex research into language that policymakers, regulators and industry actors can use,” explains Isabela.
At King’s College London, Isabela’s research is funded through Responsible AI UK, a programme that connects academics, industry and policymakers. The organisation supports interdisciplinary work across technical and sociotechnical questions, from large language models to governance frameworks.
“My legal expertise helps contribute to questions about how existing laws should be interpreted in relation to AI systems and how regulatory criteria can be operationalised in practice,” says Isabela.
Building a regulatory sandbox
One of Isabela’s main projects is a regulatory AI sandbox developed through an international collaboration between the UK and India.
The project, led by early-career researchers, began in September 2024. It focuses on what Isabela describes as the “implementation gap” in AI regulation.
As Isabela explains, “many policy guidelines set out high-level principles such as accountability or transparency. However, they often provide limited guidance on how developers and deployers should apply them. So, organisations can often interpret the same principles differently and regulators face challenges assessing compliance.”
The sandbox draws inspiration from financial services regulatory sandboxes, where regulated environments allow innovators to test new technologies while regulators observe their impact.
So far, the team working on the project has created hypothetical regulatory language based on comparative research, including the EU AI Act, Singapore’s Veritas initiative, and literature on fintech sandboxes.
Next, computer scientists and engineers will present a technical model to the policy team, fostering a structured dialogue on topics such as accuracy, explainability, and measurement standards.
Highlighting asymmetries
“This joint process has highlighted an ongoing asymmetry of information between developers and regulators,” says Isabela.
By simulating regulatory decision-making, the project examines how rules might be designed, tested and enforced.
The work will be followed by an implementation phase, in which participants will tackle practical questions about whether models are accurate and compliant. Running for approximately 18 months, the sandbox is intended as a pilot for future initiatives.
Alongside research, Isabela contributes to Responsible AI UK’s policy engagement activities.
These include public events, workshops, town halls, and responses to government and parliament calls for evidence on issues such as AI governance and copyright. “Translating academic research into implementable policy recommendations is still a difficult but necessary task, as evidence-based input strengthens both the quality of regulation and democratic participation,” notes Isabela.
Looking ahead
Isabela identifies two priorities for AI policy.
“The first is standardisation, particularly the development of shared policy frameworks that help regulators and developers interpret principles consistently.”
“The second is public participation, making sure that policy debates include wider societal perspectives rather than remaining confined to technical or institutional actors.”
Through her ongoing work at RAI, Isabela is already helping to do both.