AI policy leaders’ series: Alisar Mustafa, Head of AI Policy & Safety at Duco
Alisar is a seasoned expert in AI policy and regulation with over a decade of experience navigating the intersections of technology, ethics, and governance. At Duco, Alisar helps companies move from high-level principles to real implementation. Her work focuses on translating laws and frameworks into system design requirements, risk controls, and mitigation strategies at scale. She also writes a weekly AI Policy Newsletter.
AI is still evolving, and it’s high-stakes
AI policy matters because we’re regulating a technology that’s still evolving, and the stakes couldn’t be higher. Regulation and innovation aren’t opposites: strong policy creates trust and prevents the kinds of harms that could stall progress altogether. The real competition should be about who can build the best AI that benefits people while also preventing harm.
Those harms aren’t theoretical. Last year, for instance, the Internet Watch Foundation found just two AI-generated child sexual abuse material videos; this year, the organisation has already confirmed nearly 1,300. The company that figures out how to innovate while preventing these harms will set the model others follow. As Anthropic’s CEO Dario Amodei put it, we need a race to the top, not a race to the bottom.
Translating principles into practice
One of the biggest problems in AI policy today is the lack of technical implementation guidance. Principles like “fairness” and “minimising harm” are essential, but without clear definitions and real-world constraints, they don’t translate into safer systems. In practice, everything starts with the data. For example, one project I’ve been leading at Duco involves fine-tuning models using human-generated data on high-risk topics across low-resource languages, the areas where models are most likely to break. These topics evolve fast, vary across regions, and carry serious risks of bias, misinformation, and harm.
We work with global experts to define critical issues and generate prompts and responses that reflect multiple perspectives and prioritise factual accuracy. As AI expands into low-resource markets, this kind of targeted data becomes even more important. If we want AI to be safe and aligned, policymakers need to provide clear technical pathways, including data standards, alongside clear outcomes.
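To make this concrete, a minimal sketch of what one such expert-generated record might look like, stored as JSON Lines with a basic quality gate. All field names here are hypothetical illustrations, not Duco's actual schema:

```python
import json

# Illustrative record for expert-generated fine-tuning data on a
# high-risk topic in a low-resource language. Field names are hypothetical.
record = {
    "language": "sw",                    # ISO 639-1 code for the target language
    "topic": "health_misinformation",    # high-risk topic area
    "prompt": "(expert-written prompt in the target language)",
    "response": "(factually reviewed response)",
    "perspectives": ["medical", "regional"],  # viewpoints the response reflects
    "reviews": 2,                        # independent expert reviews completed
}

def validate(rec: dict) -> list:
    """Return a list of problems; an empty list means the record passes basic checks."""
    problems = []
    for field in ("language", "topic", "prompt", "response"):
        if not rec.get(field):
            problems.append("missing " + field)
    if rec.get("reviews", 0) < 2:
        problems.append("needs at least two independent reviews")
    return problems

jsonl_line = json.dumps(record, ensure_ascii=False)  # one record per line in the dataset
```

The point of the schema is traceability: every response carries its topic, language, and review count, so gaps in coverage (a language with too few records, a topic with unreviewed responses) are easy to audit.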
Building safety from the start
Good regulation compels companies to assess the safety and risks of their systems. For instance, under the EU’s Digital Services Act, companies must measure harms, engage researchers, and build internal systems. When done right, regulation doesn’t just enforce compliance, it drives meaningful investment in safety.
That’s why safety research has to be built in from the beginning. In practice, this means tying obligations to measurable artefacts such as model cards, data statements, evaluation logs, and post-deployment incident records. These give regulators something concrete to assess.
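As one illustration of what a “measurable artefact” could look like in practice, here is a minimal evaluation-log entry. The structure is a hypothetical sketch, not drawn from any specific regulation:

```python
from dataclasses import dataclass

# Hypothetical minimal evaluation-log entry that a regulator could inspect.
# Field names are illustrative, not mandated by any framework.
@dataclass
class EvalLogEntry:
    model_id: str        # which model version was evaluated
    eval_suite: str      # e.g. a bias or toxicity benchmark
    metric: str          # what was measured
    score: float         # result of the run
    threshold: float     # pass/fail bar agreed before the run
    run_date: str        # ISO date of the evaluation

    def passed(self) -> bool:
        """An entry passes only if the score meets the pre-agreed threshold."""
        return self.score >= self.threshold

entry = EvalLogEntry("model-v3", "toxicity-suite-1", "pass_rate", 0.97, 0.95, "2025-01-15")
```

The key design choice is that the threshold is recorded alongside the score, so an assessor can verify the bar was set before the evaluation, not fitted to the result afterwards.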
However, governments also need to invest in public infrastructure by publishing reference tests, releasing red-team scenarios for known harms, and ensuring these reflect low-resource and multilingual contexts. Alongside this, governments should consider creating safe harbours that protect companies that disclose failures in good faith. Without that, we’ll never get honest reporting.
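A published red-team scenario could be as simple as a structured record plus a grading rule. The sketch below is illustrative only; the fields and the crude refusal check are assumptions, not any government's actual format:

```python
# Hypothetical structure for a published red-team scenario.
# Field names and values are illustrative.
scenario = {
    "id": "rt-0001",
    "harm_category": "csam_generation",
    "languages": ["en", "sw"],            # cover low-resource contexts too
    "attack_pattern": "role-play jailbreak",
    "expected_behaviour": "refuse",
}

def grade(scen: dict, model_output: str) -> bool:
    """Pass if the model refused when the scenario expects a refusal.

    A real grader would be far more robust; a prefix check is only a sketch.
    """
    if scen["expected_behaviour"] == "refuse":
        return model_output.strip().lower().startswith(("i can't", "i cannot", "sorry"))
    return True
```

Publishing scenarios in a machine-readable form like this would let every developer run the same reference tests, which is what makes results comparable across companies.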
The EU AI Act’s Code of Practice exemplifies the technically informed policy approach I advocate. The Code moves beyond vague principles to provide concrete implementation guidance through measurable artefacts, such as model cards, evaluation logs, and documented risk mitigation strategies.
Bridging policy and practice
Duco stands apart by providing organisations with solutions that directly bridge policy and practice. Our team brings deep technical knowledge and regulatory expertise to deliver AI Adversarial Monitoring & Red-Teaming, AI Training & Fine-Tuning for high-risk use cases, and custom Safety Evaluation Datasets. These services are designed to operationalise complex regulations efficiently, helping organisations not just comply but improve the safety, reliability, and global readiness of their AI systems.
In addition to technical implementation, Duco guides organisations in navigating the global regulatory landscape. We work closely with leading tech companies to analyse cross-jurisdictional requirements, such as differences between US federal and state rules, EU directives, and APAC compliance regimes. Our integrated strategy ensures clients not only keep pace but also gain a competitive edge in global markets by aligning compliance with business objectives and sustainable market access.