AI policy leaders’ series: Christabel Randolph, Associate Director at the Center for AI and Digital Policy
Christabel Randolph on the evolution of global AI policy and the shared principles guiding governments worldwide.
The Center for AI and Digital Policy (CAIDP) aims to ensure that artificial intelligence and digital policies advance a better society, one where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law. We spoke to Christabel about how governments are developing AI policy, how that policy has evolved, the importance of fundamental principles, and the role the CAIDP plays in promoting AI policy that benefits people.
Governments worldwide are engaged in determining how to govern and regulate AI
However, what stands out is that across countries, the development of AI policy has been marked more by continuity than by friction. Policymakers are rarely working at cross purposes. Instead, they are building on shared foundations and principles as the technology continues to advance.
A significant milestone was reached in 2018, when more than 300 experts from over 40 countries endorsed the Universal Guidelines for AI (UGAI), announced at the 2018 International Conference of Data Protection and Privacy Commissioners. These guidelines established the importance of baseline principles, or guardrails, such as fairness, transparency, accuracy, and accountability. Institutions such as the OECD and UNESCO later incorporated these principles into their own frameworks, and some 193 countries subsequently adopted the UNESCO Recommendation on the Ethics of AI. More recently, declarations such as those agreed at Bletchley Park and Seoul have reinforced these foundational principles.
The purpose of the principles in UGAI was and is to establish a baseline, or a floor, that applies across different legal systems and stages of AI development. So, whether used in the United States, which is home to the world’s largest technology companies, or in India, home to a growing base of tech talent, or in countries still building AI infrastructure, the core concerns remain the same.
Policymakers globally are often working with familiar and consistent principles
While individual countries may rank the principles differently and may choose to focus on different sectors or use cases, they are fundamentally working within the same core guidance.
For example, the African Union has made significant strides in AI governance. The Union has adopted a continental strategy that addresses AI governance to maximise benefits and minimise risks through data governance, education and capabilities, investment, and democratic accountability. It sets clear priorities and pushes for convergence across member states and for international cooperation. Meanwhile, Saudi Arabia's AI Ethics Principles (2023) have a national emphasis but remain an example of continuity, reflecting many of the principles first set out in 2018.
One of the biggest challenges comes from the way commercial interests compete for influence over policy
Companies seek favourable legislation, as we see with taxation and regulatory oversight. This is not unusual; it is how companies operate. However, it does create friction between regions.
We often hear that one country's approach is superior, and that others should follow the U.S. or the EU. This competition plays out in measurable ways. The CAIDP Index, the first global survey of trustworthy AI, looks at 80 countries and shows how AI policies range from broad documents to binding regulations. The CAIDP Index is specifically aligned with human rights instruments and assesses each country's implementation. The challenge for governments is to balance the pull of commercial interests against AI policy that serves broader public interest goals.
The rapid commercialisation of generative AI has introduced new risks and raised fresh questions
Policy and technical approaches across countries converge on accuracy, reliability, transparency, and safety. For instance, China's recent Global AI Governance Action Plan, announced shortly after the U.S. released its own action plan, places strong emphasis on AI integration into trade and industry, supported by high-quality datasets, AI guardrails, and sustainability goals. These themes echo those found in other jurisdictions.
However, while no one disagrees about the importance of AI safety, there is debate over what fairness or bias means in practice. The White House, across successive administrations, has reaffirmed fairness as a principle, but translating it into binding, enforceable evaluation standards remains an open challenge.
Commercial pressures complicate this picture
In the United States, for instance, there is no federal data protection agency, yet the same companies comply with stricter regimes in China, the EU, and India. This shows that regulation does not prevent firms from operating profitably.
For policymakers, the challenge is to ensure that baseline safeguards are in place while also using their own comparative strengths.
For developing countries, this might involve favouring regulatory sandboxes or investment incentives. But these flexible approaches still need to be grounded in strong protections for rights and fundamental freedoms.
The CAIDP plays a central role in building expertise and spreading consensus around such protections
One of our main initiatives is our AI Policy Clinics. These clinics began with just 20 participants and have since grown exponentially with each cohort. More than 300 people have enrolled in the Fall 2025 cohort. Over the last five years, CAIDP has trained more than 1,500 civil society advocates, policymakers and practitioners, rights defenders, lawyers, technologists and academics. The alumni network covers more than 120 countries.
Participants gain a deep understanding of AI governance and how countries are making progress. Many alumni have gone on to policy positions, helping to embed baseline safeguards and principles in practice. Multiple countries now reference CAIDP’s AI governance recommendations in official policy and guidance.
We also publish the CAIDP Index, offering independent analysis of national strategies and tracking global shifts in AI policy. Beyond research, CAIDP is now leading global efforts for the ratification of the International AI treaty grounded in human rights and the rule of law.
CAIDP expertise has improved global standards for AI accountability, influencing both policy development and the implementation of guardrails, including bans and controls on mass biometric surveillance. Most recently, CAIDP's advocacy led the OECD to adopt a new definition of privacy-enhancing technologies (PETs).
Overall, we aim to demonstrate that countries and governments do not need to rewrite the principles and guidelines on AI governance. The real urgency lies in implementation and oversight. With the EU AI Act now coming into force, the focus must shift to ensuring it works in practice and inspires similar action elsewhere.