<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Appraise Network]]></title><description><![CDATA[We founded Advocacy and Policy Professionals for a Responsible Artificial Intelligence Sector (APPRAISE) in response to the growing public and government interest in AI – and the challenges we’ll need to overcome as a society.]]></description><link>https://www.appraisenetwork.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!HLM0!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf6be2d3-43ed-4a82-b156-07f1204bf521_960x960.png</url><title>The Appraise Network</title><link>https://www.appraisenetwork.ai</link></image><generator>Substack</generator><lastBuildDate>Mon, 27 Apr 2026 12:18:01 GMT</lastBuildDate><atom:link href="https://www.appraisenetwork.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[The Appraise Network]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[theappraisenetwork@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[theappraisenetwork@substack.com]]></itunes:email><itunes:name><![CDATA[Appraise]]></itunes:name></itunes:owner><itunes:author><![CDATA[Appraise]]></itunes:author><googleplay:owner><![CDATA[theappraisenetwork@substack.com]]></googleplay:owner><googleplay:email><![CDATA[theappraisenetwork@substack.com]]></googleplay:email><googleplay:author><![CDATA[Appraise]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI policy leaders’ series: Mark Brakel, Global Director of Policy at the Future of Life Institute]]></title><description><![CDATA[The Future of Life Institute is 
concerned with steering transformative technology towards benefiting humanity and away from large-scale risks.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-leaders-series-mark-brakel</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-leaders-series-mark-brakel</guid><dc:creator><![CDATA[Aidan Muller]]></dc:creator><pubDate>Wed, 01 Apr 2026 08:48:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UXS1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em><strong>The Future of Life Institute is concerned with steering transformative technology towards benefiting humanity and away from large-scale risks. It is the originator of the landmark Asilomar AI governance principles, and is best known to the public for its open letters endorsed by A-list public personalities.</strong></em></p><p><em>We caught up with <strong>Mark Brakel,</strong> who leads FLI&#8217;s global policy on US, European, multilateral and military AI. 
He is a former policy advisor and diplomat for the Dutch government.</em></p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UXS1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UXS1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 424w, https://substackcdn.com/image/fetch/$s_!UXS1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 848w, https://substackcdn.com/image/fetch/$s_!UXS1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 1272w, https://substackcdn.com/image/fetch/$s_!UXS1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UXS1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png" width="353" height="353" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:477,&quot;width&quot;:477,&quot;resizeWidth&quot;:353,&quot;bytes&quot;:336587,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/190733137?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UXS1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 424w, https://substackcdn.com/image/fetch/$s_!UXS1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 848w, https://substackcdn.com/image/fetch/$s_!UXS1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 1272w, https://substackcdn.com/image/fetch/$s_!UXS1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e459771-6cea-40c7-b9a0-73fc4868dcc0_477x477.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><strong>How would you characterise the AI policy debate at the moment?</strong></p><p>We&#8217;ve seen similar dynamics with past technologies that brought benefits but also had societal implications. Take oil, cigarettes, nuclear. People were initially excited. And then we had to have some serious conversations.</p><p>I view AI through the same lens. On one hand, there is a growing body of evidence on how we might use it. But in parallel there is also a growing body of evidence on the risks. And there is a growing industrial lobby whose purpose is to manage the risk to business.</p><p>It&#8217;s no different to the 1970s with climate change. You couldn&#8217;t find a single scientist at the time who would deny it. But still we ran into trouble. 
The same was true for social media: the companies were aware of the downsides for women and girls long before the public was.</p><p>The corporates are bringing in all this talent. Ben Buchanan, who was President Biden&#8217;s Special Advisor on AI. Elizabeth Kelly, the former Director of the US AI Safety Institute, which is housed within the National Institute of Standards and Technology. And Rishi Sunak of course.</p><p>Let&#8217;s learn the lessons for AI, and let&#8217;s not fall into the same traps as we did with social media. It&#8217;s useful that we had that experience &#8211; but let&#8217;s be clear, the scale of change with AI is more seismic.</p><p></p><p><strong>Do you worry that the lack of sophisticated technical knowledge we witnessed among legislators and policy makers on social media is inevitably going to be an issue?</strong></p><p>Not necessarily. I&#8217;ve been impressed by the New York Assemblymember Alex Bores [now standing for Congress], and by conversations with the MEP Dragos Tudorache [now part of the European Commission] in the context of the EU AI Act. Josephine Teo, the Digital Minister in Singapore, is another.</p><p>These are some of the most knowledgeable people I&#8217;ve had the opportunity to speak to. I&#8217;m not saying I always agree with them, but you can have a sophisticated policy conversation with them.</p><p></p><p><strong>How do you navigate the tension in AI advocacy between philosophical considerations and relevance in the policy sphere?</strong></p><p>I would not frame it as a tension. 
If you&#8217;re worried about loss of control, or if you think there should be international red lines on certain standards, then those are not just abstract concerns.</p><p>They lead directly into policy questions. If there is a moratorium on superintelligence, for instance, what would you actually do? How would you govern automated R&amp;D? What rules would you set around the use of biological data? Those are practical policy problems, even if they originate in deeper questions about risk and responsibility.</p><p>There is also a media conversation here, which can make things seem more abstract or polarised than they are. But in practice, many organisations are trying to bridge that gap.</p><p></p><p><strong>What does that look like in practice?</strong></p><p>The Future of Life Institute is probably best known for its open letters &#8211; the most recent one being our pro-human declaration, and previously the calls for a moratorium on superintelligence and for a ban on &#8220;slaughterbots&#8221;. That&#8217;s the public side of what we do, but it&#8217;s not the only thing we do. My team will bring policy thinking into these debates, but the point is not to separate policy from philosophy. It is to connect them.</p><p>And we&#8217;re not alone. If you look, for example, at India&#8217;s new AI guidelines, there is an effort to link big-picture principles to concrete issues like making sure education works in different languages. That is an example of a document that talks about both values and implementation.</p><p>You see something similar in the EU AI Act. I&#8217;m not saying these are perfect by the way. In many ways I am not supportive of these specific approaches. 
But whatever the flaws, its code of practice in particular is an impressive example of how to address extreme risks with practical measures.</p><p></p><p><strong>Why do you think political decision makers seem so reluctant to take big decisions on AI safety, in a way their forebears were not for, say, aviation or nuclear power?</strong></p><p>Tech sometimes behaves as though it should be exempt from the kinds of constraints every other high-impact industry accepts. Silicon Valley has carved out a niche in which regulation is often treated as uniquely suspect, but compared to every other industry, tech is the outlier. In pharmaceuticals or aviation, for example, regulation is not assumed to inhibit innovation. It is part of how trust, legitimacy, and safety are built. The same should be true in AI.</p><p>And history shows why this matters. Think about the fall in nuclear demand after Chernobyl and Fukushima. When risks are not governed credibly, public trust collapses and the whole sector can suffer. So good advocacy is not about floating above policy in the philosophical realm. It is about taking the philosophical concerns seriously enough to turn them into workable governance.</p><p></p><p><strong>You&#8217;re pretty unique in the AI policy space in using video on LinkedIn to convey your messages &#8211; where did that come from?</strong></p><p>Ha, it&#8217;s not that complicated really. The overwhelming majority of people talking about this issue are with companies or government and can&#8217;t speak openly. I wanted to take it to a larger policy audience.</p><p>I used to be a diplomat in Baghdad, and regularly used video as a medium for communication. 
I felt this might be the time to revive it!</p><p></p>]]></content:encoded></item><item><title><![CDATA[AI policy leaders’ series: Alexandru Voica, head of corporate affairs and policy, Synthesia]]></title><description><![CDATA[We speak to Alexandru Voica at AI video platform Synthesia on his route into AI policy, current approaches to AI regulation and why we need to begin prioritising opportunity over risk.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-leaders-series-alexandru</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-leaders-series-alexandru</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Thu, 12 Mar 2026 08:58:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nMmb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<p><em>We speak to Alexandru Voica at AI video platform Synthesia on his route into AI policy, current approaches to AI regulation and why we need to begin prioritising opportunity over risk.</em></p><div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nMmb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nMmb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 424w, https://substackcdn.com/image/fetch/$s_!nMmb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 848w, https://substackcdn.com/image/fetch/$s_!nMmb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 1272w, https://substackcdn.com/image/fetch/$s_!nMmb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nMmb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp" width="244" height="244" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:320,&quot;width&quot;:320,&quot;resizeWidth&quot;:244,&quot;bytes&quot;:10070,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/190635097?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nMmb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 424w, https://substackcdn.com/image/fetch/$s_!nMmb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 848w, https://substackcdn.com/image/fetch/$s_!nMmb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 1272w, https://substackcdn.com/image/fetch/$s_!nMmb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F654d7f73-53b4-48f8-87f9-e60b6cb7c0e9_320x320.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.appraisenetwork.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Appraise Network! 
Subscribe for free to receive new posts and support my work.</p></div></div></div><p><strong>My route into AI policy was somewhat unusual</strong></p><p><em>I originally studied engineering and computer hardware, and I began my career as an engineer. At the same time, I always had a strong interest in the humanities and political debate.</em></p><p><em>That interest grew beyond a hobby. Noticing how often engineers criticised policy and communications work, I decided to take a calculated risk and try it myself. My plan was to test this path for a year, knowing I could return to engineering if it didn&#8217;t work out.</em></p><p><em>That was about 15 years ago now. I initially worked in communications, but over time, my role expanded into policy.</em></p><p><em>At Synthesia, I help people understand AI developments. While much attention is given to large language models, generative AI technology is evolving toward video, world models and agentic systems. We develop audio and video technologies, and now work with many of the Fortune 500.</em></p><p><strong>Two approaches to regulation</strong></p><p><em>There are broadly two ways to develop regulation. One is a rules-based approach, and the other is an outcomes-based approach.</em></p><p><em>The UK has succeeded in some ways by favouring outcomes-based regulation, defining goals, such as preventing deepfake harms, while letting companies find the best solutions themselves.</em></p><p><em>By contrast, the EU has adopted a more rules-based, prescriptive approach. It is a large horizontal framework that tries to cover product safety and many other aspects of AI. 
The difficulty is that such a framework can become very complex and difficult to manage in practice.</em></p><p><em>In addition, AI technology evolves rapidly, often changing within weeks. This speed makes it difficult for prescriptive regulation to keep up, since such rules can quickly become outdated as technology progresses beyond their initial scope.</em></p><p><em>History shows that what is available today may not be relevant tomorrow. So, a flexible approach focused on containing misuse is often more practical than trying to anticipate every future development.</em></p><p><em>The current system is broadly fit for purpose. Legal updates, such as the Crime and Policing Bill, show progress, but ongoing caution is needed as technology evolves.</em></p><p><strong>Risk over opportunity</strong></p><p><em>Across Europe, including the UK, policy discussions in both public and private sectors often prioritise risks over opportunities.</em></p><p><em>There is sometimes a tendency to move toward extremes, with some denying risks and others focusing solely on them.</em></p><p><em>In the public sector, processes can become lengthy and bureaucratic. Procurement often involves dozens of pages of risk assessments and fragmented steps across departments that do not communicate with one another. This can lead to situations where the social impact of a startup is treated the same as that of a large business, which is not always sensible.</em></p><p><em>In the private sector, companies recognise AI&#8217;s opportunities but disconnects often arise between board-level intent and actual implementation. Boards may advocate for AI use, yet strategic adoption rarely follows. Instead, smaller experiments typically occur at lower organisational levels.</em></p><p><em>Many of these experiments fail, and AI is sometimes used only for small tasks such as drafting emails rather than being adopted more deeply. 
As a result, companies do not always fully leverage the technology.</em></p><p><em>Instead of deploying AI simply to prove it can be used, organisations should prioritise outcomes, focusing on utility and value rather than on impressive but ineffective demonstrations.</em></p><p><strong>Key regulatory concerns</strong></p><p><em>An example often discussed in this context is Romania&#8217;s industrial policy approach in the 1990s. At the time, Romania was facing a deep recession following the collapse of the Soviet system. The country had been heavily industrialised, and many industries collapsed, leading to high unemployment.</em></p><p><em>In response, the government introduced measures designed to encourage technical talent. For example, people who studied engineering and worked in engineering roles in companies paid almost no income tax. This helped motivate young people to study engineering and contributed to the development of a startup ecosystem.</em></p><p><em>Measures like this can change behaviour by creating incentives for people to enter technical fields. Similar approaches have been used in countries such as Denmark and France.</em></p><p><em>In the UK, there can also be a perception problem. London is sometimes portrayed internationally as unsafe or declining, with narratives about crime and social problems. In many cases, this perception does not align with reality, yet it can still shape how people view the UK as a place to live and work.</em></p><p><em>To support AI companies, the government should focus on talent and skills. Incentives attract workers and help graduates launch technology startups. Technical founders are especially important in this ecosystem.</em></p><p><em>UK startups often have strong technical teams but struggle to scale, particularly because they cannot offer senior executives with relevant scaling experience the same salaries as U.S. 
companies.</em></p><p><em>Effective financial incentives can strengthen the ecosystem by encouraging desired behaviours. Countries like Romania, Denmark, and France have adopted policies to attract talent and foster innovation, offering potential lessons for the UK.</em></p>]]></content:encoded></item><item><title><![CDATA[In conversation with Isabela Parisio, postdoctoral research associate at Responsible AI UK]]></title><description><![CDATA[Isabela Parisio on regulatory sandboxes, asymmetries of information in the regulatory process and increasing participation in policy]]></description><link>https://www.appraisenetwork.ai/p/in-conversation-with-isabela-parisio</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/in-conversation-with-isabela-parisio</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Thu, 26 Feb 2026 09:22:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ly4i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Isabela is a postdoctoral research associate at 
King&#8217;s College London and Responsible AI UK, working at the intersection of law, policy, and emerging technologies. Originally trained as a lawyer, she has a background in administrative law and government. Here, we speak to Isabela about her entry into AI policy and her work at RAI.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ly4i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ly4i!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ly4i!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ly4i!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ly4i!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ly4i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg" width="358" height="447.38770388958596" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:996,&quot;width&quot;:797,&quot;resizeWidth&quot;:358,&quot;bytes&quot;:63608,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/189180626?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ly4i!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ly4i!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ly4i!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ly4i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ef75291-15cd-48a6-9024-11856bb521be_797x996.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.appraisenetwork.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Appraise Network! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><strong>From administrative law to AI policy</strong></p><p>Isabela&#8217;s interest in artificial intelligence began with early exposure to automation and digital advertising tools in professional environments, long before she recognised them as AI systems. This led her to the Center for AI and Digital Policy (CAIDP) in the US, where she worked as a research assistant and policy analyst.</p><p>At the time, discussions around the EU AI Act were gaining momentum, whereas regulatory approaches in the United States were shifting under different political administrations. These experiences shaped her move into AI governance and policy, which Isabela describes as fast-paced and collaborative.</p><p>&#8220;Now a large part of my work involves converting complex research into language that policymakers, regulators and industry actors can use,&#8221; explains Isabela.</p><p>At King&#8217;s College London, Isabela&#8217;s research is funded through Responsible AI UK, a programme that connects academics, industry and policymakers.
The organisation supports interdisciplinary work across technical and sociotechnical questions, from large language models to governance frameworks.</p><p>&#8220;My legal expertise helps me contribute to questions about how existing laws should be interpreted in relation to AI systems and how regulatory criteria can be operationalised in practice,&#8221; says Isabela.</p><p><strong>Building a regulatory sandbox</strong></p><p>One of Isabela&#8217;s main projects is a regulatory AI sandbox developed through an international collaboration between the UK and India.</p><p>The project, led by early-career researchers, began in September 2024. It focuses on what Isabela describes as the &#8220;implementation gap&#8221; in AI regulation.</p><p>As Isabela explains, &#8220;many policy guidelines set out high-level principles such as accountability or transparency. However, they often provide limited guidance on how developers and deployers should apply them. So, organisations can often interpret the same principles differently and regulators face challenges assessing compliance.&#8221;</p><p>The sandbox draws inspiration from financial services regulatory sandboxes, where regulated environments allow innovators to test new technologies while regulators observe their impact.</p><p>So far, the team working on the project has created hypothetical regulatory language based on comparative research, including the EU AI Act, Singapore&#8217;s Veritas initiative, and literature on fintech sandboxes.</p><p>Next, computer scientists and engineers will present a technical model to the policy team, fostering a structured dialogue on topics such as accuracy, explainability, and measurement standards.</p><p><strong>Highlighting asymmetries</strong></p><p>&#8220;This joint process has highlighted an ongoing asymmetry of information between developers and regulators,&#8221; says Isabela.</p><p>By simulating regulatory decision-making, the project examines how rules might be designed, tested
and enforced.</p><p>The work will then move into an implementation phase, where participants will tackle practical questions about whether models are accurate and compliant. Running for approximately 18 months, the sandbox is intended as a pilot for future initiatives.</p><p>Alongside research, Isabela contributes to Responsible AI UK&#8217;s policy engagement activities.</p><p>These include public events, workshops, town halls, and responses to government and parliament calls for evidence on issues such as AI governance and copyright. &#8220;Translating academic research into implementable policy recommendations is still a difficult but necessary task, as evidence-based input strengthens both the quality of regulation and democratic participation,&#8221; notes Isabela.</p><p><strong>Looking ahead</strong></p><p>Isabela identifies two priorities for AI policy.</p><p>&#8220;The first is standardisation, particularly the development of shared policy frameworks that help regulators and developers interpret principles consistently.&#8221;</p><p>&#8220;The second is public participation, making sure that policy debates include wider societal perspectives rather than remaining confined to technical or institutional actors.&#8221;</p><p>Through her ongoing work at RAI, Isabela is already helping do both.</p>]]></content:encoded></item><item><title><![CDATA[AI Founders Survey: A call for AI startups to have their say on AI policy]]></title><description><![CDATA[The Startup Coalition&#8217;s Vinous Ali talks to Appraise about their efforts to map the UK&#8217;s AI ecosystem.]]></description><link>https://www.appraisenetwork.ai/p/ai-founders-survey-a-call-for-ai</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-founders-survey-a-call-for-ai</guid><dc:creator><![CDATA[Aidan Muller]]></dc:creator><pubDate>Mon, 01 Dec 2025 14:40:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!P7hr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The Startup Coalition is the policy voice for UK startups and scaleups. It focuses on improving the policy environment and ensuring the UK is the best place to start and scale.
Vinous Ali is the organisation&#8217;s deputy executive director, and a highly-respected policy analyst well-known in tech circles.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!P7hr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!P7hr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P7hr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P7hr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P7hr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!P7hr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg" width="242" height="268.1958762886598" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/af69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:860,&quot;width&quot;:776,&quot;resizeWidth&quot;:242,&quot;bytes&quot;:87989,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/180386758?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!P7hr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P7hr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P7hr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P7hr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf69d3df-fb96-4590-8e67-7f79a2a07c6d_776x860.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><strong>How has the Startup Coalition approached AI policy and advocacy?</strong></p><p>As an organisation we&#8217;ve always been focused on policy around issues like access to capital, talent, and the broader regulatory environment. There are a lot of players in the AI ecosystem, so we&#8217;re very keen to be focused on specific policy outcomes.</p><p>When the AI hype took over in 2022 following the release of ChatGPT, one of the challenges we found &#8211; due to AI being a general-purpose technology &#8211; is that it was hard to get startups to coalesce behind a single thing.</p><p>This changed in January 2025 with the publication of the AI Opportunities Action Plan. All of a sudden, we had 50 recommendations that the government had committed to, and our role became how can we help them execute on them &#8211; and fast. 
We&#8217;re starting to see how the money is being allocated, and we can start working out where startups fit in.</p><p><strong>How would you describe the UK&#8217;s AI ecosystem currently?</strong></p><p>The ecosystem is flourishing, particularly in the application layer. Actually, as it happens, we&#8217;re currently working on an AI index, to be published in January, to track the most exciting AI companies in the UK. The plan is to map the 1,000 fastest growing AI startups.</p><p>And alongside that we&#8217;re currently running a survey for AI founders. We&#8217;re interested in finding out what&#8217;s happening on the ground for them. What&#8217;s working and what&#8217;s not? How are they finding the funding environment right now? Where are the bottlenecks? What do they need?</p><p>The more specific, the more unglamorous the better! We want to walk in their shoes, and hear the things that might not make it into a press release but would really make a difference to their ability to grow. We recently made some <a href="https://api.startupcoalition.io/u/2025/11/Hard-to-Compute-Why-Startups-Need-More-Power-2.pdf">recommendations</a> on the need for better AI infrastructure, for example. What else do they need?</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://73naipqix0o.typeform.com/to/h1iMGYGe?typeform-source=startupcoalition.substack.com&quot;,&quot;text&quot;:&quot;COMPLETE THE AI FOUNDERS SURVEY (5mins)&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://73naipqix0o.typeform.com/to/h1iMGYGe?typeform-source=startupcoalition.substack.com"><span>COMPLETE THE AI FOUNDERS SURVEY (5mins)</span></a></p><p></p><p><strong>How would you characterise the AI policy debate in this country?</strong></p><p>As a country, we&#8217;ve taken quite a sensible approach to AI regulation, certainly compared to the EU and its AI Act. 
Brussels made the bet that they could do GDPR again and become the global gold standard. But already we&#8217;re seeing them rowing back. Putting process ahead of practice is always likely to trip you up.</p><p>In the UK, we just set out some principles, and then proceeded vertical by vertical. And it looks like this approach has proven to be right. It&#8217;s about asking what problem we&#8217;re trying to solve, and then finding pragmatic solutions. The AI Growth Lab is a brilliant initiative, for example.</p><p>The broader conversations, about existential threats and the like, don&#8217;t really have any impact on our day-to-day conversations. Maybe they would in a global context, or across multilateral forums. But on the ground there&#8217;s plenty of work to be getting on with, and I&#8217;m happy to focus on that!</p>]]></content:encoded></item><item><title><![CDATA[Opt-In or Opt-Out?
What’s at Stake in Regulating AI Crawling]]></title><description><![CDATA[Which model should the UK adopt for AI training on online content?]]></description><link>https://www.appraisenetwork.ai/p/opt-in-or-opt-out-whats-at-stake</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/opt-in-or-opt-out-whats-at-stake</guid><dc:creator><![CDATA[Audrey Hingle]]></dc:creator><pubDate>Tue, 25 Nov 2025 08:31:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fe621357-4be0-4a00-8311-d3636fe441fd_1080x608.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As AI models grow more data-hungry, they increasingly rely on large-scale web crawling to collect text, images and code from across the public internet. For decades, this kind of automated access was guided by voluntary norms like <em><a href="https://www.techpolicy.press/robotstxt-is-having-a-moment-heres-why-we-should-care/">robots.txt</a></em>, a simple file created in the 1990s to tell early search engines which pages they could or could not index.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0z26!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0z26!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 424w, https://substackcdn.com/image/fetch/$s_!0z26!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!0z26!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!0z26!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0z26!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg" width="1080" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:608,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:201126,&quot;alt&quot;:&quot;An array of colorful, fossil-like data imprints representing the static nature of AI models, laden with outdated contexts and biases.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/179840615?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="An array of colorful, fossil-like data imprints representing the static nature of AI models, laden with outdated contexts and biases." title="An array of colorful, fossil-like data imprints representing the static nature of AI models, laden with outdated contexts and biases." 
srcset="https://substackcdn.com/image/fetch/$s_!0z26!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 424w, https://substackcdn.com/image/fetch/$s_!0z26!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 848w, https://substackcdn.com/image/fetch/$s_!0z26!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!0z26!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431d5ca-8130-4ada-b24a-e2a52dd45a4a_1080x608.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Luke Conroy and Anne Fehres &amp; AI4Media / betterimagesofai.org / creativecommons.org/licenses/by/4.0</figcaption></figure></div><p>That system was never designed for today&#8217;s industrial-scale AI scraping. Publishers, from newsrooms to <a href="https://diff.wikimedia.org/2025/04/01/how-crawlers-impact-the-operations-of-the-wikimedia-projects/">public interest sites like Wikipedia</a>, now face high server costs, lost revenue, and the reproduction of their work in AI outputs without permission or compensation. At the same time, governments want to support innovation, attract AI companies and keep their countries competitive (<a href="https://www.techpolicy.press/the-uk-struggles-to-balance-ai-innovation-and-creative-protection/">including here in the UK</a>).</p><p>Meanwhile, the Internet Engineering Task Force (IETF) is <a href="https://datatracker.ietf.org/wg/aipref/about/">developing new standards</a> that will let publishers signal, in a clear, machine-readable way, whether their content can be used for AI training. But even once these standards are released, many pieces of content on the web will still lack any clear consent signal because the standards will be new and adoption will take time.</p><p>That brings us to a key policy question: in the future, <strong>should AI developers be allowed to train on online content by default, or only when permission is explicitly given?</strong></p><h3><strong>What is opt-in?</strong></h3><p>Under an <em>opt-in</em> model, AI developers would <strong>need explicit permission</strong> before using a website&#8217;s content for training.
Think of it as: <em>&#8220;AI use is not allowed unless I say yes.&#8221;</em></p><p><strong>Pros:</strong></p><ul><li><p>Strong protection of publishers&#8217; rights and consent.</p></li><li><p>Clear, enforceable expectations for AI developers.</p></li><li><p>Works with emerging technical standards that give websites a clear, machine-readable way to express consent for AI training.</p></li></ul><p><strong>Cons:</strong></p><ul><li><p>Most websites currently do not signal AI preferences, meaning far less content would be available for training.</p></li><li><p>Smaller organisations may find it harder to update their content and adopt new settings.</p></li><li><p>Could limit access for legitimate research projects that rely on broad datasets.</p></li></ul><h3><strong>What is opt-out?</strong></h3><p>Under an <em>opt-out</em> model, AI developers can use content <strong>unless a publisher says no</strong>. Think of it as: <em>&#8220;AI use is allowed unless I tell you to stop.&#8221;</em></p><p><strong>Pros:</strong></p><ul><li><p>Easier for AI developers and researchers to access data.</p></li><li><p>Minimal friction for innovation and model development.</p></li><li><p>Simple for large platforms that already support automated crawling controls.</p></li></ul><p><strong>Cons:</strong></p><ul><li><p>Puts the burden of protection on publishers, many of whom don&#8217;t even know opting out is possible.</p></li><li><p>Risks widespread unlicensed use of digital content.</p></li><li><p>Advantages major AI firms who can strike exclusive deals with big publishers.</p></li><li><p>Fails to prevent AI training on downstream copies of a work: screenshots, reposts, embeds and scraped versions that appear on sites the publisher doesn&#8217;t control.<br></p></li></ul><h3><strong>Why does this matter for the UK?</strong></h3><p>The UK&#8217;s recent <a href="https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence">AI copyright consultation</a> leaned 
toward an opt-out approach. Supporters like Nick Clegg <a href="https://www.theverge.com/news/674366/nick-clegg-uk-ai-artists-policy-letter">endorse this position</a>, arguing that requiring permission before training would be &#8220;implausible&#8221; and would &#8220;basically kill the AI industry in Britain overnight.&#8221; Creator advocates like Ed Newton-Rex, a composer and founder of the nonprofit Fairly Trained, <a href="https://static1.squarespace.com/static/5cc5785816b6406e50258c5c/t/67368c12cc35b5469feb0bfd/1731628050768/The+insurmountable+problems+with+generative+AI+opt-outs.pdf">argue that opt-outs give creators only the illusion of control</a>. You can block AI crawling on your own site, but AI companies can still train on the countless &#8220;downstream copies&#8221; of your work that appear elsewhere online: screenshots, embeds, quotes, reposts, ads and scraped versions you don&#8217;t control. In addition, evidence suggests that most people who have the option to opt out of generative AI training don&#8217;t know that they can, and the administrative burden of opting out all of your content can be huge.
Last spring, <a href="https://www.musicweek.com/digital/read/dua-lipa-elton-john-paul-mccartney-more-call-on-government-to-protect-copyright-ahead-of-ai-vote/091936">Dua Lipa, Elton John, Paul McCartney and many more UK artists wrote to the Prime Minister</a>, urging him to support proposals that would protect copyright in relation to AI.</p><h2><strong>What do </strong><em><strong>you</strong></em><strong> think?</strong></h2><p>Which model should the UK adopt for AI training on online content?</p><div class="poll-embed" data-attrs="{&quot;id&quot;:410037}" data-component-name="PollToDOM"></div>]]></content:encoded></item><item><title><![CDATA[AI policy leaders’ series: Mark Bailey, Department Chair for Cyber Intelligence and Data Science at the U.S.
National Intelligence University]]></title><description><![CDATA[Mark Bailey on the debate surrounding autonomous weapons, and the interpretability of AI.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-leaders-series-mark-bailey</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-leaders-series-mark-bailey</guid><dc:creator><![CDATA[Aidan Muller]]></dc:creator><pubDate>Wed, 19 Nov 2025 14:28:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!geaM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The National Intelligence University is a federal research university within the United States government that trains the U.S. intelligence community. Mark is the university&#8217;s chair for Cyber Intelligence and Data Science, and is also the author of Unknowable Minds: Philosophical Insights on AI and Autonomous Weapons, a book that explores the extent to which we are able to understand modern AI, and contends that the limitations on interpretability have implications for accountability in critical situations.</em></p><p><em><strong>The views expressed here are Mark&#8217;s own and do not necessarily represent the views of the National Intelligence University or the US government.</strong></em></p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!geaM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp"
srcset="https://substackcdn.com/image/fetch/$s_!geaM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 424w, https://substackcdn.com/image/fetch/$s_!geaM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 848w, https://substackcdn.com/image/fetch/$s_!geaM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!geaM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!geaM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg" width="268" height="268" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:268,&quot;bytes&quot;:2104375,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/178490713?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!geaM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 424w, https://substackcdn.com/image/fetch/$s_!geaM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 848w, https://substackcdn.com/image/fetch/$s_!geaM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!geaM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c28697a-d81c-4fca-94be-8e1eca025e36_1947x1947.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" 
fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h4><strong>There are inherent limits to the extent that we can understand the AI &#8220;mind.&#8221;</strong></h4><p>I am interested in complexity, and in particular the properties of very complex systems. Typically, the properties that emerge from these systems aren&#8217;t predictable. This is a big problem in philosophy and in maths. And it leads to a problem that I call algorithmic incompressibility. This means that there is no shorter description or &#8220;shortcut&#8221; for the system&#8217;s behavior than the behavior itself. In other words, the only way to know what it will do is to run it step by step and watch.</p><p>Take Newton&#8217;s laws &#8211; these were approximations of natural laws, which were good enough for a long time. They allowed us to send people into orbit. Then came Einstein&#8217;s special and general relativity, which is an even better approximation. This allowed us to make even better predictions, and enabled things like satellite navigation. Now compare that to markets. These are too complex to make accurate predictions, we don&#8217;t have the tools. We have to model behaviours. 
And that&#8217;s what we&#8217;re looking at with AI.</p><p></p><h4><strong>AI should not be given the licence to make life-or-death decisions.</strong></h4><p>I argue in my book that the AI &#8220;mind&#8221; is fundamentally unknowable. Machine systems solve problems through statistical optimization and emergent dynamics rather than human-style deliberation. So we will often be unable to reconstruct or predict their choices. That knowledge gap creates unacceptable risk when life-and-death decisions are delegated to machines.</p><p>We&#8217;ve been talking about different autonomy modes: fully autonomous, &#8220;human-in-the-loop,&#8221; &#8220;human-on-the-loop.&#8221; But these guardrails are increasingly brittle when speed and complexity rise. Supervisory humans can become outpaced, or reduced to rubber stamps, and that reopens the very accountability gaps that these frameworks intend to close. Ultimately, what I am considering is: does allowing AI to make the decision on whether humans should live or die respect human dignity?</p><p></p><h4><strong>Autonomous weapons make war more likely, not less.</strong></h4><p>For proponents of AI, autonomous weapons are appealing. They are seen as a way to limit the human cost of conflict. If it&#8217;s robots killing robots, it&#8217;s easy to argue that we&#8217;re reducing loss of life, and therefore that could be seen as a good thing.</p><p>But we need to be careful of what we unleash. Every war has a political cost &#8211; part of which is loss of life &#8211; and that exerts a downward pressure on the appetite for conflict.
If you eliminate that, it lowers the cost of entering wars or of prolonging conflict. In the end, autonomous weapons make protracted wars more likely, not less.</p><p></p><h4><strong>The conversation is happening at the intergovernmental but also the departmental level.</strong></h4><p>In the US, the political debate on autonomous weapons has slowed. Certain elements of the Department of War are very concerned about autonomous weapons, though some political leaders don&#8217;t seem as concerned. But I think the US can, and should, shape the debate internationally.</p><p>I think the current administration could facilitate these conversations, and we need to have this debate to avoid a race to the bottom. Take nuclear weapons &#8211; if we didn&#8217;t have these global conversations, it would have spiralled. I don&#8217;t see this changing given the current political climate, and of course, we have to square that with the fact that we want to be competitive with other powers like China or Russia. But I remain hopeful because the US military in general is concerned about these issues.</p><p>And I think the UK also has a role to play. The US and the UK are both well-positioned to influence this debate going forward. We have a strong diplomatic relationship, and we ought to leverage that to shape this debate before our adversaries beat us to it.</p><p></p><h4><strong>This is as much a philosophical question as a policy question.</strong></h4><p>Policy is a pragmatic application to a particular problem. The philosophical debate ought to inform the policy debate. But the normative questions about technology are significantly outpaced by the question of whether or not we can do it.</p><p>Our previous experience with social media tells us our initial perceptions of the benefits of a technology can be wrong. Social media was going to connect us, bring us together, give voices to those who never had one.
But it turns out it compromised our agreement on facts, promoted authoritarian tendencies, and undermined democratic government.</p><p>At least this time we&#8217;re thinking ahead &#8211; not just about artificial general intelligence or super intelligence, but also about the impact of disinformation, biased datasets, and the like. But we desperately need to make philosophical conversations relevant to policy people.</p><p></p><h4><strong>These conversations give me hope.</strong></h4><p>How optimistic should we be about our ability to curb the human instinct to build it because we <em>can</em>, rather than because we <em>should</em>? I&#8217;m optimistic. History is littered with examples where we&#8217;ve flown close to the line but averted disaster.</p><p>I hope that we can do it again this time, and that our continued quest for technological development is not what undoes us. AI is a paradigm shift as it&#8217;s the first time we&#8217;re not able to explain what we&#8217;ve created. We need to have more of these conversations to make sure it doesn&#8217;t get out of hand.</p><p></p>]]></content:encoded></item><item><title><![CDATA[International AI Safety Report: A Conversation with Shalaleh Rismani of Mila - Quebec AI Institute]]></title><description><![CDATA[Inside the thinking behind the International AI Safety Report&#8217;s newest update on AI capabilities and risks.]]></description><link>https://www.appraisenetwork.ai/p/international-ai-safety-report-a</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/international-ai-safety-report-a</guid><dc:creator><![CDATA[Audrey Hingle]]></dc:creator><pubDate>Thu, 13 Nov 2025 17:45:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5ySc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was originally published in the <a href="http://internet.exchangepoint.tech/international-ai-safety-report-a-conversation-with-shalaleh-rismani-of-mila-quebec-ai-institute-institute/">Internet Exchange</a>, a newsletter on the open social web exploring Internet governance, digital rights, and the intersection of technology and society.</em></p><p>The <em><a href="https://internationalaisafetyreport.org/?ref=internet.exchangepoint.tech">International AI Safety Report</a></em> brings together research from experts around the world to provide a shared evidence base on the capabilities and risks of advanced AI systems.
My colleague <a href="https://internet.exchangepoint.tech/tag/author-mallory-knodel/">Mallory Knodel</a> saw the main report presented at the United Nations General Assembly earlier this year, where it was introduced as part of an effort to inform global cooperation on AI governance.</p><p>To better understand the thinking behind the report and its recent update, I spoke with <a href="https://scholar.google.ca/citations?user=6ZhMlWMAAAAJ&amp;hl=en&amp;ref=internet.exchangepoint.tech">Shalaleh Rismani of Mila - Quebec AI Institute</a>, one of the authors of the recent <em><a href="https://internationalaisafetyreport.org/publication/first-key-update-capabilities-and-risk-implications?ref=internet.exchangepoint.tech">First Key Update</a></em>. The update focuses on rapid advances in AI reasoning capabilities and examines how those developments intersect with emerging risks, including cybersecurity, biological threats, and impacts on labor markets. You can read both the report and the update at <a href="https://internationalaisafetyreport.org/?ref=internet.exchangepoint.tech">internationalaisafetyreport.org</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://internationalaisafetyreport.org/publication/first-key-update-capabilities-and-risk-implications" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5ySc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 424w, https://substackcdn.com/image/fetch/$s_!5ySc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 848w, 
https://substackcdn.com/image/fetch/$s_!5ySc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 1272w, https://substackcdn.com/image/fetch/$s_!5ySc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5ySc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png" width="211" height="296.9910233393178" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:784,&quot;width&quot;:557,&quot;resizeWidth&quot;:211,&quot;bytes&quot;:156022,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://internationalaisafetyreport.org/publication/first-key-update-capabilities-and-risk-implications&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/178800010?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5ySc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 424w, 
https://substackcdn.com/image/fetch/$s_!5ySc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 848w, https://substackcdn.com/image/fetch/$s_!5ySc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 1272w, https://substackcdn.com/image/fetch/$s_!5ySc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c3df817-27a8-4670-a21b-faa75462cd58_557x784.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line>">
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Why this report, and why now? What gap did the team hope to fill in the global AI safety conversation?</strong></p><p>This is the second year the safety report has been produced as a collaborative project. The main report&#8217;s scope was set early by the lead writers and panelists, with input from experts around the world. The goal was to synthesize evidence on the most advanced AI systems, including technologies already being rolled out and others still in development, in a way that would be useful for policymakers.</p><p>As the field evolved, the team realized that one annual report was not enough to keep up with the pace of change. This year, the leadership decided to produce two interim updates in addition to the main report. The first, released in October, focused heavily on capabilities, particularly what researchers refer to as &#8220;reasoning capabilities.&#8221; These include systems that can generate multiple possible answers or ask clarifying questions before responding. The second update, coming at the end of November, will continue tracking those advances, while the next full report will be published in February.</p><p><strong>The report cites thousands of studies. How did the team ensure that this huge body of research remains usable for policymakers and practitioners?</strong></p><p>The main goal is to bring in as much evidence from the academic literature as possible and make it accessible to policymakers and the public.
Each section is led by researchers embedded in the literature, and multiple rounds of revisions happen with expert reviewers.</p><p>Every citation goes through a vetting process to confirm that it comes from credible academic sources. Because AI research moves so fast, much of the work is pre-published, which makes it harder to assess. Still, the idea is to present the full range of research and show both where strong evidence exists and where gaps remain.</p><p><strong>Publishing is one thing, but ensuring impact is another. How does the team think about getting the report in front of key audiences?</strong></p><p>The dissemination strategy is a collaborative effort between the Chair, the writing team and the secretariat. The team participates in many briefings with governments and policymakers around the world. For example, we engaged directly with policymakers on the findings of the first key update, including from the EU, India, UK, Canada, Singapore, UAE, Australia, Japan, Kenya and others. Because panelists, senior advisers, and reviewers come from different countries, there is already strong buy-in. Civil society, academia, and major technology companies are also involved in the process, which helps expand the report&#8217;s reach.</p><p><strong>How did the team integrate human rights considerations into what is otherwise a very technical safety framework?</strong></p><p>Human rights are not presented as a standalone section, but they are integrated throughout the report. One way is by identifying where evidence exists and where it does not, which highlights gaps relevant to fairness, privacy, and equity. Many evaluations measure performance on benchmarks but not real-world outcomes. Pointing out those gaps helps guide future human rights work by showing where contextual studies are needed.</p><p>Some of the risks discussed in this update also touch directly on human rights. 
For example, the growing adoption of AI companionship technologies raises concerns about loneliness and emotional well-being. The report also notes early evidence of labor market impacts, particularly in software engineering, although broader economic effects are still unclear.</p><p><strong>The report came out of a large international process. What did that collaboration reveal about where consensus exists and where it still breaks down when it comes to defining and governing AI safety?</strong></p><p>There is broad agreement that AI systems are improving on certain benchmarks, but less consensus on whether those benchmarks accurately measure complex abilities like reasoning. Some experts question whether the current evaluation frameworks are valid for assessing reasoning at all.</p><p>There is also consensus that potential risks should be monitored proactively rather than ignored, though there is debate about which risks are most pressing. Monitoring and controllability risks, for instance, are still contested. Some lab studies suggest models underperform when they know they are being evaluated, while others do not show this effect. In contrast, there is stronger agreement around risks such as AI companionship, labor market disruption, and cyber offense and defense.</p><p><strong>The report brings together such a wide range of evidence and perspectives. How do you think about assessing risk and avoiding overhyping progress?</strong></p><p>The report does not use a specific framework to assess risk. There are frameworks being proposed for evaluating AI systems, and we report on developments in those frameworks rather than applying one ourselves.</p><p>We also recognize the risk of overhyping AI progress, especially right now. To address this, we try to look for real-world evidence of both improvements and shortcomings. 
The review process and the involvement of stakeholders also help keep the report balanced.</p><p><strong>If you had to highlight one or two takeaways that you hope will shape AI policy or practice in 2026, what would they be?</strong></p><p>There is a significant gap in evaluating real-world impacts. Policymakers need a clearer understanding of how AI systems affect work, research, and society, not just benchmark scores. Creating infrastructure to support independent evaluations and audits will be key, whether through third-party organizations or public feedback mechanisms.</p><p>The second update, coming later this year, will focus on risk management practices and the solutions being proposed to address emerging risks. The goal is to show that progress is happening while recognizing that there is still much more work to do.</p>]]></content:encoded></item><item><title><![CDATA[AI policy leaders’ series: Alisar Mustafa, Head of AI Policy & Safety at Duco]]></title><description><![CDATA[Alisar is a seasoned expert in AI policy and regulation with over a decade of experience navigating the intersections of technology, ethics, and governance.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-leaders-series-alisar-mustafa</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-leaders-series-alisar-mustafa</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Thu, 06 Nov 2025 08:47:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wfSz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Alisar is a seasoned expert in AI policy and regulation with over a decade of experience navigating the intersections of technology, ethics, and governance.
At Duco, Alisar helps companies move from high-level principles to real implementation. Her work focuses on translating laws and frameworks into system design requirements, risk controls, and mitigation strategies at scale. She also writes a weekly<a href="https://alisarmustafa.substack.com/"> AI Policy Newsletter.</a></em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wfSz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wfSz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wfSz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wfSz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wfSz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wfSz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg" width="318" height="318" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:800,&quot;resizeWidth&quot;:318,&quot;bytes&quot;:90687,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.appraisenetwork.ai/i/178113375?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wfSz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wfSz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wfSz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wfSz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F675dbbb2-89b9-4da8-b3a9-71ab4f9038d6_800x800.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>AI is still evolving, and it&#8217;s high-stakes</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.appraisenetwork.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Appraise Network! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>AI policy matters because we&#8217;re regulating a technology that&#8217;s still evolving, and the stakes couldn&#8217;t be higher. Regulation and innovation aren&#8217;t opposites: strong policy creates trust and prevents the kinds of harms that could stall progress altogether. The real competition should be about who can build the best AI that benefits people while also preventing harm.</p><p>Those harms aren&#8217;t theoretical. The Internet Watch Foundation<a href="https://www.iwf.org.uk/news-media/news/full-feature-length-ai-films-of-child-sexual-abuse-will-be-inevitable-as-synthetic-videos-make-huge-leaps-in-sophistication-in-a-year/"> found</a> just two AI-generated Child Sexual Abuse Material videos last year, for instance. This year, the organisation has confirmed nearly 1,300 and counting. So, the company that figures out how to innovate while preventing these harms will set the model others follow. As Anthropic&#8217;s CEO Dario Amodei put it, we need a race to the top, not a race to the bottom.</p><p><strong>Translating principles into practice</strong></p><p>One of the biggest problems in AI policy today is the lack of technical implementation guidance. Principles like &#8220;fairness&#8221; and &#8220;minimising harm&#8221; are essential, but without clear definitions and real-world constraints, they don&#8217;t translate into safer systems. In practice, everything starts with the data.
For example, <a href="https://www.ducoexperts.com/duco-human-generated-datasets">one project</a> I&#8217;ve been leading at Duco involves fine-tuning models using human-generated data on high-risk topics across low-resource languages&#8212;areas where models are most likely to break. These topics evolve fast, vary across regions, and carry serious risks of bias, misinformation, and harm.</p><p>We work with global experts to define critical issues and generate prompts and responses that reflect multiple perspectives and prioritise factual accuracy. As AI expands into low-resource markets, this kind of targeted data becomes even more important. If we want AI to be safe and aligned, policymakers need to provide clear technical pathways, including data standards, alongside clear outcomes.</p><p><strong>Building safety from the start</strong></p><p>Good regulation compels companies to assess the safety and risks of their systems. For instance, under the EU&#8217;s Digital Services Act, companies must measure harms, engage researchers, and build internal systems. When done right, regulation doesn&#8217;t just enforce compliance; it drives meaningful investment in safety.</p><p>That&#8217;s why safety research has to be built in from the beginning. In practice, this means tying obligations to measurable artefacts such as model cards, data statements, evaluation logs, and post-deployment incident records. These give regulators something concrete to assess.</p><p>However, governments also need to invest in public infrastructure by publishing reference tests, releasing red-team scenarios for known harms, and ensuring these reflect low-resource and multilingual contexts. Alongside this, governments should consider creating safe harbours that protect companies that disclose failures in good faith. Without that, we&#8217;ll never get honest reporting.</p><p>The EU AI Act&#8217;s Code of Practice exemplifies the technically-informed policy approach I advocate. 
The Code moves beyond vague principles to provide concrete implementation guidance through measurable artefacts, such as model cards, evaluation logs, and documented risk mitigation strategies.</p><p><strong>Bridging policy and practice</strong></p><p><a href="https://www.ducoexperts.com/ai-services">Duco</a> stands apart by providing organisations with solutions that directly bridge policy and practice. Our team brings deep technical knowledge and regulatory expertise to deliver AI Adversarial Monitoring &amp; Red-Teaming, AI Training &amp; Fine-Tuning for high-risk use cases, and custom Safety Evaluation Datasets. These services are designed to operationalise complex regulations efficiently, helping organisations not just comply but improve the safety, reliability, and global readiness of their AI systems.</p><p>In addition to technical implementation, Duco uniquely guides organisations in navigating the global regulatory landscape. We work closely with leading tech companies to analyse cross-jurisdictional regulatory requirements&#8212;such as US federal and state differences, EU directives, and APAC compliance. Our integrated strategy ensures clients not only keep pace but also gain a competitive edge in global markets by aligning compliance with business objectives and sustainable market access.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.appraisenetwork.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Appraise Network! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI policy leaders’ series: Christabel Randolph, Associate Director at the Center for AI and Digital Policy]]></title><description><![CDATA[Christabel Randolph on the evolution of global AI policy and the shared principles guiding governments worldwide.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-leaders-series-christabel</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-leaders-series-christabel</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Thu, 16 Oct 2025 07:40:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_iic!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1></h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_iic!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_iic!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!_iic!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_iic!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_iic!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_iic!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg" width="480" height="480" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:480,&quot;width&quot;:480,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:50494,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://theappraisenetwork.substack.com/i/175639649?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b4732a-f91a-4bf2-8874-032b0383df81_480x720.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!_iic!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_iic!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_iic!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_iic!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7420f80b-0f14-4782-9421-a580d17c9b9d_480x480.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>The Center for AI and Digital Policy (CAIDP) aims to ensure that artificial intelligence and digital policies promote a better society where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law. We spoke to Christabel about how governments are developing AI policy, the evolution of AI policy, the importance of fundamental principles, and the role that the CAIDP plays in helping promote the development of AI policy that benefits people.</em></p><h2><strong>Governments worldwide are engaged in determining how to govern and regulate AI</strong></h2><p>However, what stands out is that across countries, the development of AI policy has been more evolutionary than driven by friction. Policymakers are often not working at cross purposes. Instead, they are building on shared foundations and principles as the technology continues to advance.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.appraisenetwork.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Appraise Network! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>A significant milestone was reached in 2018. More than 300 experts in over 40 countries endorsed the Universal Guidelines for AI (UGAI) announced at the 2018 International Data Protection and Privacy Commissioners Conference. These guidelines established the importance of baseline principles or guardrails such as fairness, transparency, accuracy and accountability. Institutions such as the OECD and UNESCO later incorporated these principles in their own frameworks. In addition, some 193 countries subsequently adopted the UNESCO Recommendation on Ethics of AI. More recently, declarations such as those agreed upon at Bletchley Park and Seoul have reinforced these foundational principles.</p><p>The purpose of the principles in UGAI was and is to establish a baseline, or a floor, that applies across different legal systems and stages of AI development. So, whether used in the United States, which is home to the world&#8217;s largest technology companies, or in India, home to a growing base of tech talent, or in countries still building AI infrastructure, the core concerns remain the same.</p><h2><strong>Policymakers globally are often working with familiar and consistent principles</strong></h2><p>While individual countries may choose to prioritise the principles in a different order and may decide to focus on different sectors or use cases, they are fundamentally working within the same core guidance.</p><p>For example, the African Union has made significant strides in AI governance. 
The Union has adopted a continental strategy that addresses AI governance to maximise benefits and minimise risks through data governance, education/capabilities, investment, and democratic accountability. It sets clear priorities and pushes for convergence across member states and international cooperation. Meanwhile, Saudi Arabia&#8217;s AI Ethics Principles (2023) have a national emphasis but remain an example of continuity, reflecting many of the principles first set out in 2018.</p><h2><strong>One of the biggest challenges comes from the way commercial interests compete for influence over policy</strong></h2><p>Companies seek favourable legislation. We see it in the case of taxation or regulatory oversight. This is not unusual; it is how they operate. However, it does create friction between different regions.</p><p>We often hear that one country&#8217;s approach is superior, and that others should follow the U.S. or the EU. This competition plays out in measurable ways. The CAIDP Index, the first global survey of trustworthy AI, looks at 80 countries and shows how AI policies range from broad documents to binding regulations. The CAIDP Index is specifically aligned with human rights instruments and assesses each country&#8217;s implementation. The challenge for governments is to balance the pull of commercial interests against AI policy that serves broader public interest goals.</p><h2><strong>The rapid commercialisation of generative AI has introduced new risks and raised fresh questions</strong></h2><p>Policy and technical approaches across countries converge on accuracy, reliability, transparency and safety. For instance, China&#8217;s recent Global AI Governance Action Plan, announced shortly after the U.S. plan, places strong emphasis on AI integration into trade and industry, supported by high-quality datasets, AI guardrails, and sustainability goals. 
This is similar to other jurisdictions.</p><p>However, while no one disagrees about the importance of AI safety, there is debate on what fairness or bias means in practice. The White House, across successive administrations, has reaffirmed fairness as a principle, but translating it into binding evaluation standards has yet to happen.</p><h2><strong>Commercial pressures complicate this picture</strong></h2><p>In the United States, for instance, there is no federal data protection agency, yet the same companies comply with stricter regimes in China, the EU, and India. This shows that regulation does not prevent firms from operating profitably.</p><p>For policymakers, the challenge is to ensure that baseline safeguards are in place while also playing to their own comparative strengths.</p><p>For developing countries, this might involve favouring regulatory sandboxes or investment incentives. But these flexible approaches still need to be grounded in strong protections for rights and fundamental freedoms.</p><h2><strong>The CAIDP plays a central role in building expertise and spreading consensus around such protections</strong></h2><p>One of our main initiatives is our AI Policy Clinics. These clinics began with just 20 participants and have since grown exponentially with each cohort. More than 300 people have enrolled in the Fall 2025 cohort. Over the last five years, CAIDP has trained more than 1,500 civil society advocates, policymakers and practitioners, rights defenders, lawyers, technologists and academics. The alumni network covers more than 120 countries.</p><p>Participants gain a deep understanding of AI governance and how countries are making progress. Many alumni have gone on to policy positions, helping to embed baseline safeguards and principles in practice. 
Multiple countries now reference CAIDP&#8217;s AI governance recommendations in official policy and guidance.</p><p>We also publish the CAIDP Index, offering independent analysis of national strategies and tracking global shifts in AI policy. Beyond research, CAIDP is now leading global efforts for the ratification of the International AI treaty grounded in human rights and the rule of law.</p><p>CAIDP expertise has improved global standards for AI accountability, influencing both policy development and implementation of guardrails, including bans and controls on mass biometric surveillance. Most recently, CAIDP&#8217;s advocacy led OECD to adopt a new definition of privacy-enhancing technologies (PETs).</p><p>Overall, we aim to demonstrate that countries and governments do not need to rewrite the principles and guidelines on AI governance. The real urgency lies in implementation and oversight. With the EU AI Act now coming into force, the focus must shift to ensuring it works in practice and inspires similar action elsewhere.<br></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.appraisenetwork.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Appraise Network! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Welcome to Appraise on Substack]]></title><description><![CDATA[A fresh chapter for Appraise. 
Thanks for coming along with us!]]></description><link>https://www.appraisenetwork.ai/p/welcome-to-appraise-on-substack</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/welcome-to-appraise-on-substack</guid><dc:creator><![CDATA[Aidan Muller]]></dc:creator><pubDate>Wed, 15 Oct 2025 14:38:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b70fa773-96b1-4278-a02f-0824a4d0c477_400x400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We started Appraise to bring together policy and advocacy professionals who care about responsible AI &#8212; and to help shape the debate, not just follow it. </p><p>As part of that mission, we&#8217;re moving our updates to <strong>Substack</strong>, making it easier for you to keep up to date with our research, analysis and events.</p><p>We&#8217;ll be sending our first update shortly, but in the meantime we&#8217;d love to get a sense of what matters most to you:</p><div class="poll-embed" data-attrs="{&quot;id&quot;:390115}" data-component-name="PollToDOM"></div><p>We look forward to hearing from you more in the months to come.</p><p>Best,<br><strong>Aidan Muller &amp; James Boyd-Wallis</strong><br>Co-founders, The Appraise Network</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.appraisenetwork.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Appraise Network! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI policy in practice: Minh Tran]]></title><description><![CDATA[In the next in our series looking at AI policy practice, we speak to Minh Tran, AI governance and ethical AI advisor at FPT Software, a leading AI software company in Vietnam with revenues $1bn, about the progress of AI policy in Vietnam, the support that developing nations need, and how AI policy might develop in the region.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-in-practice-minh-tran</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-in-practice-minh-tran</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Wed, 24 Sep 2025 13:33:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!F0VS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>In the next in our series looking at AI policy practice, we speak to Minh Tran, AI governance and ethical AI advisor at FPT Software, a leading AI software company in Vietnam with revenues $1bn, about the progress of AI policy in Vietnam, the support that developing nations need, and how AI policy might develop in the region.</strong></em></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!F0VS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!F0VS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 424w, https://substackcdn.com/image/fetch/$s_!F0VS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 848w, https://substackcdn.com/image/fetch/$s_!F0VS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!F0VS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!F0VS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg" width="234" height="233.7075" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:799,&quot;width&quot;:800,&quot;resizeWidth&quot;:234,&quot;bytes&quot;:69511,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://theappraisenetwork.substack.com/i/175620716?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!F0VS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 424w, https://substackcdn.com/image/fetch/$s_!F0VS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 848w, https://substackcdn.com/image/fetch/$s_!F0VS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!F0VS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57ecdfe7-d0a1-4be9-8417-f13cac490bb7_800x799.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p><strong>What is your role, and what area of AI governance do you focus on?</strong></p><p>In my role at FPT Software, I provide AI governance consulting for tech 
companies in Vietnam. Specifically, I help companies implement AI management processes aligned with ISO/IEC 42001, the standard for AI management systems.</p><p>In addition, I conduct policy research through international programs run by organisations such as the Center for AI and Digital Policy (CAIDP). I am also planning public outreach initiatives, including producing Vietnamese-language videos on AI ethics and designing responsible AI usage courses on platforms like Udemy.</p><p><strong>What is the awareness of AI policy and governance in Vietnam?</strong></p><p>In Vietnam, policymakers focus on economic growth, and technology companies tend to prioritise sales and product expansion over AI risk management and ethics. As a result, awareness of AI governance remains low and underdeveloped in the country. However, we are at a pivotal moment. Decisions Vietnamese policymakers make now about how they govern AI will shape our society for years to come.</p><p><strong>What progress has been made towards AI policy and AI governance in Vietnam and the region more widely?</strong></p><p>Vietnam and much of Southeast Asia remain in the early stages of AI policy development, trailing global leaders such as the EU and its AI Act. The Association of South East Asian Nations (ASEAN) has introduced its Guide on AI Governance and Ethics and its Responsible AI Roadmap. However, these frameworks lack enforceability. As a result, they are well-intentioned but largely aspirational.</p><p>Thailand, the Philippines, and Indonesia have made progress in data protection and AI governance. However, Vietnam is still grappling with some fundamental issues. There is no dedicated AI legislation, no enforcement mechanisms, and existing ethical guidelines lack clarity and legal force. Critically, Vietnam has yet to release a national roadmap for the safe and responsible deployment of AI. 
This regulatory void presents serious risks, not only technical failures but also broader societal harm.</p><p><strong>How would you like AI policy to develop in the region?</strong></p><p>ASEAN should have a dedicated AI policy and research centre that brings together experts, legislators, and industry representatives to exchange knowledge and co-develop practical guidelines.</p><p>There is a significant disparity in the development of AI policies both within regions and globally, and especially within developing regions. So, ideally, Vietnam and other similar countries would establish mechanisms that enable experts from developing countries to understand and keep up with the AI policy-making process, and to contribute their voices and perspectives.</p>]]></content:encoded></item><item><title><![CDATA[AI policy leaders series: Julia Mykhailiuk]]></title><description><![CDATA[Julia focuses on the intersection of EU and international AI governance, institutional engagement and multilateral policy development.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-leaders-series-julia-mykhailiuk</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-leaders-series-julia-mykhailiuk</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Wed, 03 Sep 2025 08:29:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lxMD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71306aab-9023-4722-a408-5bb3ab0fb7b7_1280x1258.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Julia focuses on the intersection of EU and international AI governance, institutional engagement and multilateral policy development. 
She recently participated in the EU General Purpose Code of Practice discussions.</strong></p><p><strong>We spoke to Julia about her role, the Code of Practice, the EU AI Act more widely, and the role civil society can play in helping shape AI policy.</strong></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!lxMD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71306aab-9023-4722-a408-5bb3ab0fb7b7_1280x1258.jpeg" alt=""></figure></div><p>&#8220;Civil society is important in holding the policy-making process accountable to public interest outcomes.&#8221;</p><p><strong>What&#8217;s your role, and why is AI policy relevant to you?</strong></p><p>In my most recent role, I co-led a Brussels-based think-tank, driving work on operationalising high-level EU regulatory frameworks like the AI Act by shaping compliance obligations for the providers of GPAI/GPAISR models and high-risk AI systems.</p><p>A significant part of my work involved direct engagement with EU institutions, including the Commission, the Parliament and the European AI Office. I provided policy analysis and contributed to shaping implementation tools such as the code of practice and general-purpose AI guidelines.
I also support the ongoing development of international AI governance and policy frameworks in the US, UK and Indo-Pacific by contributing to the work of the OECD AI expert group, as well as holding research and policy fellowships at CAIDP, fp21 and GMF.</p><p>What led me to AI policy was the realisation that technology was rapidly outpacing the institutional capacity to govern it or set any guardrails around it. With AI, we need to build regulatory frameworks for something highly technical and fast-evolving, while also ensuring that democratic values and public interests such as transparency, accountability and public oversight do not get sidelined.</p><p><strong>You were involved in helping draft the EU General Purpose Code of Practice. What&#8217;s your view on the final wording?</strong></p><p>The final version, as it stands, is a solid foundation, especially considering the diversity of stakeholders involved in the process. It was a challenging and lengthy process, but it strikes a reasonable balance between regulatory ambition and technical feasibility. It outlines clear, testable expectations around key areas, including transparency, safety, evaluation criteria, and governance processes for AI model providers.</p><p>That said, it also leaves open some questions around enforcement and alignment with national implementation mechanisms. In the future, the key will be to ensure the code is more than just a symbolic document. It will need to become a standard that can evolve with industry practices and institutional learning.</p><p><strong>What&#8217;s your view on the EU AI Act? What works? What could be improved?</strong></p><p>The European AI Act is a landmark regulation. It&#8217;s one of the first comprehensive AI regulations globally, and it introduces a scalable risk-based approach to regulating AI.
It is built around a risk classification system that emphasises the importance of robust safety requirements and accountability mechanisms, which are particularly valuable in establishing baseline expectations across diverse industry sectors integrating AI systems.</p><p>It could potentially evolve further in its operational capacity for both general-purpose AI models and high-risk systems, especially in clarifying compliance responsibilities across the value chain. Policymakers also need to pay close attention to how the Act interacts with adjacent digital policies as well as GDPR and sector-specific regulations, to avoid fragmented implementation and conflicting obligations.</p><p><strong>What role do you see for civil society in shaping AI policy?</strong></p><p>Civil society is important in holding the policy-making process accountable to public interest outcomes. That is the primary objective of civil society work. One of the risks in tech regulation is that the loudest voices are often also the best-resourced. Civil society&#8217;s role is to bring attention to the public interest as a counterweight to that, whether by surfacing specific incidents, making the case for stronger safety guardrails, or translating policy impacts for the public to understand what they mean to them. Civil society organisations help keep that regulatory conversation grounded. The challenge is ensuring meaningful participation beyond the consultation stage. That requires engaging directly with policymakers to ensure that different voices can be heard. Making sure that the right information reaches the right policymaker at the right time is both the key challenge and the key objective.</p><p>AI providers are understandably involved in the legislative and regulatory process. They often argue that their input is necessary due to the complexity and highly technical nature of AI development.
However, they are not the only ones who can understand these models, test them, or indeed decipher whether they are transparent and safe, or whether they introduce biases and risks. If we rely solely on the internally generated test results and safety information provided by these companies, then we are essentially asking them to check their own homework. This is something we do not generally allow companies to do in other safety-critical regulated industries such as pharmaceuticals or engineering.</p><p><strong>What&#8217;s next for AI policy?</strong></p><p>Within the EU/EEA, it will be the implementation of the AI Act for GPAI/GPAISR models and high-risk AI systems, their oversight, regulatory enforcement and wider guidance for public and private sector adoption. At the same time, AI governance does not stop at the borders of the EU member states. As we are seeing, AI development and integration is not just a regulatory challenge but a geopolitical one. The tension between how the US and China approach AI governance is already affecting how global standards are being shaped through measures such as export controls and industrial policies. Europe is in a unique position here. It does not compete on model development or raw compute power, but it has normative power to influence global AI governance.
Used wisely, it can be harnessed not just for a political extension of domestic regulation, but also to ensure that democratic resilience remains important in an increasingly competitive, multipolar AI landscape.</p>]]></content:encoded></item><item><title><![CDATA[AI policy in practice: Rosalie Brown]]></title><description><![CDATA[Rosalie Brown leads tech policy at TheCityUK, the industry-led body representing UK-based financial and related professional services.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-in-practice-rosalie-brown</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-in-practice-rosalie-brown</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Thu, 21 Aug 2025 08:34:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HLM0!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf6be2d3-43ed-4a82-b156-07f1204bf521_960x960.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Rosalie Brown leads tech policy at TheCityUK, the industry-led body representing UK-based financial and related professional services. She works with experts from across the industry, regulators and the government to collectively shape a policy environment that supports technology adoption and innovation.</strong></p><p><strong>We talk to Rosalie about her view on the AI policy landscape in the UK, how TheCityUK will continue to shape AI policy and how she&#8217;d like to see AI policy develop.</strong></p><p><strong>What&#8217;s your view on the AI policy landscape in the UK? What&#8217;s working and what could be improved?</strong></p><p>Well, firstly, you can&#8217;t talk about AI and financial services without discussing regulation.
The UK&#8217;s pro-innovation, sectoral approach to regulating AI is welcome and creates a strong foundation for enabling growth sectors to harness the full potential of these technologies in various contexts.</p><p>I also support financial services regulators&#8217; efforts to implement this by regulating these technologies within existing frameworks, rather than creating new AI-specific rules. It&#8217;s positive that the UK isn&#8217;t taking a prescriptive approach like the EU AI Act. However, the international landscape is very fragmented, and to avoid complex and onerous compliance burdens, many global firms will need to apply the &#8216;highest watermark&#8217; set by other international regulators, limiting any adoption gains from an agile UK approach. It&#8217;s vital that the UK leverages its position internationally to influence global standards and develop common AI principles.</p><p>Beyond regulation, it&#8217;s vital that UK policy addresses the AI skills gaps. The Financial Services Skills Commission (FSSC) reports a 35% gap between AI skills demand and supply across financial services &#8211; and we are a tech-forward industry! Based on a number of recent announcements and initiatives, the government is taking this issue seriously, and the FSSC is doing lots of great work in this space, including developing a financial services AI skills compact. However, we need to ensure AI skills and literacy are accessible to everyone across the UK. Effective policy will be key to ensure that people are brought along on the AI journey and avoid deepening existing inequalities.</p><p><strong>What&#8217;s your view on the UK AI Opportunities Action Plan?</strong></p><p>Overall, the UK&#8217;s AI Opportunities Action Plan contains many positive recommendations and sets an ambitious and welcome direction for the UK. 
However, it&#8217;s important to note that the EU has published its AI Continent Action Plan, while the Trump administration recently released &#8216;America&#8217;s AI Action Plan&#8217;. All of these action plans set out individual jurisdictions&#8217; ambitions to be global leaders in AI. The UK plan is targeted and credible, but it&#8217;s a competitive race and will come down to delivering AI adoption, innovation and infrastructure at pace. At the same time, AI in the UK is currently largely reliant on US firms, which underscores the importance of international alignment and long-term investment in domestic capability.</p><p><strong>What role do you see for TheCityUK in continuing to shape AI policy in the UK?</strong></p><p>TheCityUK plays a leading role in championing the success of the financial and related professional services ecosystem, promoting policies in the UK and internationally that drive competitiveness, support job creation and enable long-term economic growth. Securing the UK&#8217;s position as a global leader in the use of AI is integral to this mission and a priority for our organisation.</p><p>There are a number of complexities in AI regulation for financial services that require close and ongoing collaboration between regulators, policymakers and industry to find solutions and share emerging best practices. This is key to ensure the industry can harness the full potential of these technologies while managing the risks and avoiding burdensome or complex compliance that would ultimately harm UK competitiveness and economic growth.</p><p>As the policy landscape evolves from high-level principles to detailed implementation, TheCityUK will play a central role in shaping a regulatory environment that is agile, proportionate, and internationally coherent. 
Through thought leadership, industry coordination, and constructive engagement with government and regulators, we aim to ensure that the UK remains at the forefront of responsible AI adoption across our industry.</p><p><strong>In your view, what&#8217;s next for AI policy &#8211; where and how would you like to see it develop?</strong></p><p>What&#8217;s next? Who <em>really</em> knows! It&#8217;s still unclear whether the government will introduce legislation targeting frontier AI models in the near future, but there will likely be a growing emphasis on third-party assurance to verify the trustworthiness of AI systems.</p><p>Over the next few years, we will move to embedding the UK&#8217;s AI principles into more mature regulatory approaches across different sectors. This needs to be supported by stronger coordination between regulators and progress on technical standards.</p><p>As global discussions mature, the UK will also need to navigate the balance between maintaining its bespoke, pro-innovation framework and aligning with international approaches. Overall, I hope we will continue to see policy that gives firms the confidence, clarity, and capability to scale AI responsibly.</p><p>I&#8217;d like to see a more cohesive global approach and policy that drives greater trust and understanding of AI across our society. Beyond what I&#8217;ve already mentioned, I believe it&#8217;s essential we make more progress on the environmental sustainability of AI. To date, environmental considerations have been largely absent from mainstream AI policy discussions.
The government&#8217;s AI Energy Council is a good starting point, but I would like to see our AI ambitions matched by commitments to develop greener, more energy-efficient AI systems.</p>]]></content:encoded></item><item><title><![CDATA[AI policy in practice: Shradha Mathur]]></title><description><![CDATA[Founded in 2023, OpenSphere helps maximise visa approval chances with AI-powered assistance and legal guidance.]]></description><link>https://www.appraisenetwork.ai/p/ai-policy-in-practice-shradha-mathur</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/ai-policy-in-practice-shradha-mathur</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Mon, 04 Aug 2025 13:31:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CEDC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff0599a0-2578-4be3-98e4-8b15b9c56d5f_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Founded in 2023, OpenSphere helps maximise visa approval chances with AI-powered assistance and legal guidance. 
We talk to the firm&#8217;s legal operations lead Shradha N Mathur about her role, how AI policy isn&#8217;t abstract and why we need greater participation in the policy process.</strong></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!CEDC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff0599a0-2578-4be3-98e4-8b15b9c56d5f_1080x1080.png" alt=""></figure></div><blockquote><p>&#8220;Good AI policy starts with real-world context. If we want systems that work for people, we need rules shaped by the people they affect.&#8221;</p></blockquote><p><strong>What&#8217;s your role and why is AI policy important to you?</strong><br>At OpenSphere, where I lead legal operations, we build AI-powered tools for immigration law. It&#8217;s a high-stakes environment, dealing directly with people&#8217;s futures and legal rights. AI policy here isn&#8217;t abstract. It shapes how decisions are made, how systems are explained, and how responsibility is assigned.
It&#8217;s not just about risk mitigation but about asking who this technology serves and who it might overlook.</p><p><strong>How do you help OpenSphere meet evolving AI regulations in India and globally?</strong></p><p>My role sits at the intersection of compliance, ethics, and operational strategy. I work closely with our product and engineering teams to ensure that our systems prioritize transparency, data integrity, and user accountability from the design stage onward.<br><br>Given that our core users interact with the U.S. immigration system, we keep our workflows closely aligned with US Citizenship and Immigration Services (USCIS) expectations. This includes reviewing how our AI-assisted outputs support evidence compilation, follow documentation norms, and maintain clarity in petition structures. We aim to ensure that every AI recommendation is auditable, explainable, and ultimately supports human legal judgment within the framework USCIS requires.</p><p><strong>What legal and ethical frameworks guide your approach?</strong></p><p>I rely on a combination of local data protection norms and international human rights-based frameworks. The Indian IT Act and pending DPDP Act shape baseline compliance, while documents like the UNESCO Recommendation on the Ethics of AI and the OECD AI Principles inform how we think about fairness, accountability, and human oversight.<br><br>In practice, this means setting up clear consent flows, limiting data collection to purpose-specific use, and maintaining documentation that explains how models are trained and refined. At OpenSphere, we are building explainability features not just for compliance but also to help attorneys and clients understand how recommendations are generated and where human review fits in.</p><p><strong>How do you help ensure responsible AI development and deployment?</strong></p><p>By getting involved early in the design and development process. 
At OpenSphere, legal and ethical reviews are not final-stage checks. They are part of the core design process.<br><br>We create internal review processes that bring legal, technical, and operational teams together. This ensures difficult questions are asked before building: What data is really needed? What harms could emerge? Are there fallback options if the model fails? Embedding these conversations into project workflows helps shift responsibility from afterthought to default.</p><p><strong>What do you think of AI policy and how could it be improved?</strong></p><p>There&#8217;s a lot to appreciate in the current landscape. India is moving toward formal data protection law, and globally, frameworks like the EU AI Act are setting important precedents. There&#8217;s also more awareness now about systemic risk and the need to include impacted communities in policymaking.<br><br>At the same time, much of the regulation remains broad and interpretive. We need more sector-specific guidance, especially in domains like healthcare, education, and finance. Startups and smaller organizations often want to follow best practices but lack clarity on what exactly those are.<br><br>Globally, I think we need more focus on implementation. There are plenty of high-level principles. The gap is in making those principles usable on a day-to-day basis, especially for product, legal, and operations teams.</p><p><strong>How do you think AI policy should develop in future to meet the challenges and opportunities of AI?</strong></p><p>AI policy needs to become more practical, accessible, and rooted in lived experience. That means more guidance, not just regulation. Templates, checklists, case studies, and community-driven audits could go a long way in translating policy goals into everyday decisions.<br><br>We also need broader participation in shaping policy. Communities affected by automated decisions, domain experts, and small innovators all need a seat at the table. 
If we want AI systems that are safe, inclusive, and reliable, we need AI policy that is participatory and grounded in context.</p>]]></content:encoded></item><item><title><![CDATA[Five things to know about the Paris AI Action Summit ]]></title><description><![CDATA[By Audrey Hingle.]]></description><link>https://www.appraisenetwork.ai/p/five-things-to-know-about-the-paris</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/five-things-to-know-about-the-paris</guid><dc:creator><![CDATA[Audrey Hingle]]></dc:creator><pubDate>Wed, 05 Feb 2025 14:05:00 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1526821799652-2dc51675628e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMHx8cGFyaXN8ZW58MHx8fHwxNzYwMDE0MTE2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>By Audrey Hingle. Originally published in a shorter version in <a href="https://internet.exchangepoint.tech/deepseek-and-the-creative-power-of-constraints-doing-more-with-less-compute/">Internet Exchange</a> and adapted for this format.</em></p><div class="captioned-image-container"><figure><img src="https://images.unsplash.com/photo-1526821799652-2dc51675628e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyMHx8cGFyaXN8ZW58MHx8fHwxNzYwMDE0MTE2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" alt="buildings during nighttime scenery"><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@lucamicheli">Luca Micheli</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>On <strong>February 10-11</strong>, France will host the <a href="https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia?ref=internet.exchangepoint.tech">Paris AI Action Summit</a>, the next step in global AI coordination following the UK&#8217;s 2023 <a href="https://www.gov.uk/government/topical-events/ai-safety-summit-2023?ref=internet.exchangepoint.tech">Bletchley Park AI Safety Summit</a> and the <a href="https://aiseoulsummit.kr/?ref=internet.exchangepoint.tech">Seoul AI Summit</a> in May 2024.</p><p>While previous summits emphasized AI safety, the Paris AI Action Summit aims to broaden discussions to include governance, democracy, defense, and the economy. Here&#8217;s what to watch for:</p><p><strong>1. A Shift from Risk to Governance</strong></p><p>The <strong>Bletchley Park Summit</strong>, hosted by the UK government in 2023, centered on long-term AI risks, which led to <a href="https://www.gov.uk/government/news/historic-first-as-companies-spanning-north-america-asia-europe-and-middle-east-agree-safety-commitments-on-development-of-ai">voluntary safety commitments</a> from major AI companies. The Seoul summit focused on near-term AI risks, regulation, and cooperation, leading to the <a href="https://www.korea.net/NewsFocus/policies/view?articleId=251833">Seoul Declaration</a>, more voluntary safety commitments, and a global AI safety research network.
France is broadening the scope to include AI&#8217;s impact on democracy, defense, and the economy. <a href="https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia?ref=internet.exchangepoint.tech">More on the official website.</a></p><p><strong>2. India&#8217;s Role as Co-Chair</strong></p><p>At France&#8217;s request, India will co-chair the event. Prime Minister Narendra Modi is likely to advocate for global AI governance frameworks with an emphasis on accessibility and ethical AI development, particularly for the Global South. India is home to a <a href="https://www.fortuneindia.com/technology/indias-ai-ambitions-are-we-really-falling-behind-the-us-and-china-or-is-the-picture-more-complex/120434">rapidly expanding AI industry</a>, with major investments in AI-driven healthcare, finance, and government services.</p><p>Earlier this week, OpenAI CEO <a href="https://www.reuters.com/technology/openais-altman-meets-with-india-it-minister-discuss-countrys-ai-plans-2025-02-05/">Sam Altman met with India&#8217;s IT Minister</a>, Ashwini Vaishnaw, to discuss India&#8217;s strategy for developing a comprehensive and affordable AI ecosystem. Altman expressed OpenAI&#8217;s willingness to collaborate and highlighted India&#8217;s rapid adoption of AI technologies&#8212;the country&#8217;s user base for OpenAI products has tripled in the past year, making India its second-largest market. 
As India navigates its role between Western AI regulatory approaches and the priorities of developing nations, it is <a href="https://carnegieendowment.org/posts/2024/09/disrupting-ai-safety-institutes-the-india-way?lang=en">emerging as a bridge</a> in global AI governance.</p><p>Beyond AI, Modi&#8217;s visit will also focus on strengthening India-France relations, with discussions expected to finalize significant <a href="https://www.indiatoday.in/india/story/pm-modi-upcoming-france-visit-rafale-m-scorpene-deals-fast-track-emmanuel-macron-2674272-2025-02-03">defense agreements</a>, <a href="https://www.thehindu.com/news/national/india-france-discuss-high-tech-cooperation-civil-nuclear-issues-ahead-of-modi-visit-in-february/article69124019.ece">enhance cooperation</a> in high-technology sectors and address civil nuclear issues. <a href="https://www.lemonde.fr/en/international/article/2025/01/15/india-to-co-chair-paris-ai-summit-in-february_6737072_4.html?ref=internet.exchangepoint.tech">Read about India&#8217;s role</a> at the summit and<a href="https://www.business-standard.com/india-news/pm-narendra-modi-may-push-for-global-guidelines-on-ai-at-paris-summit-125020401523_1.html?ref=internet.exchangepoint.tech"> Modi&#8217;s AI agenda</a>.</p><p><strong>3. A High-Profile Guest List</strong></p><p>The summit will bring together nearly 1,000 key figures, including tech leaders from Alphabet, Microsoft, OpenAI, and Anthropic, as well as heads of state and government officials shaping AI policy in leading nations. It will also feature think tanks, campaign groups, and research institutes focused on AI governance, alongside artists and cultural figures, reflecting AI&#8217;s growing influence on creativity and media. <a href="https://www.reuters.com/technology/artificial-intelligence/trump-deepseek-focus-nations-gather-paris-ai-summit-2025-02-05/?ref=internet.exchangepoint.tech">Learn more</a>.</p><p><strong>4. 
Focus on Open-Source AI &amp; Clean Energy</strong></p><p>One of the most anticipated discussions at the summit will be France&#8217;s push for open-source AI and sustainable computing. Unlike companies such as OpenAI and Google DeepMind, which keep their most advanced models proprietary, France has been actively supporting open-source AI development. Additionally, France is expected to advocate for sustainable AI infrastructure, including energy-efficient AI models that require less compute power, regulations on AI&#8217;s environmental impact, and development of green data centers to reduce AI&#8217;s carbon footprint. <a href="https://www.reuters.com/technology/artificial-intelligence/trump-deepseek-focus-nations-gather-paris-ai-summit-2025-02-05/?ref=internet.exchangepoint.tech">More on this</a>.</p><p><strong>5. Fringe Events in London &amp; Paris</strong></p><p>Alongside the main summit, independent organizations are hosting a two-day event at the British Library in London and partner-led discussions in Paris on AI policy, ethics, and open-source development. The event will explore key themes and outcomes from the Summit, focusing on their impact on policymakers, businesses, and citizens. It will also revisit AI ecosystem challenges in the UK, building on insights from previous AI Fringe events. 
<a href="https://aifringe.org/">Find details</a>.</p>]]></content:encoded></item><item><title><![CDATA[In conversation with Ben Lyons, Director of Policy and Public Affairs at Darktrace]]></title><description><![CDATA[Founded in 2013 by experts in AI and cyber defence, Darktrace is a global leader in cybersecurity AI, delivering the essential cybersecurity platform to protect organizations today and for an ever-changing future.]]></description><link>https://www.appraisenetwork.ai/p/in-conversation-with-ben-lyons-director</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/in-conversation-with-ben-lyons-director</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Fri, 17 Jan 2025 14:21:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8ZKE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Founded in 2013 by experts in AI and cyber defence, Darktrace is a global leader in cybersecurity AI, delivering the essential cybersecurity platform to protect organizations today and for an ever-changing future.</p><p>We talk to the firm&#8217;s Director of Policy and Public Affairs, Ben Lyons, about his role and the AI policy landscape.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8ZKE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!8ZKE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8ZKE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8ZKE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8ZKE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8ZKE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg" width="292" height="292" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1370,&quot;width&quot;:1370,&quot;resizeWidth&quot;:292,&quot;bytes&quot;:427659,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://theappraisenetwork.substack.com/i/175619826?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087876c9-3063-4178-8cfe-5f68f5f9f521_1370x2147.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8ZKE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8ZKE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8ZKE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8ZKE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b2221c3-e231-4a59-a052-a048c2761ff2_1370x1370.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>Tell us about your role at Darktrace.</strong></p><p><em>I lead our policy and public affairs function. Darktrace is a global AI cybersecurity company operating at the intersection of several important policy debates. My role involves bridging the gap between technologists, policymakers, and cyber analysts in different regulatory jurisdictions. This involves translation between technical and policy communities, and sharing learning from different countries, to help build a more secure future.</em></p><p><strong>What attracted you to cybersecurity and AI?</strong></p><p><em>My career has been split between tech and telecoms in the private sector, and I have worked on AI and data in the UK government&#8217;s Department for Science, Innovation, and Technology (DSIT).
Joining Darktrace was the perfect opportunity to bring those experiences together for a high-growth, innovative company addressing a crucial global challenge. Cybersecurity is critical to ensuring institutions, businesses, and people can trust technology to deliver its potential for economic growth and better public services. AI can play a transformative role in improving cyber resilience, enabling trust, and unlocking opportunities for citizens and consumers. Supporting these goals is what makes my role so exciting.</em></p><p><strong>Darktrace operates globally. How do you navigate competing regulatory regimes?</strong></p><p><em>It&#8217;s about striking the right balance between tracking regulatory developments and engaging proactively. On the one hand, we&#8217;ve built robust tracking processes to monitor formal developments in our priority markets. But beyond reacting, we engage with governments, academia, and civil society to anticipate and contribute to emerging debates. This dual approach &#8211; short-term responsiveness and medium-term foresight &#8211; helps us to navigate the evolving regulatory landscape.</em></p><p><strong>What are the key AI policy priorities for the UK?</strong></p><p><em>The UK government is taking a largely sector-led approach, although it is planning to also pursue a separate regime for the most powerful AI models. This approach might include strengthening the AI Safety Institute.</em></p><p><em>Building state capability will be essential. The Labour Government&#8217;s proposal to introduce the Regulatory Innovation Office may help streamline regulatory processes. Upskilling regulators and fostering AI assurance frameworks are other critical areas. 
Effective regulation isn&#8217;t just about what&#8217;s written down in legislation &#8211; it&#8217;s also about ensuring the regulators themselves have the mandate and skills to adapt with technology and partner with industry to drive responsible innovation.</em></p><p><strong>How would you like to see AI policy develop in the UK?</strong></p><p><em>The UK has a massive opportunity to be an AI leader. AI can drive economic growth, improve public services, and tackle long-standing challenges like sluggish productivity.</em></p><p><em>The UK has many strengths. These include brilliant universities, a strong venture funding ecosystem, and accomplished developers. It can be one of the best places to build an innovative AI company today.</em></p><p><em>On the economic front, AI should be at the centre of the government&#8217;s upcoming industrial strategy. This isn&#8217;t just about creating a bigger AI industry, but also about ensuring that companies in other industries are able to harness AI to become more productive. In the context of industrial strategy, this means tackling the barriers to AI use across the UK, and supporting identified priority sectors to help them drive adoption.</em></p><p><em>For the public sector, DSIT&#8217;s Digital Centre for Government has an opportunity to use tech to make services more effective, accessible and focused on people. This will require strong senior-level support and commitment across departments.</em></p><p><strong>What role does Darktrace play in this landscape?</strong></p><p><em>We&#8217;re AI optimists, and we&#8217;ll bang the drum for policy that supports innovation and adoption. But we&#8217;re only going to realise the potential of technology if it&#8217;s secure. Cybersecurity is a prerequisite for reliable, privacy-preserving AI use that is ultimately trustworthy.</em></p><p><em>And specifically within the cyber domain, AI is a double-edged sword.
On the attacker side, generative AI is being weaponised to conduct reconnaissance and mount sophisticated social engineering attacks. On the defender side, companies like Darktrace are helping organisations fight back with multi-layered AI to enable continuous monitoring for threats, anomaly detection, and autonomous response.</em></p><p><em>Often we find ourselves acting as a bridge between the AI and cyber policy communities. We can&#8217;t think of these domains in isolation!</em></p>]]></content:encoded></item><item><title><![CDATA[Interview: Closing the digital divide]]></title><description><![CDATA[An interview with Liz Williams MBE, Chief Executive of FutureDotNow &#8211; an organisation addressing the digital capability gap among employees in the UK.]]></description><link>https://www.appraisenetwork.ai/p/interview-closing-the-digital-divide</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/interview-closing-the-digital-divide</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Sun, 24 Nov 2024 09:36:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HLM0!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf6be2d3-43ed-4a82-b156-07f1204bf521_960x960.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>An interview with Liz Williams MBE, Chief Executive of <a href="https://futuredotnow.uk/">FutureDotNow</a> &#8211; an organisation addressing the digital capability gap among employees in the UK.</strong></p><p>The digital skills gap in the UK workplace is a significant barrier to progress, preventing many employees from thriving in the digital economy. The rise of artificial intelligence (AI) has made these skills even more essential.
We spoke to Liz Williams MBE about the current state of digital skills and how business and Government can begin to address these challenges in the age of AI.</p><p><strong>What&#8217;s the current state of digital skills in the UK workforce?</strong></p><p>All workers need foundational digital skills, as identified through the <a href="https://futuredotnow.uk/about-us/the-essential-digital-skills-framework/">Essential Digital Skills Framework</a>, developed jointly by industry and Government. The framework includes 20 tasks agreed to be essential for work and life, addressing areas like online safety, security, productivity, and collaboration.</p><p>However, the Lloyds Bank Consumer Digital Index shows that 54% of working-age adults are missing digital basics around safety, productivity, and more. Nearly 22 million working-age adults can&#8217;t complete all the digital tasks essential for today&#8217;s workforce. Moreover, nearly 2 million people in the UK cannot complete any of the essential digital tasks. This gap is significant because as AI continues to automate less-skilled jobs, workers will need to upskill to remain employable. Raising the floor means ensuring everyone has these digital foundations as a starting point.</p><p><strong>What are the most important skills for workers to develop?</strong></p><p>We have a &#8220;hidden middle&#8221; in the workforce. These individuals fall between highly skilled professionals and those entirely offline, and they are often overlooked in efforts to build digital capability. For instance, many initiatives to help small and medium enterprises (SMEs) leverage technology have struggled because their digital skills gaps mirror those of society at large. One of the biggest skill gaps is around staying safe online. Given growing cybersecurity risks, businesses can focus on this as a starting point. 
But to do this, we need to raise awareness of where people are today when it comes to their digital capabilities and confidence.</p><p>That&#8217;s why we need a collective effort between businesses and the Government. We need a &#8216;great digital catch-up&#8217; to raise awareness of the skills gap and take concrete steps to address it through collaboration.</p><p><strong>How does AI fit into the digital skills debate, and how well prepared is the UK workforce for the AI revolution?</strong></p><p>AI and digital skills are tightly interconnected. While businesses are eager to capitalise on AI&#8217;s benefits, there is a lag in ensuring workers have the core digital skills to keep up. As we&#8217;ve discussed, the issue is not AI but a broader problem. UK workers are not building the full suite of digital skills quickly enough. That&#8217;s why we must invest in helping people build the digital skills needed to fully participate in the modern economy and remain employable over time.</p><p>Given this, the Government and business must prepare the UK workforce for the disruptions AI will bring. While businesses are focused on the productivity gains AI offers, not enough attention is being paid to the human impact. Jobs will be displaced, and many workers lack the digital literacy needed to transition into new roles.</p><p>AI is a tale of two stories: one of technological advancement and productivity and another of human displacement and lack of preparedness. We need to pay more attention to the second story. We must learn lessons from history, from the Industrial Revolution, to ensure we bring workers with us in this digital transformation.</p><p>Those leading the AI charge need to be more aware than anybody else of the digital realities of the UK.
We tend to build for the digital society we aspire to be rather than where the general population is today when it comes to digital capability.</p><p>Facts like:</p><ul><li><p>1.9 million UK households find it difficult to afford mobile data, and 1.4 million households struggle to afford broadband. (Source: <a href="https://www.ofcom.org.uk/phones-and-broadband/saving-money/affordability-tracker/">Communications Affordability Tracker</a>)</p></li><li><p>2.1m adults are offline with essentially no digital skills. 15% of those offline are under 50. (Source: <a href="https://www.lloydsbank.com/consumer-digital-index.html">Lloyds Bank 2023 UK Consumer Digital Index</a>)</p></li><li><p>3.7m households with children are below the Minimum Digital Living Standard. (Source: <a href="https://mdls.org.uk/publications/">Minimum Digital Living Standard Findings Overview</a>)</p></li><li><p>8.5m adults lack the full set of digital Foundation skills &#8211; these are the very basics, like turning on a PC, using a mouse or finding and connecting to Wi-Fi; 1.3m can&#8217;t do any tasks at this level. (Source: <a href="https://www.lloydsbank.com/consumer-digital-index.html">Lloyds Bank 2023 UK Consumer Digital Index</a>)</p></li><li><p>4.4m adults lack the full set of Essential Digital Skills for Life &#8211; skills like being able to transact online or set up and use an email account; 1.5m adults can&#8217;t do any tasks at this level. (Source: <a href="https://www.lloydsbank.com/consumer-digital-index.html">Lloyds Bank 2023 UK Consumer Digital Index</a>)</p></li></ul><p>Those are digital realities in the UK today.
And when you think about the working population and the change AI is going to bring to the world of work, it&#8217;s really important AI developers and creators are acutely aware of where people are now, so we don&#8217;t simply leave people behind.</p><p>FutureDotNow is working with the Turing Institute on how we can combine the essential digital skills UK workers need for AI with the essential digital skills they need for work and life. This is especially important with AI because the technology is going to be pushing boundaries. This is where lifelong learning comes in. But such lifelong training needs to be short and accessible.</p><p><strong>How do we tackle the digital skills gap in the UK Workforce?</strong></p><p>At FutureDotNow, we&#8217;re encouraging businesses to sign up for the <a href="https://futuredotnow.uk/charter">Workforce Digital Skills Charter</a>, a collective effort to supercharge workforce development. The charter focuses on three key areas:</p><ul><li><p>Raising awareness of the skills gap</p></li><li><p>Driving national change through coordinated efforts, and</p></li><li><p>Empowering people to build digital foundations.</p></li></ul><p>More than 70 organisations are working on our delivery plan in a series of sprints. 
So, we&#8217;re bringing businesses together to help provide a solution.</p><p>We&#8217;re advocating for a &#8220;national digital catch-up,&#8221; encouraging the Government to start with three low-cost steps:</p><ol><li><p>Set a national ambition we can all share to equip everyone with essential digital skills</p></li><li><p>Clearly define the national minimum digital skill set for workers based on the Essential Digital Skills Framework, and</p></li><li><p>Provide incentives, such as the skills and growth levy, to encourage investment in digital upskilling, to empower and galvanise business action.</p></li></ol><p><strong>How can AI developers, creators or deployers get involved?</strong></p><p>They are such an important community. Everyone who is leading the AI charge needs to understand the digital realities of the UK, where many households and individuals still struggle to meet basic digital standards. So, I&#8217;d encourage them to pay as much attention to preventing workforce displacement as they do to reaping productivity gains from AI. And how AI might be able to close the digital divide in the UK; that would be an exciting prospect!</p><p>AI and digital skills are ultimately crucial for economic growth and social mobility. 
But, without coordinated action, many jobs could disappear, leaving people without the skills needed to secure new opportunities.</p><p>Through collaboration between businesses and the Government, and focusing on inclusive digital upskilling, the UK can bridge the digital skills gap and ensure a future where everyone can participate in the digital economy.</p><p>For more information on how to get involved, visit <a href="https://futuredotnow.uk/charter/">https://futuredotnow.uk/charter/</a>.</p>]]></content:encoded></item><item><title><![CDATA[Consider the views of many and take time: lessons from the EU AI Act with Kai Zenner]]></title><description><![CDATA[As the EU AI Act comes into force today, 1 August, James Boyd-Wallis spoke to Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament, about its implementation and what lessons he has for UK policymakers considering AI legislation.]]></description><link>https://www.appraisenetwork.ai/p/consider-the-views-of-many-and-take</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/consider-the-views-of-many-and-take</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Thu, 01 Aug 2024 13:20:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FNzW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>As the EU AI Act comes into force today, 1 August, James Boyd-Wallis spoke to Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament, about its implementation and what lessons he has for UK policymakers considering AI legislation.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!FNzW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FNzW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 424w, https://substackcdn.com/image/fetch/$s_!FNzW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 848w, https://substackcdn.com/image/fetch/$s_!FNzW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!FNzW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FNzW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg" width="292" height="292" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:800,&quot;resizeWidth&quot;:292,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!FNzW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 424w, https://substackcdn.com/image/fetch/$s_!FNzW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 848w, https://substackcdn.com/image/fetch/$s_!FNzW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!FNzW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff63f6c04-606b-4629-b850-6b05b5a397ee_800x800.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>Where are we now with the EU AI Act?</strong></p><p>The EU AI Act has been published in the Official Journal of the EU, meaning that it is now law and will become applicable on 1 August. Next, the Act has a few transition periods, meaning parts of the law will come into force at different times. For example, after six months, on 2 February next year, provisions on banned AI systems will take effect, meaning the development and deployment of such AI systems must stop. Then, the next big date is 2 August 2025, when regulation on foundation models will kick in. Finally, the remainder of the Act will become applicable on 2 August 2026.
It will be interesting to monitor enforcement after August next year and in 2026 to see if the EU Commission focuses on the big tech platforms or also enforces against smaller homegrown AI firms.</p><p><strong>How are the EU and its member states preparing?</strong></p><p>The European Commission and, to a certain extent, EU member states are busy with two things. First, they are trying to build up an AI governance system. For example, the AI Office in the European Commission needs to be built, as does the AI Board, the scientific panel and other bodies that will take care of governance and enforcement at a European level. Within the member states, each will need to develop a regulatory sandbox and designate or appoint national competent authorities to look after the AI Act. All of that requires time, and everyone lacks AI talent. So, there will be a battle to find the brightest people available to fill those positions.</p><p>Second, the Commission and the member states will also need to create a lot of secondary legislation, meaning guidelines, templates, delegated acts, and implementing acts to add specifics to the EU AI Act. This secondary legislation is necessary because the AI Act as a law is broad, even vague, in many chapters and articles. The hope is that these additional documents will specify, for example, how to conduct a risk assessment, among many other areas.</p><p><strong>Is the EU&#8217;s approach to regulating AI sensible?</strong></p><p>The European Parliament wanted to establish horizontal but non-binding AI principles that apply to all AI systems (like the US AI Bill of Rights) and then complement them with sectoral legislation. The Commission decided differently. Instead, they created one horizontal AI act, which goes into detail and applies to every sector and use case. This approach creates problems because not all AI is the same. For instance, the AI in a hospital has different risks from the AI driving a facial recognition system for CCTV.
My privacy may be violated in the second case.</p><p>The UK and US approach might prove better, and the EU may run into problems. However, it depends on the merits of our secondary legislation and whether those documents differentiate the legal requirements across use cases and sectors. If the technical harmonised standards cannot do that, innovation may be stifled or hampered, especially among smaller companies that cannot afford the cost of compliance. In that case, European companies will be at a competitive disadvantage compared with UK and US start-ups and SMEs. This disadvantage is a significant risk with our prescriptive AI Act.</p><p><strong>What lessons do you have for UK policymakers considering how to regulate AI?</strong></p><p>One of the strengths of the UK is its cooperative approach. For instance, in data protection, the ICO is one of the few data protection authorities talking to everyone, whether civil society, industry, academics or other stakeholders. I also see this with the Competition and Markets Authority and its approach to foundation models, where it has a careful, very evidence-based strategy.</p><p>This contrasts with the EU&#8217;s approach to foundation models, where several decisions have been taken because of the French start-up Mistral and the German start-up Aleph Alpha. So, my first lesson would be for UK policymakers to continue talking to and considering the input from many stakeholders to avoid a similar situation. No one company or organisation should have an outsized impact on any legislation. Policymakers must also remain agile and check whether proposals benefit all rather than just one business or sector.</p><p>Next, I would urge policymakers to take their time with AI regulation. While the European Union started the process in 2014, the trilogue phase, where the Commission, Council and Parliament debate the outstanding issues, took only three to four months. That was way too fast. 
Many issues were not discussed, which is one reason the quality of the final legal text is poor. There are too many vague points and contradictions.</p>]]></content:encoded></item><item><title><![CDATA[How AI narratives shape policy and public opinion]]></title><description><![CDATA[An interview with Dr Kerry McInerney, Research Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge, by James Boyd-Wallis.]]></description><link>https://www.appraisenetwork.ai/p/how-ai-narratives-shape-policy-and</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/how-ai-narratives-shape-policy-and</guid><dc:creator><![CDATA[James Boyd-Wallis]]></dc:creator><pubDate>Tue, 25 Jun 2024 08:39:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HLM0!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf6be2d3-43ed-4a82-b156-07f1204bf521_960x960.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>An interview with Dr Kerry McInerney, Research Fellow at the <a href="http://lcfi.ac.uk/">Leverhulme Centre for the Future of Intelligence (CFI)</a> at the University of Cambridge, by James Boyd-Wallis.</strong></p><p>As anyone in communications will attest, narratives matter. So, how we talk about artificial intelligence (AI) and its risks and benefits can significantly influence its development, regulation and place in public opinion. 
And from <a href="https://www.cam.ac.uk/stories/ai-narratives">Homer to HAL</a>, humans were telling stories about AI long before we had computers.</p><p>Given the increasing exposure and adoption of the technology, and as policymakers consider legislation, I spoke to Dr Kerry McInerney to explore some of these narratives, metaphors and analogies and their impact on policy.</p><p><em>Photo credit: Jason Sheldon / Junction 10 photography</em></p><p>&#8220;Currently, some people talk about artificial intelligence as a revolutionary technology, representing a complete break from the past. We see that in many national AI strategies. However, many are pushing back on this narrative and are instead drawing comparisons to previous technologies,&#8221; explains Dr McInerney, who co-leads <a href="http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/">the Global Politics of AI</a> project at the CFI, investigating how artificial intelligence is reshaping international relations.</p><p>&#8220;Metaphors can be a good strategy for drawing people towards legislation and governance mechanisms. For instance, one of the analogies we hear among policymakers and the media is whether AI is the new nuclear. But we need to probe the limits of this comparison,&#8221; says Dr McInerney.</p><p>&#8220;We could argue that the governance of nuclear weapons has been a success story. We have reached global agreements and alliances preventing their proliferation.&#8221;</p><p>Such successes may suggest a parallel path for managing the societal impact of AI. &#8220;However, while this analogy may be useful for policymakers seeking to create international governance regimes for AI, this narrative of success often overshadows the terrible impact of nuclear testing on people and the environment worldwide,&#8221; argues Dr McInerney. So, the situation is more nuanced than a simplistic comparison.</p><p>There are additional historical analogies now gaining traction. 
&#8220;We see the rhetoric of an AI arms race being put forward by the defence establishment, policy officials and Big Tech,&#8221; says Dr McInerney, also an <a href="https://www.bbc.com/mediacentre/2023/new-generation-thinkers">AHRC/BBC New Generation Thinker</a>.</p><p>This narrative is already shaping policy: the U.S. government, for instance, has banned the sale of semiconductor chips used for artificial intelligence to China.</p><p>However, when global governance is crucial to managing the societal impact of artificial intelligence, and this &#8216;arms race&#8217; analogy could entrench international competition over collaboration, we should challenge its assumptions. Are tech companies <a href="https://www.noemamag.com/the-bumpy-road-toward-global-ai-governance/">amplifying this argument</a> to stifle global cooperation, push for less domestic regulation and develop and release new products at speed and scale?</p><p>Alongside the global politics of AI, Dr McInerney also explores the intersection between race, gender and artificial intelligence.</p><p>She recently co-edited the book <em>The <a href="https://www.amazon.co.uk/Good-Robot-Technology-Feminism-Humanities/dp/1350399957">Good Robot: why technology needs feminism</a></em> with Dr Eleanor Drage, which looks at how feminism can help us work towards &#8216;good&#8217; technology.</p><p>Through the voices of leading feminist thinkers, activists and technologists, the book demonstrates the efforts of academics and researchers such as Dr McInerney to cultivate a critical and informed public conversation about technological developments.</p><p>Given the importance of charting the right path for AI, we should heed what Dr McInerney and her colleagues say about how narratives shape policy, public opinion and the technology&#8217;s development.</p>]]></content:encoded></item><item><title><![CDATA[British AI debates: “The limits of voluntary agreements” | SZ Dossier]]></title><description><![CDATA[This 
interview was first published in the SZ Dossier on 13 June 2024, a S&#252;ddeutsche Zeitung newsletter.]]></description><link>https://www.appraisenetwork.ai/p/british-ai-debates-the-limits-of</link><guid isPermaLink="false">https://www.appraisenetwork.ai/p/british-ai-debates-the-limits-of</guid><dc:creator><![CDATA[Aidan Muller]]></dc:creator><pubDate>Thu, 13 Jun 2024 11:37:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/90161e37-b69c-4bc0-95fe-b778a75179bc_5009x3339.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This interview was first published in the SZ Dossier on 13 June 2024, a S&#252;ddeutsche Zeitung newsletter.</em></p><p><strong>Now that the European elections are over, more eyes will turn to the United Kingdom, where Prime Minister Rishi Sunak of the Tories will face his challenger Keir Starmer of the Labour Party in early July.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="5009" height="3339" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3339,&quot;width&quot;:5009,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Big Ben, London&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Big Ben, London" title="Big Ben, London" srcset="https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1486299267070-83823f5448dd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxwYXJsaWFtZW50fGVufDB8fHx8MTc2MDQ1MTI3OXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@marcin">Marcin Nowak</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>According to polls, Starmer will probably succeed Sunak, even though trouble has been brewing in the party.</p><p>And as if internal strife wasn&#8217;t enough, deepfakes are also joining in. For example, a video clip of Labour MP Wes Streeting &#8211; doctored to make it seem he called his party colleague Diane Abbott a &#8220;stupid woman&#8221; &#8211; has been doing the rounds. This didn&#8217;t actually happen, of course, but the video is circulating on X, even though the platform itself now flags it as manipulated.</p><p>&#8220;However, these remain isolated cases &#8211; not enough to demonstrate the urgency of the problem,&#8221; said Aidan Muller, co-founder of Appraise Network, a platform to promote dialogue about AI in the United Kingdom. From his point of view, it is &#8220;ultimately inevitable&#8221; that the flood of deepfakes and misinformation will lead to serious problems in election campaigns and beyond, Muller told SZ Dossier in the Tea Room of the Conrad Hotel at St. James&#8217;s Park in Westminster.</p><p>He began taking an interest in the topic seven years ago: &#8220;The trigger for this was the Brexit referendum and the election of Donald Trump in the US.&#8221; He felt at the time that society&#8217;s relationship to truth had fundamentally changed. &#8220;The assumption that facts will speak for themselves is a fallacy today.&#8221;</p><p>AI has exacerbated the situation. 
He and his co-founder have been following the AI debate in the United Kingdom &#8211; from the initial euphoria surrounding the launch of ChatGPT in late autumn 2022, to the hysteria half a year later, with open letters warning about AI as if it were the new climate change, and on to the current discussions about opportunities and risks. His organisation came to the conclusion &#8220;that the truth is probably somewhere in between,&#8221; Muller said. &#8220;We are both excited about the technology, but also cautious about the pitfalls.&#8221;</p><p>The incumbent British government is trying to channel the conversations, as Muller acknowledged approvingly. &#8220;One of the goals with these summits is the formalisation of a multi-stakeholder engagement process,&#8221; he said about the AI Seoul Summit, recently co-hosted by London and the South Korean government (as reported by SZ Dossier). The aim is to create a platform that allows governments of different countries to talk to each other, and also to technology experts and various civil society groups.</p><p>&#8220;I think the current government has tried to secure a place at the table and set the agenda,&#8221; Muller said. One could argue about how successful this has been so far, but from his point of view, it has been &#8220;reasonably successful in that it has set a really important process in motion.&#8221; There had been plenty of criticism, including of an allegedly one-sidedly positive portrayal of AI and of guest lists with too few representatives from civil society. 
But the summit had been useful &#8220;in terms of starting a discussion,&#8221; he said.</p><blockquote><p>&#8220;There is a lack of trust in companies to do the right thing, and a lack of trust in the government to regulate the area appropriately.&#8221;</p></blockquote><p>&#8212; Aidan Muller, Co-founder of The Appraise Network</p><p>Regarding AI regulation in the United Kingdom, the incumbent government has proposed &#8220;a kind of context-based, innovation-friendly approach that deliberately holds back and doesn&#8217;t make regulations,&#8221; he said. The recommendation is that there should be no new central AI regulatory authority, but that responsibility should be delegated to existing regulatory authorities in the respective sectors. According to the government, these are best placed to recognise the challenges that AI poses for a particular sector.</p><p>Then last autumn, the AI summit at Bletchley Park, which attracted worldwide interest, concluded with an agreement to work with all major AI technology companies and ensure that they adequately test their frontier models before releasing them. However, not much has happened in practice since then, Muller said. &#8220;I think this probably shows the limits of voluntary agreements.&#8221;</p><p>Some in Westminster are of the opinion that no new law is needed because you can&#8217;t do anything with AI that isn&#8217;t already punishable under existing law. But there are existing regulations that are significantly challenged by AI, &#8220;such as our relationship to intellectual property and copyright,&#8221; he said.</p><p>Because of its complexity, AI regulation is not an obvious election campaign topic. However, surveys in recent months have shown that the population is thinking about AI, and is divided into roughly equal groups of enthusiasts, sceptics and undecideds. &#8220;Among the British public, optimism about the technology has been declining,&#8221; Muller said. 
People are concerned about unemployment and the widening of inequalities. And: &#8220;There is a lack of trust in companies to do the right thing, and a lack of trust in the government to regulate the area appropriately.&#8221;</p><p>So there is some concern, and Prime Minister Sunak will point to the AI summits as a sign that the Tories are taking a leadership role on the global stage, while Labour has not yet published its manifesto, Muller said. The Labour leadership had announced in March that it wanted to publish an AI strategy &#8211; &#8220;and we&#8217;re still waiting for it,&#8221; he said. &#8220;We&#8217;re not really sure what their plans are yet.&#8221; Labour leader Starmer told the BBC on Monday that the party&#8217;s manifesto should be published today.</p><p>How far apart the Tories and Labour actually are when it comes to AI is therefore still unclear, Muller said. But to get a sense of the situation, Appraise Network has conducted surveys among British MPs about their attitudes towards AI.</p><p>&#8220;One of the biggest differences between the two parties is in their attitude towards unemployment, that is, the potential threat of unemployment as a result of AI,&#8221; Muller said. This fear tends to be more pronounced in the Labour camp. &#8220;This is a natural thing because there is a historically close relationship with the trade unions.&#8221;</p><p><em>Laurenz Gehrke is the editor of SZ Dossier.</em></p>]]></content:encoded></item></channel></rss>