AI is no 'Oppenheimer moment'
This was first published in PR Week
In recent months, we have witnessed both breathless endorsements and panic-stricken headlines about the impact of artificial intelligence. If we believe what we read, it can both save the world and kill us all.
But public affairs and PR professionals should avoid such hyperbole, lest it prompt knee-jerk policymaking or a wave of disillusionment.
At one end of the spectrum, Alphabet CEO Sundar Pichai has said for years that AI will be “more profound” for humanity than fire and electricity.
At the other end, Oppenheimer director Christopher Nolan said just last month that there were “very strong parallels” between the physicist and the AI experts calling for a slowdown in the technology’s development.
And he is not alone. Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, compared Meta making its foundation model open source to “giving people a template to build a nuclear bomb”.
Such comparisons are problematic, however, and they highlight the risk of framing nuanced debates, such as the one over AI and its regulation, around extreme analogies.
Let’s take the atomic bomb example. While Nolan and others make a persuasive case, recent AI advances are not equivalent to the creation of the nuclear bomb, as Hodan Omaar of the Center for Data Innovation has argued.
First, deploying a nuclear bomb has only one outcome – widespread destruction. The potential outcomes of AI, on the other hand, are not so binary, argues Omaar. AI can turbocharge productivity and help prevent and treat disease, but it can also entrench bias and spread deepfakes and misinformation.
Second, AI is currently limited in scope. Artificial general intelligence, or “superintelligence”, does not exist, and expert opinion differs widely on when it might.
There are several lessons for public affairs and communications professionals.
If our comms continue to hype AI, the bubble will balloon and inevitably pop. We’ll enter another AI winter, in which interest in the technology retreats.
If the analogies we use to communicate AI continue to focus on speculative existential threats, we risk framing the technology solely as a threat. The result will be knee-jerk policymaking governed by fear.
The potential benefits will be squashed, and some uses of the technology may be prohibited outright.
Such inflated communication also ignores the benefits and risks of AI in the here and now. What does the use of AI facial recognition technology by the police mean for bias and privacy? How can AI help put patients at the centre of the NHS and improve outcomes?
So, instead, we need greater nuance in our communications and in the AI debate more widely.
We must discuss the risks and benefits and consider all possibilities. And we must do that calmly and rationally.
If we frame AI as Satan or Saviour, we risk losing sight of both the opportunities and the challenges it presents.
Ultimately, AI is another technology, albeit a powerful one, that can benefit organisations and society more broadly when used in the right way.
Our communications should reflect that.