Zeno Thinks: The AI Safety Summit – success or diplomacy?
It’s not often that Kamala Harris, Ursula von der Leyen and Elon Musk decide to congregate in Buckinghamshire, which demonstrates the importance and pull of the AI Safety Summit this week.
You’ll already be aware of the column inches AI has owned this year, often carrying as much fear as optimism and opportunity – it is very much the current technology battleground in geopolitics. So it was with cautious optimism that leaders from the US, China, EU and India descended on Bletchley Park to try to form consensus on the technology’s development. Here we look at what was truly achieved – and how it has been perceived by the media.
Five key takeaways
An actual agreement – The Bletchley Declaration on AI safety was signed by all 28 countries attending, agreeing on the urgent need to understand and collectively manage potential risks through shared collaboration and responsibility. Rishi Sunak called it “quite incredible”, and while it is high-level, it is a huge win given the rival factions at the table.
Lack of actual regulation – There’s no real cross-border regulation coming out of the event, which isn’t surprising given it lasted only two days. The UK itself, though, has been accused of contradiction: Sunak warned that AI companies shouldn’t mark their own homework, while the government simultaneously maintains that regulation isn’t yet possible given the speed of development. Robert F. Trager, director of the Oxford Martin AI Governance Initiative at Oxford University, told AI Business the Declaration is “short on details of how countries will cooperate on these issues”.
The US influence remains huge – Joe Biden announced an executive order on Monday to regulate AI, with Vice President Kamala Harris making the case at the event that US policy should serve as a model for global policy. Contrast this with the EU, which started its regulatory process four years ago, with the Commission’s Věra Jourová claiming its impending AI Act is “the first ever legislation on AI”. It will be intriguing to see how collaboration develops as global powerhouses also seek to lead. The FT’s John Thornhill is clear on the overshadowing impact the US had: “The Bletchley Park Summit is worthy, but its conclusions will be toothless compared with Biden’s executive order.”
Political win for Sunak – Just a few weeks ago it was expected that, while many major countries would be represented, it would be at cabinet-minister level. Having names such as Harris and von der Leyen flying in, alongside Google, Amazon and Microsoft, has helped Sunak ensure Britain is not a bystander in charting AI’s course. He has matched this with timely domestic investment too, with the government pledging £118m this week to help UK universities head off a future AI skills gap.
Focal point for news – The UK and US both announced that they are setting up national AI safety institutes, with leading AI companies including OpenAI and DeepMind agreeing to let governments test their latest models for national security risks before release. Much like major tech brands save their biggest announcements for trade shows or their own annual events, this inaugural gathering suggests future summits could become a hotbed of AI news for public and private sector alike.
Media coverage – the Elon effect
- As of the morning of November 3rd (the day after the Summit), 2,459 UK articles about the AI Safety Summit had been published in the previous seven days, securing 4,934 social interactions.
- The most engaged-with articles are unsurprisingly from national media, as this news leans into wider political agendas:
- Elon Musk tells Sky News AI is a 'risk' to humanity – Sky News (388 social interactions)
- Elon Musk set for talks with Rishi Sunak today after AI safety summit – Sky News (172 social interactions)
- US announces 'strongest global action yet' on AI safety – BBC News (165 social interactions)
It’s clear that Elon Musk’s presence, announced late on, has been a major driving force in media coverage around this event, whilst the US announcing its “strongest global action yet” on AI safety foreshadowed what would be discussed and agreed at the summit.
While the majority of brand coverage focused on those at the event, media opportunities were not limited to businesses in attendance. Quantexa, an AI-focused company and the only British business to have become a unicorn in the past year, featured significantly in the build-up given that status. The likes of Arup, Nephos Technologies, Profusion and ServiceNow were also able to weigh in with expert reaction and data. Needless to say, making genuine business investments and moves into AI lends weight to commentary and viewpoints.
What now?
Well, more summits. A South Korea summit is planned in six months, with a further gathering in France in a year’s time. Conversation and collaboration will continue at these touchpoints, but many are calling for constant dialogue: DeepMind co-founder Mustafa Suleyman repeated his case on Question Time, following the summit, for an IPCC-style panel involving both the public and private sectors.
Beyond this, though, the sheer amount of coverage the summit generated (as well as the rather surreal moment of Rishi Sunak and Elon Musk on stage discussing killer robots) has pushed AI – and concerns around it – further into the public consciousness than ever. And this will continue, because AI is not a passing hype cycle.
The conversation is complex and very much geopolitical. But silos have started to break down, at both a diplomatic and a business level, which is heartening to say the least. The conversation is evolving rapidly, so those wishing to join it need to evolve with it, as a lack of authenticity or credibility will only become easier to spot.