The United Kingdom has announced a significant realignment of its artificial intelligence strategy, marked by a partnership with US-based AI firm Anthropic and a revised mission for its AI safety institute.
The shift signals a clear prioritization of national security concerns over broader societal risks associated with AI.
Less than two years ago, the British government established the UK AI Safety Institute (AISI), tasked with addressing a range of security risks, including the potential for AI to create chemical or biological weapons and the theoretical possibility of a superintelligent AI spiraling out of control.
The AISI also held a partial mandate to examine societal risks such as spreading misinformation and perpetuating bias.
However, on Thursday, the government recast the organization as the AI Security Institute.
While retaining a focus on certain security threats, the rebranded AISI will no longer monitor societal risks, and it appears to have dropped its focus on the possibility of AI spiraling out of control.
To underscore this change, the new AISI will feature a “criminal misuse team” that will work in conjunction with the Home Office, the UK’s security ministry.
Anthropic partnership: transforming public services with AI
Alongside the redefined security focus, the British government has entered into a partnership with Anthropic to explore the use of AI in transforming the country’s public services and accelerating scientific research.
While this is the government’s first such agreement, it is not an exclusive deal; the government said it will seek similar partnerships with other leading AI companies.
“We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents,” Anthropic CEO Dario Amodei said in a statement.
No financial terms were mentioned.
Anthropic’s Economic Index, launched this week, will also come into play here.
The index draws on anonymized conversations with Claude to infer how AI is being used across the economy, and the UK will use this information to “adapt its workforce and innovation strategies for an AI-enabled future,” the government said.
Echoes of the US: aligning with Trump’s stance?
The revised mission of the AISI appears to align with a broader shift in the UK’s approach to AI, potentially mirroring a trend in the US under President Donald Trump’s administration.
Earlier this week, the UK caused some consternation in the AI community by refusing to sign the declaration emerging from the Paris AI Action Summit.
The US also declined to sign it.
The US’s stated reasoning was a desire to avoid excessive regulation of AI, as the declaration referred to international frameworks and governance. But many saw the document’s references to inclusive AI and reducing digital divides as a guarantee that Trump’s anti-DEI administration would not sign it.
The UK’s refusal was more of a surprise; its government cited concerns about “global governance” and national security.
The US stance echoed an earlier move by Trump, who rescinded President Biden’s 2023 executive order that had provided guardrails for AI technology.
US Vice President JD Vance told the summit this week that he was not in Paris “to talk about AI safety, which was the title of the conference a couple of years ago,” but rather to talk about “AI opportunity.”
His message was heavy on avoiding being risk-averse when it comes to AI.
A shift in priorities: growth and security above all
On Thursday, UK tech secretary Peter Kyle struck a very similar note, echoing Vance’s emphasis on economic growth and security.
“The changes I’m announcing today represent the logical next step in how we approach responsible AI development—helping us to unleash AI and grow the economy,” he said.
“The main job of any government is ensuring its citizens are safe and protected, and I’m confident the expertise our Institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us,” he added.
The government stressed in its statement that the AISI “will not focus on bias or freedom of speech,” and AISI chair Ian Hogarth insisted that “the Institute’s focus from the start has been on security.”
However, alongside that security focus, the AISI has also explicitly addressed societal issues like the potential for AI to manipulate public opinion, or to reinforce societal biases when used in transport or emergency services systems.
The institute was tasked with these duties under former Prime Minister Rishi Sunak, and it even invited grant applications covering these very topics.
At the time of publication, the government had not replied to a question about who might monitor AI bias issues now that the AISI would no longer do so.
Fortune has also asked Hogarth why the AISI will no longer focus on societal risks and the potential for future AI systems to get out of control.
The post Lockstep with US? UK’s new AI strategy sparks debate over global cooperation appeared first on Invezz