Sir Keir Starmer is seeking to strengthen diplomatic relations with Donald Trump’s administration by shifting the UK’s focus on artificial intelligence away from “woke” safety concerns and towards security cooperation.
Technology Secretary Peter Kyle announced on Friday that the UK AI Safety Institute, which was established just 15 months ago, will be renamed the AI Security Institute.
The body, which has a £50 million budget, will no longer focus on risks related to bias and freedom of speech, but instead on “advancing understanding of the most serious risks posed by the technology.”
Earlier this week, the UK joined the US at the AI summit in Paris in refusing to sign a joint communiqué approved by around 60 states, including France, Germany, India and China, which called for AI that is “ethical, safe and reliable.”
Officials said the recent moves on AI are part of a broader strategy at a time when the Trump administration is engaged in a trade war with China and the EU. Some believe that aligning with US priorities on AI will help the UK avoid being targeted elsewhere.
At this week’s AI summit in Paris, US Vice President JD Vance warned against “excessive” AI regulation, saying the US would build systems “free from ideological bias.” Meanwhile, Elon Musk, a close adviser to Trump, said at an event in Dubai on Thursday: “Hypothetically, if AI is designed for diversity at all costs, it could decide that there are too many men in power.”
Peter Mandelson, the UK’s new ambassador to the US, said his “signature policy” would be to promote cooperation between the two countries’ technology sectors, giving them a “technological advantage” over China.
“It would be disastrous if the West were to lose the advanced technology race to China and China were to gain technological ascendancy over the West,” Mandelson said, adding that the “backbone” of the US-UK special relationship lies in its defense, intelligence and security partnership.
The UK’s decision to align with the US on AI has been criticized by technology experts and civil society groups, who argue that it isolates the country from European allies on technology regulation while overestimating what the UK has to offer.
“The US is engaged in AI imperialism,” said Herman Narula, chief executive of a UK-based AI company. “What’s most interesting to them is access to our market. What else do they need us for?”
To present an attractive proposition to the US, the UK would need to make serious concessions on what it can offer, including laxer rules on the inputs used to train AI models and a less rigorous approach to GDPR, Narula said.
People briefed on the US decision not to sign the joint communiqué at the AI summit said it did not clearly distinguish between the use of the technology by democratic and authoritarian regimes, and pointed out that China was among the signatories.
One Labour MP described the UK’s decision not to sign the declaration as “a low-cost way to send clear geopolitical signals,” adding that he believes it is “the exact right move.”
Those close to the UK’s decision argued that the move has been over-interpreted, saying it was the result of only a limited effort by the French hosts of the summit to secure signatories.
The UK government said the declaration “did not provide sufficient practical clarity on global governance and did not adequately address more difficult questions about national security and the challenges posed by AI,” a key focus for the UK.
When the AI Safety Institute was first launched last year, then prime minister Rishi Sunak said it would explore “all the risks, from social harms such as bias and misinformation, to the most unlikely but extreme risks, such as humanity losing control of AI.”
Since then, according to people briefed on the issue, priorities have shifted, and the government has so far held back from publishing an AI safety bill while it awaits more clarity from the US administration. The legislation would turn the currently voluntary agreements for pre-release testing of models by the AISI, struck with companies including Meta, Amazon and OpenAI, into legally binding obligations.
“Safety has become associated with the censorship of social media platforms, which removed Donald Trump from major platforms,” said Gregory C. Allen, director of the Wadhwani AI Center at the Center for Strategic and International Studies.
Allen said he would not be surprised if the US renamed its own AISI in the near future. The body has so far struggled to retain staff against a backdrop of deep political uncertainty. Last week it was revealed that Elizabeth Kelly, the institute’s first director, is stepping back from her role.
Jakob Mökander, director of science and technology policy at the Tony Blair Institute, said that because the UK’s AISI is “the best funded in the world,” the US is likely to keep working with it: the US may not retain an “AI Safety Institute” of its own, but would still send its models to the UK for testing.
Lord Peter Ricketts, a former UK national security adviser and permanent secretary of the Foreign Office, expressed skepticism that pursuing cooperation on AI would prove fruitful diplomacy.
“The US AI ecosystem is so vast that the UK can only make a small contribution, and part of what we offer is our convening power,” he said. “If we are seen as siding with the US in conflict with the EU, that will certainly weaken our ability to convene and damage the reset [with the EU].”
Additional reporting by Chloe Cornish in Dubai