Scores of UK MPs join call to regulate the most powerful AI systems


More than 100 UK MPs are calling on the government to impose binding rules on the most powerful AI systems as concerns grow that ministers are moving too slowly to create safeguards in the face of technology industry lobbying.

A former AI minister and a former defence secretary are part of a cross-party group of Westminster MPs, peers and elected members of the Scottish, Welsh and Northern Irish legislatures calling for tighter controls on frontier systems, citing fears that superintelligent AI would “compromise national and global security”.

The push for tighter regulation is being coordinated by a non-profit organisation called Control AI, whose supporters include the Skype co-founder Jaan Tallinn. It is calling on Keir Starmer to show independence from Donald Trump’s White House, which opposes regulation of AI. One of the “godfathers” of the technology, Yoshua Bengio, recently said that AI is less regulated than a sandwich.

Campaigners include the Labour peer and former defence secretary Des Browne, who said superintelligent AI would be “the most dangerous technological development since we gained the ability to wage nuclear war”. He said only international cooperation could stop “the reckless pursuit of profit that could endanger us all”.

The Conservative peer and former environment minister Zac Goldsmith said that “even though very important and senior people in AI are blowing the whistle, governments are miles behind AI companies and are leaving them to develop it without any regulation”.

The UK hosted an AI safety summit at Bletchley Park in 2023, which concluded that there was “the potential for serious, even catastrophic, harm, intentionally or unintentionally” from the most advanced AI systems. It founded the AI Safety Institute, now called the AI Security Institute, which has become an internationally respected body. However, there has been less emphasis on the summit’s call to address risks through international cooperation.

Goldsmith said the UK should “reassert its global leadership on AI safety by supporting an international agreement to freeze the development of superintelligence until we know what we are dealing with and how to control it”.

Calls for state intervention in the AI race come after one of Silicon Valley’s leading AI scientists told the Guardian that humanity has until 2030 to decide whether to take the “ultimate risk” of letting AI systems train themselves to become more powerful. Jared Kaplan, co-founder and chief scientist at the frontier AI company Anthropic, said: “We really don’t want this to be a Sputnik-like situation, where the government suddenly wakes up and says: Oh, wow, AI is a big deal.”

Labour’s programme for government, set out in July 2024, said it would legislate “to impose requirements on those working to develop the most powerful artificial intelligence models”. But no bill has been published, and the government faces pressure from the White House not to impede commercial AI development, which is mostly pioneered by US companies.

A spokesperson for the Department for Science, Innovation and Technology said: “AI is already regulated in the UK, with many existing rules already in place. We are clear on the need to ensure that the UK and its laws are prepared for the challenges and opportunities that AI brings and this position has not changed.”

The Bishop of Oxford, Steven Croft, who is supporting the Control AI campaign, called for an independent AI watchdog to investigate public sector use and to require AI companies to meet minimum testing standards before releasing new models.

“There are all kinds of risks and it seems the government has not adopted the precautionary principle,” he said. “There are significant risks at the moment: the mental health of children and adults, the environmental costs and other big risks in terms of the alignment of generalized AI and (the question of) what’s good for humanity. The government is moving away from regulation.”


Jonathan Berry, who served as Britain’s first AI minister under Rishi Sunak, said the time is coming when binding rules should be imposed on models that present existential risks. He said the rules should be global and would create tripwires so that if AI models reach a certain level of power, their makers would have to show they have been tested, designed with off switches and are capable of being retrained.

“International frontier AI safety has not progressed as quickly as we had hoped,” he said. He cited recent cases of chatbots being implicated in encouraging suicides, of people using them as therapists and of users coming to believe they are gods. “The risks now are very serious and we need to remain constantly vigilant,” he said.

Andrea Miotti, chief executive of Control AI, criticised the government’s current “timid approach” and said: “AI companies are lobbying the governments in the UK and US to prevent regulation, arguing that it is premature and will crush innovation. Some of these are the same companies that say AI could destroy humanity.”

He said the pace at which AI technology is advancing means that mandatory standards may be needed in the next year or two.

“It is very important,” he said.
