AI firms 'want clarity from UK over tests'

2024-02-20
| China Daily Global


[Photo/VCG]

Major global technology companies at the forefront of developing artificial intelligence have reportedly urged the United Kingdom to hurry up with safety tests it is conducting on their software, and to provide them with clarity about what the tests hope to achieve.

Sources "familiar with the process" have said the companies, having voluntarily agreed to participate in the tests, now want answers: specifically, how long the tests will take, and what will happen if the UK's newly established AI Safety Institute, or AISI, finds any faults, the Financial Times reported.

The AISI was established as part of the UK's push to become a leading global power in AI regulation, and companies including Google DeepMind, Meta, Microsoft, and OpenAI signed voluntary commitments in November at the UK's AI Safety Summit to subject their software to the agency's tests and, potentially, to make changes if the AISI found faults.

The FT said unnamed sources close to the companies have noted that the enterprises are not legally bound to change or delay product releases because of the AISI's tests, and that the companies have urged the agency to speed up its work.

Ian Hogarth, chair of the AISI, countered in a LinkedIn post that "companies agreed that governments should test their models before they are released: the AI Safety Institute is putting that into practice".

And a UK government spokesperson told the FT: "Testing of models is already under way, working closely with developers. We welcome ongoing access to the most capable AI models for pre-deployment testing — one of the key agreements companies signed up to at the AI Safety Summit."

The spokesperson said the AISI "will share findings with developers as appropriate" and that "where risks are found, we would expect them to take any relevant action ahead of launching".

The FT said the situation highlights the "limitations of relying on voluntary agreements to set the parameters of fast-paced tech development".

The UK government, meanwhile, said more on Tuesday about its long-term plans to regulate AI through "future binding requirements". It also said it would only consider the thorny issue of how copyright applies to AI breakthroughs after additional engagement with the industry.

Critics of AI technology have called for more regulation, pointing to its possible use in cyberattacks and its potential to help in developing bioweapons.

The US-based news website Politico said on Wednesday that the European Union and the United States now seem to have overtaken the UK as forerunners in the push to regulate AI technology.