
Britain's Labour faces an uphill battle as it looks to rein in AI's key players

An Internet user checks ChatGPT on his mobile phone, Suqian, Jiangsu province, China, April 26, 2023.
Future Publishing | Getty Images
  • The U.K. government this week said it would explore "appropriate legislation" for the world's most powerful artificial intelligence models.
  • But it stopped short of giving away details of an actual AI bill, which many tech industry executives and commentators had been expecting.
  • The new Labour government faces a delicate balancing act of forming rules that are strict enough while also allowing for innovation.

LONDON — Britain is set to introduce its first-ever law for artificial intelligence — but Prime Minister Keir Starmer's new Labour government faces a delicate balancing act of forming rules that are strict enough while also allowing for innovation.

In a speech delivered by King Charles III on behalf of Starmer's administration, the government said Wednesday that it would "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models." 

But the speech refrained from mentioning an actual AI bill, which many tech industry executives and commentators had been expecting. 

In the European Union, authorities have introduced a sweeping law known as the AI Act which subjects companies developing and using artificial intelligence to much tighter restrictions. 

Many tech firms — both big and small — are hopeful that the U.K. doesn't go the same way in applying rules that they deem to be too heavy-handed. 

What a U.K. AI bill could look like

It's expected that Labour will still introduce formal rules for AI, as the party laid out in its election manifesto.

Starmer's government pledged to introduce "binding regulation on the handful of companies developing the most powerful AI models," and legislation that bans sexually explicit deepfakes.

By taking aim at the most powerful AI models, Labour would impose tighter restrictions on firms such as OpenAI, Microsoft, Google and Amazon, as well as AI startups including Anthropic, Cohere and Mistral.

"The largest AI players will likely face more scrutiny than before," Matt Calkins, CEO of software firm Appian, told CNBC. 

"What we need is an enabling environment for wide-ranging innovation that is governed by a clear regulatory framework that provides fair opportunities and transparency for everyone." 

Lewis Liu, head of AI at contract management software firm Sirion, warned the government should avoid using a "broad stroke hammer approach to regulate every single use case."

Use cases such as clinical diagnosis — which involve sensitive medical data — shouldn't be thrown into the same bucket as things like enterprise software, he said.

"The U.K. has the opportunity to get this nuance right to the huge benefit of its tech sector," Liu told CNBC. However, he added he's seen "positive indications" about Labour's plans for AI so far.

Legislation on AI would mark a contrast with Starmer's predecessor. Under ex-PM Rishi Sunak, the government had opted for a light-touch approach to AI, seeking instead to apply existing rules to the technology. 

The previous Conservative government said in a February policy paper that introducing binding measures too soon could "fail to effectively address risks, quickly become out of date, or stifle innovation." 

In February, the U.K.'s new technology minister, Peter Kyle, said Labour would make it a statutory requirement for companies to share safety test data on their AI models with the government.

"We would compel by law, those test data results to be released to the government," Kyle, then shadow tech minister, said in an interview with the BBC at the time.

Sunak's government had secured agreements from tech firms to share safety testing information with the AI Safety Institute, a state-backed body testing advanced AI systems. But this was only done on a voluntary basis.

In a statement, a spokesperson for the U.K.'s Department for Science, Innovation and Technology told CNBC that the government would look to "consult on the details" before drawing up legislation for AI.

"As set out in our manifesto, supporting the AI sector's development while maintaining strong safeguards and ensuring all of the public benefit from these technological advances is a key commitment for this government," the spokesperson said.

The risk of quashing innovation

The U.K. government wants to avoid pushing too hard with AI rules to the extent that it ends up hampering innovation. Labour also flagged in its manifesto that it wants to "support diverse business models which bring innovation and new products to the market."  

Any regulation would need to be "nuanced," assigning responsibilities "accordingly," Zahra Bahrololoumi, CEO of U.K. and Ireland at Salesforce, told CNBC, adding that she welcomes the government's call for "appropriate legislation."

Matthew Houlihan, senior director of government affairs at Cisco, said any AI rules would need to be "centered on a thoughtful, risk-based approach." 

Other proposals that have already been put forward by U.K. politicians offer some insight as to what might be included in Labour's AI Bill.

British lawmaker Chris Holmes, a Conservative backbencher who sits in the upper house of Parliament, last year introduced a bill proposing to regulate AI. The bill passed its third reading in May and moved on to the lower house.

Holmes' bill stands less chance of succeeding than one put forward by the government. However, it offers some ideas for how Labour could look to shape its own AI legislation.

Holmes' bill proposes the creation of a centralized AI Authority, which would oversee enforcement of rules for the technology.

Companies would have to supply the AI Authority with third-party data and intellectual property used in the training of their models and ensure that any such data and IP is used with consent from the original source. 

This echoes, in some ways, the EU's AI Office, which is responsible for supervising the development of advanced AI models.

Another proposal from Holmes is for firms to appoint individual AI officers, who would be tasked with ensuring the safe, ethical and unbiased use of AI by businesses, as well as making sure the data used in any AI technology is unbiased.

How it could compare to rules elsewhere

Based on what Labour has promised so far, any such law will inevitably be "a very long way from the far-reaching scope of the EU AI Act," Matthew Holman, partner at law firm Cripps, told CNBC.

Holman added that, rather than requiring overbearing disclosures from AI model makers, the U.K. is more likely to find a "middle ground." For example, the government could require AI firms to share what they're working on with the AI Safety Institute in closed-door sessions, without revealing trade secrets or source code.

Tech minister Kyle said previously at London Tech Week that Labour wouldn't pass a law as stringent as the AI Act because it doesn't want to hinder innovation or deter investment from large AI developers. 

Even so, a U.K. AI law would go a step beyond the U.S., which currently has no federal AI legislation of any kind. In China, meanwhile, regulation is stricter than both the EU's rules and any legislation the U.K. is likely to put forward.

Last year, Chinese regulators finalized rules governing generative AI aimed at stamping out illegal content and boosting security protections. 

Sirion's Liu said one thing he's hoping the government doesn't do is restrict open-source AI models. "It's essential that the U.K.'s new AI regulations don't stifle open source or fall into the trap of regulatory capture," he told CNBC.

"There's a massive difference between the harm created by a large LLM from the likes of OpenAI and specific tailored open source models used by a start up to solve a specific problem."

Herman Narula, CEO of metaverse venture builder Improbable, agreed that restricting open-source AI innovation would be a bad idea. "New government action is needed but this action must focus on creating a viable world for open-source AI companies, which are necessary to prevent monopolies," Narula told CNBC.
