
As disinformation spreads during UK riots, regulators are currently powerless to take action

Riot police officers push back anti-migration protesters on Aug. 4, 2024, in Rotherham, U.K.
Christopher Furlong | Getty Images
  • Even as disinformation on social media related to stabbings in the U.K. has led to real-world violence, Ofcom, Britain's online safety regulator, finds itself unable to take effective enforcement actions.
  • Peter Kyle, the U.K.'s technology minister, held conversations with social media firms including TikTok, Meta, Google and X over their handling of misinformation being spread during the riots.
  • New duties on tech firms requiring them by law to actively police their platforms for harmful content won't fully come into force until 2025.

LONDON — Ofcom, the U.K.'s media regulator, was designated by the government last year as the body in charge of policing harmful and illegal content on the internet under strict new online safety regulations.

But even as online disinformation related to stabbings in the U.K. has led to real-world violence, Ofcom, Britain's online safety regulator, finds itself unable to take effective enforcement action.

Last week, a 17-year-old knifeman attacked several children attending a Taylor Swift-themed dance class in the English town of Southport, Merseyside.

Three girls were killed in the attack. Police subsequently identified the suspect as Axel Rudakubana.

Shortly after the attack, social media users were quick to falsely identify the perpetrator as an asylum seeker who arrived in the U.K. by boat in 2023.

On X, posts sharing the fake name of the perpetrator circulated widely and were viewed by millions.

That in turn helped spark far-right, anti-immigration protests, which have since descended into violence, with shops and mosques attacked and bricks and petrol bombs hurled.

Why can't Ofcom take action?

U.K. officials subsequently issued warnings to social media firms urging them to get tough on false information online.

Peter Kyle, the U.K.'s technology minister, held conversations with social media firms such as TikTok, Facebook parent company Meta, Google and X over their handling of misinformation being spread during the riots.

But Ofcom, the regulator tasked with acting on failures to tackle misinformation and other harmful material online, is unable at this stage to take effective action against the tech giants allowing harmful posts that incite the ongoing riots, because not all of the powers under the Online Safety Act have come into force.

New duties on social media platforms under the Online Safety Act requiring firms to actively identify, mitigate and manage the risks of harm from illegal and harmful content on their platforms have not yet taken effect.

Once the rules fully take effect, Ofcom will have the power to levy fines of as much as 10% of companies' global annual revenues for breaches, and individual senior managers could even face jail time in cases of repeat breaches.

But until that happens, the watchdog is unable to penalize firms for online safety breaches.

Under the Online Safety Act, the sending of false information intended to cause non-trivial harm is considered a punishable criminal offense. That would likely include misinformation aiming to incite violence.

How has Ofcom responded?

An Ofcom spokesperson told CNBC Wednesday that the regulator is moving quickly to implement the act so that it can be enforced as soon as possible, but new duties on tech firms requiring them by law to actively police their platforms for harmful content won't fully come into force until 2025.

Ofcom is still consulting on risk assessment guidance and codes of practice on illegal harms, which it says it needs to establish before it can effectively implement the measures of the Online Safety Act.

"We are speaking to relevant social media, gaming and messaging companies about their responsibilities as a matter of urgency," the Ofcom spokesperson said.

"Although platforms' new duties under the Online Safety Act do not come into force until the new year, they can act now — there is no need to wait for new laws to make their sites and apps safer for users."

Gill Whitehead, Ofcom's group director for online safety, echoed that statement in an open letter to social media companies Wednesday, which warned of the heightened risk of platforms being used to stir up hatred and violence amid recent acts of violence in the U.K.

"In a few months, new safety duties under the Online Safety Act will be in place, but you can act now – there is no need to wait to make your sites and apps safer for users," Whitehead said.

She added that, even though the regulator is working to ensure firms rid their platforms of illegal content, it still recognizes the "importance of protecting freedom of speech."

Ofcom says it plans to publish its final codes of practice and guidance on online harms in December 2024, after which platforms will have three months to conduct risk assessments for illegal content.

The codes will be subject to scrutiny from U.K. Parliament, and unless lawmakers object to the draft codes, the online safety duties on platforms will become enforceable shortly after that process concludes.

Provisions for protecting children from harmful content will come into force from spring 2025, while duties on the largest services will become enforceable from 2026.
