Social Media Gets ‘Last Chance’ to Curb Illegal Content Under UK’s New Internet Safety Law
Will Tech Giants Finally Clean Up Their Platforms?
Social media platforms have just been handed a final warning: start tackling illegal content on your services, or face heavy financial penalties. Under the Online Safety Act (OSA), these companies must begin assessing whether their platforms expose users to illegal material by 16 March 2025. Failure to do so could result in fines of up to 10% of their global turnover, as well as other enforcement measures.
What Does the OSA Demand?
The UK regulator, Ofcom, has published its final codes of practice, detailing exactly how online firms should combat illegal content. Platforms now have three months to conduct risk assessments, identifying whether and where content relating to child sexual abuse, coercive behaviour, extreme sexual violence, or the promotion and facilitation of suicide and self-harm could appear on their services.
If the platforms don’t comply, Ofcom will step in. As Ofcom chief Dame Melanie Dawes told the BBC:
“I’m asking the industry now to get moving, and if they don’t they will be hearing from us with enforcement action from March.”
Tough Talk, but Critics Remain Unimpressed
However, not everyone is convinced the OSA goes far enough. Charities such as the Molly Rose Foundation and the NSPCC argue the new rules do not fully address the range of harms children face online, particularly suicide and self-harm content. They say regulators need to do more, and faster, to protect young users from life-threatening material.
Andy Burrows, chief executive of the Molly Rose Foundation, expressed astonishment at what he sees as a lack of targeted measures to tackle the worst kinds of self-harm content. Meanwhile, the NSPCC voiced concerns that some of the largest platforms may still not be required to remove certain egregious forms of illegal content, potentially leaving private messaging channels unchecked.
Industry in the Hot Seat
Big tech firms have been under pressure for years, rolling out safety measures to protect younger users. Many now restrict the ability of adults to message children who don’t follow them, and some have introduced new anti-sextortion features, like blocking screenshots in direct messages.
The OSA demands more. Platforms must:
- Implement hash-matching technology to detect child sexual abuse material (a simplified sketch of how this works follows this list).
- Stop suggesting adult accounts to children.
- Warn young users of the risks associated with sharing personal information.
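In broad terms, hash-matching works by computing a fingerprint (hash) of an uploaded file and comparing it against a database of hashes of known illegal material supplied by a trusted body. The minimal Python sketch below is illustrative only: the function names and example hash are hypothetical, and it uses a plain SHA-256 digest, whereas real deployments typically rely on perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding.

```python
import hashlib

# Hypothetical database of hashes of known illegal material, provided by a
# trusted organisation; the entry below is a placeholder, not real data.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def file_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()


def is_known_illegal(data: bytes) -> bool:
    """Flag an upload whose digest matches an entry in the known-hash set."""
    return file_digest(data) in KNOWN_BAD_HASHES


if __name__ == "__main__":
    # Usage sketch: check an upload before it is published on the platform.
    upload = b"example image bytes"
    if is_known_illegal(upload):
        print("Upload blocked and queued for human review.")
    else:
        print("No match against the known-hash database.")
```

The design choice worth noting is that the platform never needs to hold the illegal material itself, only the hashes, which is why this approach is the standard baseline for detecting previously identified content.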
And that’s just the start. The codes still need final approval by Parliament, but given the urgency, Ofcom expects full compliance from March.
A Watershed Moment or More Delays?
Critics claim Ofcom is moving at a “snail’s pace,” and the lack of immediate clarity on some aspects of the code—especially concerning private messages—has raised questions about how quickly meaningful change will happen.
But the Technology Secretary, Peter Kyle, insists the OSA is a “fundamental reset” in how society expects tech companies to behave. He says he will be watching closely to ensure these firms deliver on their promises.
With the OSA’s enforcement date drawing near, tech giants now face a decisive test: Will they adapt swiftly and robustly to protect users, or risk hefty penalties and further reputational damage?
