The British government said on Monday that it would explore making social media executives personally liable for harmful content published on their platforms, as part of a raft of new online safety proposals.
The plans, unveiled in a policy paper, which also include creating an independent regulator, aim to tackle all kinds of harmful content – from material encouraging violence and suicide to disinformation and cyberbullying.
The moves follow steps taken by Australia and Singapore to counter fake news and to press social media companies to play their part in stopping the spread of harmful content online.
British Prime Minister Theresa May warned tech companies they had “not done enough” to protect users, and that her government intended to put “a legal duty of care” on the firms “to keep people safe”.
“For too long, these companies have not done enough to protect users, especially children and young people, from harmful content,” she said in a statement. “That is not good enough, and it is time to do things differently. Online companies must start taking responsibility for their platforms, and help restore public trust in this technology.”
The new laws envisaged would apply to any company that allows users to share or discover user-generated content or interact with one another online.
That would include file-hosting sites and chat forums as well as the better-known social media platforms, messaging services and search engines.
Firms could face tough penalties for failing to meet the standards.