LONDON (AP) — The U.K. unveiled plans on Monday to vastly increase government oversight of social media companies, with a first-of-its-kind watchdog that could fine executives or even ban companies if they fail to block content such as terrorist propaganda or images of child abuse.
As concerns mount globally over how to monitor internet material without stifling free speech, the British proposal reflects a push by some countries – particularly in Europe but also Australia and New Zealand – to give regulators more power.
The British plans would create a statutory “duty of care” for social media companies such as Facebook and Twitter to protect people who use their sites. The plan, which includes an independent regulator funded by a levy on internet companies, will be open for public comment for three months before the government publishes draft legislation.
“No one in the world has done this before, and it’s important that we get it right,” Culture Secretary Jeremy Wright told the BBC.
While the United States has largely relied on market forces to regulate content in a country where free speech is revered, governments in Europe have signaled they are willing to take on the tech companies to block harmful content and prevent extremists from using the internet to fan the flames of hatred.
Britain will consider imposing financial penalties similar to those in the European Union’s General Data Protection Regulation, which permit fines of up to 4% of a company’s annual worldwide revenue, Wright said. In extreme cases, the government may also seek the power to fine individual company directors and prevent companies from operating in the U.K.
Criticism of social media sites has grown amid concerns that extremists such as the so-called Islamic State group or far-right political groups are using them to recruit young people, that pedophiles are using the technology to groom victims, and that young people are sharing dangerous information about self-harm and suicide. Australia last week made it a crime for social media platforms not to quickly remove “abhorrent violent material.” The offense is punishable by three years in prison and a fine of 10.5 million Australian dollars ($7.5 million) or 10% of the platform’s annual turnover, whichever is larger.
After the March 15 mosque shootings that killed 50 people and wounded 50 more, New Zealand’s Privacy Commissioner wants his country to follow Australia’s lead.
European Union lawmakers are set to vote later Monday on a legislative proposal requiring internet companies to remove terrorist content within one hour of being notified by authorities, or face penalties worth up to 4% of revenue if they don’t comply.
The bill has been controversial, with some lawmakers and digital rights groups criticizing the one-hour rule. They say it places a much bigger burden on smaller internet companies than on tech giants like Facebook and Google, which have greater resources.
British Home Secretary Sajid Javid, whose department collaborated on the U.K. proposal unveiled Monday, criticized tech firms for failing to act despite repeated calls for action against harmful content.
“That is why we are forcing these firms to clean up their act once and for all,” Javid said.
Critics say Google and Facebook could end up becoming the web’s censors. Others suggested the rules could stifle innovation and strengthen the dominance of technology giants, because smaller firms won’t have the money to comply with such regulation.
“We worry that this attempt at controlling the Internet will entrench big tech players, stymie innovation, and lead to press censorship through the back door,” the London-based Adam Smith Institute, a free-market think tank, said in a statement.
As governments press to have the tech giants take on moral accountability, the challenge for the companies will be to translate that idea into the software, said Mark Skilton, a professor of practice at Warwick Business School. Politicians and technical experts need to work on the “shared problem” of providing guidance and control that is not excessively intrusive, he said.
“Issuing large fines and hitting companies with bigger legal threats is taking a 20th century bullwhip approach to a problem that requires a nuanced solution,” he said. “It needs machine learning tools to manage the 21st century problems of the internet, combined with the courage and foresight to establish independent frameworks that preserve the freedoms societies enjoy in the physical world, as well as the online one.”
Facebook’s U.K. head of public policy, Rebecca Stimson, said the goal of the new rules should be to protect society while also supporting innovation and freedom of speech.
“These are complex issues to get right and we look forward to working with the government and Parliament to ensure new regulations are effective,” she said.
Wright insisted the regulator would be expected to balance freedom of speech against the need to prevent harm.
“What we’re talking about here is user-generated content, what people put online, and companies that facilitate access to that kind of material,” he said. “So this is not about journalism. This is about an unregulated space that we need to control better to keep people safer.”