International Trade Today is a Warren News publication.
'Voluntary Censorship'?

Platforms Targeted as EU, UK Toughen Stance Against Illegal Content Online

Internet platforms and governments must raise their game against illegal content or face regulation, the European Commission said Thursday. It recommended operational measures that companies and administrations should take before it decides whether to propose legislation. The nonbinding recommendation builds on a September EC statement on tackling illegal content online and applies to all forms of illegal content. Digital rights activists, tech companies and ISPs slammed the action. The U.K., meanwhile, is also pressing platforms to do more.


Platforms "need to redouble their efforts to take illegal content off the web more quickly and efficiently," the EC said. Voluntary industry measures have achieved results, but there's "significant scope for more effective action, particularly on the most urgent issue of terrorist content," it said. It recommends: (1) Clearer "notice and action" procedures for alerting about illegal content. (2) Better tools and proactive technologies for finding and removing such content. (3) Stronger safeguards to protect fundamental rights, including human oversight and verification. (4) Special attention to small companies via cooperation and sharing best practices and technologies. (5) Closer cooperation with authorities.

All companies should remove terrorist content within one hour of being notified, the EC said. They should have measures such as automated detection to swiftly remove or disable such content and prevent it from reappearing afterward. There should be fast-track procedures for processing referrals, and EU members should report, preferably every three months, on referrals, their follow-up and what governments are doing to cooperate with companies to curb terrorist content online.

"I've always rejected the idea of blanket regulation of platforms online," said Andrus Ansip, EC vice president in charge of the digital single market. Platforms are removing much more illegal content than they were because they know it's good for business, but they need more clarity, he told journalists. The recommendation won't change the intermediary liability regime under the EU e-commerce directive, he and other commissioners said. The recommendations are a "clear signal to internet companies" that while the voluntary approach is still favored, they must do much more, said Security Union Commissioner Julian King. One way or the other, the EC intends to meet its objectives, he said.

Facebook said it shares the EC's goal to fight all forms of illegal content. "There is no place for hate speech or content that promotes violence or terrorism on Facebook," it said. The latest figures show that the platform has already made good progress toward removing various forms of illegal content, it said. Google and Twitter didn't comment.

The EC "is pushing 'voluntary' censorship to internet giants to avoid legislation that would be subject to democratic scrutiny and judicial challenge," said European Digital Rights Executive Director Joe McNamee. The one-hour turnaround for removing terrorist material "does not take due account of all actual constraints linked to content removal and will strongly incentivise hosting services providers to simply take down all reported content," said the Computer & Communications Industry Association. Forcing companies to use automated measures to remove content from across the entire internet may lead to widespread censorship, CCIA said. The European ISP Association said the increased burden on intermediaries "is not only troubling due to the lack of court oversight, it also serves to further reinforce the worrying trend of the privatisation of law enforcement online."

Separately, the European Parliament Thursday condemned normalization of hate speech sponsored by authorities, political parties and politicians. Lawmakers said they're worried about the "alarming increase" in hatred, hate speech and extremism online and offline driven by social networks and the anonymity offered by different media platforms.

U.K.

The U.K. has several ongoing initiatives aimed at fighting online terrorist content and fake news. Prime Minister Theresa May announced plans Feb. 6 to review laws to ensure "that what is illegal offline is illegal online," assigning the Law Commission the task of determining whether current measures on offensive online communications keep pace with technology. The government will also float a new social media code of practice this year that sets out its minimum expectations of social media companies, she said.

New technology said to automatically detect terrorist content on platforms is being pushed by U.K. Home Secretary Amber Rudd. Tests show the tool, developed by the Home Office and ASI Data Science, uses advanced machine learning to analyze the audio and visuals of a video to determine if it could be Islamic State (IS) propaganda, she said Feb. 13.

"This is a useful initiative which could help smaller tech companies keep IS-related material off their platforms," but there's "no magic wand that tech companies or the Government can wave to eradicate online extremist content," emailed techUK Deputy CEO Antony Walker. Terrorist groups are highly sophisticated in gaming defenses tech companies develop, he said: The tool is "only effective for IS-related content rather than extremist content generally which is a much broader and more difficult challenge." He said "the battle against online extremist[s] will not be won overnight."

Rudd's plan is "all show," emailed Ross Anderson, security engineering professor at the University of Cambridge Computer Lab. He's "extremely sceptical that Mrs Rudd's tool will stop jihadis sending objectionable videos; these are just spam after all, and spammers are adaptive."

A U.K. parliamentary committee is focused on forcing platforms to deal with fake news. The Commons Digital, Culture, Media and Sport Committee held a hearing in Washington last month with Google, Twitter, YouTube and Facebook. Asked whether panel members are now reassured U.S. social media platforms will actively take steps to address the problems raised, Chairman Damian Collins, MP, emailed the panel was "pleased to be able to speak directly to global policy leads" from the companies, but "it seemed that once again these companies aren't matching their promises with actions." Social media platform representatives told lawmakers they're taking proliferation of fake news and rogue content seriously, "but then went on to admit that they allocate very little relative resource to tackling the problem," Collins said. The inquiry will result in recommendations on solutions to tackling the problems to which the government must respond, he said.