Business Model Key Issue

Implementation of Christchurch Call Said Problematic

It may be an understandable response to the New Zealand attacks, but the "Christchurch Call" has worrying aspects and could be difficult to implement, digital rights activists and think tanks said. There's no clear vision of what social media and video-sharing platforms should do, or plan to do, to counter such situations, and no effort to address tech companies' business model, they said. The Christchurch Call is a commitment by governments and tech companies to eliminate terrorist and violent extremist content online. Eighteen countries plus major tech companies such as Google, Facebook, Twitter and Microsoft back the agreement, adopted May 15. The U.S. declined to sign (see 1905150047).

Civil society and public policy groups criticized elements of the pact. It appears to be a positive example of collaborative negotiation toward a self-governing regime, wrote American Enterprise Institute's Bronwyn Howell. While governments set aspirational goals, tech companies agreed to concrete, measurable steps that will make it easier to hold them to account. That could turn the companies into agents of government, countering terrorism and violent extremism but also opening the door to more government control over the free press, Howell said May 20.

The call has good, not-so-good and ugly aspects, the Electronic Frontier Foundation said. The ugliness lies in the call's asking companies to take "transparent, specific measures" to prevent the upload of terrorist and violent extremist content and to prevent its dissemination "in a manner consistent with human rights and fundamental freedoms," while upload filters are inherently inconsistent with such freedoms, EFF said. The group also said it has "grave concerns" about how "terrorism" and "violent extremism" are defined, and by whom. Companies often use "blunt measures" to determine what constitutes terrorism, and signers Jordan and Spain have used anti-terror measures to silence speech, it said.

Faulty content moderation "inadvertently captures and censors vital content, including activism, counter-speech, satire, and even evidence of war crimes," EFF, Syrian Archive and Witness reported Monday. The companies that signed "make big promises about automated content moderation in the news, [but] they elsewhere admit that the technology is not foolproof," the groups said. Automated tools can make everything worse, since context is critical, said EFF Director for International Freedom of Expression Jillian York: "Marginalized people speaking out on tricky political and human rights issues are too often the ones who are silenced."

Given these and other issues, a key question is whether the pledge can be implemented effectively.

The call is a declaration of intent for governments, tech platforms and civil society, European Digital Rights Executive Director Claire Fernandez said in a Monday interview. New Zealand wisely decided not to regulate, but the initiative reflects a reluctance or inability to address the "elephant in the room" -- the companies' business model, she said: As long as the issue of revenue from behavioral advertising remains unaddressed, all other attempts to tackle the problem will be "useless" and impossible to implement.

The pact is "motivated by three important principles," emailed Clara Hendrickson, Brookings Institution research analyst-governance studies: (1) A coordinated, international effort to rein in violent content online will be more effective than a response led by any one nation. (2) There must be a balance between preserving freedom of expression online and protecting the public. (3) Governments and companies can work together to experiment with preserving an open, free internet while ensuring powerful technologies aren't abused. Though the accord is symbolic, signers should be evaluated on how well they keep their commitments if it's to be effective, she said.

It's unclear what best practices platforms should use in addressing terrorism, Hendrickson said. There's an important debate over what constitutes harmful content and what kind of content platforms should remove, but reining in terrorist and violent extremist content, on which the agreement exclusively focuses, "should be the least controversial matter," she said. "Actually preventing the posting and dissemination of such content is more difficult." Facebook recently announced it will bar users who post such content from its livestreaming service, which seems like a good supplement to fallible autodetection systems, she said. Yet there's little evidence that temporarily shutting down platforms, as the Sri Lankan government did after the Easter Sunday bombings, works, and such shutdowns prevent people from reaching loved ones or accessing useful information via social media, she added.

EDRi is less interested in best practices than in ensuring that companies and governments safeguard human rights, that users get notice and access to remedies, and that there's transparency about what content is removed, Fernandez said. The size and power of social media and video-sharing platforms should be questioned more closely than their technical solutions to the problem, she said.