Platform Content Moderation Debated at TPI's Aspen Forum

ASPEN, Colo. -- The level of content moderation that online platforms exercise -- and how much leeway they should have to moderate -- was the subject of dispute in a panel Tuesday at the Technology Policy Institute (TPI) Aspen Forum. Odds are good the U.S. Supreme Court will take up NetChoice's challenge of the Florida and Texas social media content moderation laws in its coming term, since there's a circuit split on the issue, said Ashkhen Kazaryan, Stand Together senior fellow. Laura Bisesto, Nextdoor global head-policy, privacy and compliance, said she hopes the Texas and Florida laws are enjoined, but a state-by-state approach to legislating moderation rules is itself a problem, and Congress should take up the issue instead. Nextdoor is a NetChoice member.

Companies should be able to decide what moderation approach they want to take, Bisesto said. Disagreeing, Facebook Oversight Board member Julie Owono said rules and principles protecting users' safety should be universal. Forthcoming national elections in the U.S. and India will likely reignite debate over content moderation, said Owono, who's also Internet Without Borders executive director.

Stanford health policy professor Jay Bhattacharya, a plaintiff in Missouri v. Biden, a suit before the 5th U.S. Circuit Court of Appeals over federal agencies' pushing online platforms to remove some pandemic-related content, said discovery in the case revealed a "widespread censorship effort" by the federal government aimed at his advocacy against COVID-19 lockdowns. "Huge numbers of people now distrust public health" because of collusion between government and Big Tech, he said.

Objective policies for tackling misinformation, such as labeling or removing content, aren't possible, Bhattacharya said. The idea that platforms can solve scientific debates and controversies as they're happening "is insane," he said. Owono said transparency is the most important part of a moderation policy. One might disagree with a moderation decision, but making public details, such as who was consulted in the fact-checking, can restore trust, she said.

Nextdoor's content moderation policy is rooted in local communities and aims to reflect the standards of those communities, which might differ from place to place, Bisesto said. One universal standard the platform enforces is that it doesn't allow discussions of national issues, as those aren't locally focused and "typically turn uncivil very quickly," she said.

The challenge generative AI poses to Google's content moderation efforts won't be one of policy but of detection of potentially harmful content, said David Graff, Google vice president-trust and safety. A lot of energy and effort is going into detecting whether content, such as fabricated photos or videos, came from generative AI, he said.

Scale is a must-have for effective content moderation, said Graff and Monika Bickert, Meta vice president-content policy. Larger companies have the resources to develop tools and operate across multiple languages, and thus "have to bring along the smaller companies," Bickert said. Giving smaller operators technical tools and help with languages lets them take a nuanced approach, which can protect expression, since smaller operators can be prone to acquiescing to a complaint and removing content without any review, she said. Some 40,000 people at Meta work on trust and safety in some way, she said.