Getting the facts straight on online misinformation
The growth in misleading rhetoric from US congressional candidates on topics such as election integrity has put renewed pressure on social media platforms ahead of November’s vote. And the perception that tech companies are doing little to fight misinformation raises questions about their democratic obligations and poses commercial risks. But, perhaps surprisingly, recent initiatives suggest that platforms may be able to channel partisan motivations to democratise moderation.
One explanation for platforms’ seemingly tepid response is the conflicting pressure companies face from critics. Seven out of 10 US adults — and most experts — see misinformation as a “major problem” and believe internet companies should do more to curb its spread. Yet prominent Republican politicians have called moderation “censorship”, and threatened to pass legislation curbing the ability of platforms to self-regulate. Regulation poses serious challenges for the business model of social media companies, as does the loss of users who are disillusioned by the sense that either misinformation or political bias is running rampant.
How can social media companies thread the needle of engaging in meaningful moderation while escaping accusations of partisan bias and censorship? One potential solution that platforms have begun to test is to democratise moderation through crowdsourced fact-checking. Instead of relying solely on professional fact-checkers and artificial intelligence algorithms, they are turning to their users to help pick up the slack.
But why should anyone trust the crowd to evaluate content in a reasonable manner? Research led by my colleague Jennifer Allen sheds light on when crowdsourced evaluations might be a good solution — and when they might not.
First, the good news. One scenario we studied was when laypeople are randomly assigned to rate specific content, and their judgments are combined. Our research has found that the average judgment of a small, politically balanced crowd of laypeople matches the accuracy ratings given by experts about as closely as the experts match one another.
This might seem surprising, since the judgments of any individual layperson are not very reliable. But more than a century of research on the “wisdom of crowds” has shown how combining the responses of many non-experts can match or exceed expert judgments. Such a strategy has been employed, for example, by Facebook in its Community Review, which hired contractors without specific training to scale fact-checking.
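To make the mechanism concrete, here is a minimal sketch of that kind of aggregation. It is not the study's analysis code, and every rating in it is invented for illustration: it simply averages accuracy judgments from an equal number of Democrats and Republicans per headline and checks how closely the balanced crowd tracks a set of hypothetical expert scores.

```python
# Illustrative sketch only -- not the researchers' actual analysis.
# Idea: average ratings from a small, politically balanced crowd and
# compare the result with expert ratings. All numbers are made up.

from statistics import mean, correlation  # correlation needs Python 3.10+

# Hypothetical 1-7 accuracy ratings for five headlines.
expert_ratings = [6.5, 2.0, 5.5, 1.5, 4.0]

# Each headline is rated by three Democrats and three Republicans.
crowd_ratings = [
    {"dem": [7, 6, 6], "rep": [6, 5, 7]},
    {"dem": [2, 1, 3], "rep": [2, 2, 1]},
    {"dem": [6, 5, 6], "rep": [5, 4, 6]},
    {"dem": [1, 2, 1], "rep": [2, 1, 2]},
    {"dem": [5, 4, 4], "rep": [3, 4, 5]},
]

def balanced_average(ratings):
    """Average the Democrat and Republican means so neither side dominates."""
    return (mean(ratings["dem"]) + mean(ratings["rep"])) / 2

crowd_scores = [balanced_average(r) for r in crowd_ratings]

print("crowd scores:", [round(s, 2) for s in crowd_scores])
print("correlation with experts:",
      round(correlation(crowd_scores, expert_ratings), 2))
```

Even with only six raters per headline, the averaged crowd scores line up closely with the expert scores in this toy example, which is the "wisdom of crowds" effect the research relies on.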
However, results are more mixed when users can fact-check whatever content they choose. In early 2021, Twitter released a crowdsourced moderation programme called Birdwatch, in which regular users can flag tweets as misleading, and write free-response fact-checks that “add context”. Other members of the Birdwatch community can upvote or downvote these notes, to provide feedback about their quality. After aggregating the votes, Twitter highlights the most “helpful” notes and shows them to other Birdwatch users and beyond.
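As a rough illustration of that last step, the sketch below aggregates helpful/not-helpful votes on notes and surfaces those that clear a minimum vote count and helpfulness ratio. The thresholds and data are invented, and Twitter's real scoring is more involved than this; the point is only to show what "aggregating the votes" and "highlighting the most helpful notes" can mean in practice.

```python
# Illustrative sketch only -- a simplified stand-in for Birdwatch-style
# note ranking, not Twitter's actual algorithm. Thresholds are arbitrary.

from dataclasses import dataclass

@dataclass
class Note:
    note_id: str
    text: str
    helpful_votes: int = 0
    not_helpful_votes: int = 0

    @property
    def helpfulness(self) -> float:
        total = self.helpful_votes + self.not_helpful_votes
        return self.helpful_votes / total if total else 0.0

def top_notes(notes, min_votes=5, min_ratio=0.8):
    """Return notes with enough votes and a high enough helpfulness ratio."""
    eligible = [
        n for n in notes
        if n.helpful_votes + n.not_helpful_votes >= min_votes
        and n.helpfulness >= min_ratio
    ]
    return sorted(eligible, key=lambda n: n.helpfulness, reverse=True)

notes = [
    Note("n1", "Adds a link to the official vote tally.", 17, 2),
    Note("n2", "Opinion with no sources.", 3, 9),
    Note("n3", "Cites the study the tweet misquotes.", 8, 1),
]

for note in top_notes(notes):
    print(f"{note.note_id}: helpfulness {note.helpfulness:.2f} -> shown as context")
```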