
Meta's move away from fact checking may allow wider spread of misinformation: experts

The Instagram app is shown on an iPhone in Toronto on March 19, 2018. THE CANADIAN PRESS/Graeme Roy

TORONTO — Experts are worried the community notes program Meta will replace its current fact checking system with has several pitfalls that could allow misinformation to spread even further.

They say the Facebook, Instagram and Threads owner should rethink the move Meta founder Mark Zuckerberg announced Tuesday because the community notes model — a system that relies on users to append notes to false posts — hasn't managed to quell harmful content as well as human fact checkers.

"I find this very worrying," said Kaitlynn Mendes, a sociology professor at Western University and the Canada Research Chair in inequality and gender, in an email.

"Reducing content moderators is going to increase the amount of harmful, hateful, violent, racist, sexist, homophobic and transphobic content out there."

In announcing the move to community notes, Zuckerberg said he was guiding his company "back to our roots around free expression" and away from fact checkers that "have just been too politically biased and have destroyed more trust than they have created, especially in the U.S."

"What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it's gone too far," he said.

"I want to make sure that people can share their beliefs and experiences on our platforms."

The community notes system he sees as a solution will make its Meta debut in the U.S.

Asked what Zuckerberg's announcement means for Meta's Canadian fact checking efforts and any staff devoted to the program in the country, spokesperson Julia Perreira said the company will continue to improve the community notes model launching in the U.S. over the course of the year before its expansion to other countries.

"Building a community will take time," she said in an email. "There are no changes in other countries at this time."

The community notes system Zuckerberg plans to use is being modelled after a similar program implemented by X, formerly known as Twitter.

The trouble many have with community notes lies in how the system depends on platform users spotting potential misinformation and appending a note describing why it's wrong, said Brett Caraway, a professor of media economics at the University of Toronto.

"Numerous studies have shown that community notes has failed to identify viral misinformation and is uneven in its application," he wrote in an email.

Once users append a note to a post, others have the ability to vote on whether they agree with the note, said Richard Lachman, an associate professor at Toronto Metropolitan University's RTA School of Media.

This makes the system prone to brigading — when people band together to boost a message, in this case by voting in tandem on notes whether their view is grounded in fact or not.

"That's a definite a concern," he said.

Lachman also pointed out that community notes are "putting the wisdom of the crowd against expert knowledge."

For example, he said medical experts are better placed than the average user to weigh in on whether posts about health are truthful.

Some platforms are set up to control for this behaviour, only displaying a community note when people from multiple parts of the political spectrum, or from different fields, agree on it.

But because it takes time to garner enough votes from users, notes that set the record straight often don't appear quickly enough, Lachman said.

"Most engagements are within one day. After two days, nobody's going to read that again," Lachman said. "If it's already gone viral, you've missed the boat."

Aside from rolling out community notes, Zuckerberg said he would also "get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse."

Content filters will be focused on "high-severity violations," which he did not define, while lower-severity violations will fall to users to report before Meta takes action.

"The reality is that this is a trade-off," Zuckerberg said. "It means we're going to catch less bad stuff, but we'll also reduce the number of innocent people's posts and accounts that we accidentally take down."

The Meta policy in conjunction with a lack of legislation requiring platforms to police harmful content "basically means that social media companies can promote whatever kind of content they want with no recourse for the harms they cause," Mendes said.

"There is also no oversight or ways to regulate them," she said. "This is very worrying indeed."

This report by The Canadian Press was first published Jan. 7, 2025.

Tara Deschamps, The Canadian Press

