According to a new report from Avaaz, Facebook is still failing to identify and flag false and misleading posts about elections. The US-based nonprofit analyzed a cross section of electoral misinformation on Facebook ahead of the crucial Georgia Senate runoff election and found that 60% of the false and misleading posts it discovered reached thousands of voters with no fact-checking labels.

The report comes as research suggests Facebook is unable to contain the spread of misinformation, disinformation and hate speech on its platform. In January, Seattle University associate professor Caitlin Carlson released the results of an experiment in which she and a colleague collected more than 300 posts that appeared to violate Facebook’s hate speech rules. (Only about half of the posts were ultimately removed.) According to the Washington Post and others, US President Donald Trump’s allies have received few penalties under Facebook’s rules. Former employees told the publication that Trump-aligned accounts are shielded from strict enforcement due to concerns about perceptions of anti-conservative bias.

Avaaz documented and analyzed 204 Facebook posts published between November 4 and 23 that promoted 12 different false election-related claims about Georgia, each of which had been debunked by PolitiFact, Snopes, Reuters, USA Today and other independent fact-checkers. The report showed that these misinformation items, promoting false allegations about the senatorial candidates, the state’s election process, and alleged electoral fraud and intimidation, had a total of 643,406 interactions as of late November 2020, and that 112 of them carried no fact-check label.

In June, Facebook began labeling – but not fact-checking – politicians’ posts. Some studies have shown that these labels reduce people’s tendency to share misinformation, and researchers at MIT have found, among other things, that posts without fact-check labels are perceived as more credible, with many users assuming they are accurate. However, according to BuzzFeed News, Facebook’s internal data shows that labels on Trump’s misleading posts about the election did little to slow their spread.

As part of its review, Avaaz found that 61 of the 204 posts identified as misleading or incorrect carried only a generic Facebook election information label, with no fact-checking label or detailed correction. Meanwhile, 82 of the posts had fact-checking labels and 61 posts had no label at all. Avaaz said 59 of the generically labeled posts should have been fact-checked, because only two of them came from elected officials or campaigns that Facebook would consider exempt. Additionally, the nonprofit noted that a profile called Qu Ed, which shared misinformation about the Georgia elections with over 6,500 followers, appears to be inauthentic and promotes QAnon content – a violation of Facebook’s ban on QAnon content announced in early October.

The posts with a generic label had a total of 361,262 interactions, while the posts with a fact-check label had 269,971 interactions and the posts without labels had 12,173 interactions. As Avaaz notes, Facebook’s failure to act could further undermine confidence in elections and affect voter turnout and behavior ahead of the start of early voting in Georgia on December 14.

Avaaz recommends Facebook correct the record for all users exposed to any of the identified posts. This is a move the company once considered, according to the New York Times, but declined on political grounds. Research has shown that retrospective corrections can reduce belief in disinformation by nearly 50% if done quickly.

Avaaz also urges Facebook to flag all variations of the same misinformation on its platforms and to train its AI systems to detect “near duplicate” versions so that problematic pages and groups can be downgraded. However, there is a limit to what AI can do, especially when it comes to content like memes. When Facebook launched the Hateful Memes dataset, a benchmark for evaluating the performance of hate speech detection models, the most accurate algorithm achieved an accuracy of 64.7%, while humans achieved 85% accuracy on the same dataset. A New York University study published in July estimated that Facebook’s AI systems make around 300,000 content moderation errors every day, and problematic posts continue to slip through Facebook’s filters. In one Facebook group that formed in November and quickly grew to nearly 400,000 members, users calling for a nationwide recount of the 2020 U.S. presidential election traded unsubstantiated allegations of electoral fraud and state vote counts every few seconds.
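To illustrate what “near duplicate” detection means in practice, here is a minimal sketch using character shingles and Jaccard similarity, one of the simplest approaches to the problem. This is purely illustrative – Facebook’s actual systems are not public, and the post texts and threshold below are invented for the example.

```python
# Minimal near-duplicate text detection sketch (shingling + Jaccard).
# Illustrative only: the sample posts and the 0.8 threshold are assumptions.

def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-character shingles of lowercased, whitespace-normalized text."""
    t = " ".join(text.lower().split())
    return {t[i:i + k] for i in range(max(len(t) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_near_duplicate(post_a: str, post_b: str, threshold: float = 0.8) -> bool:
    """True if the two texts share enough shingles to count as near duplicates."""
    return jaccard(shingles(post_a), shingles(post_b)) >= threshold

# A reworded copy with different casing/punctuation still matches:
claim = "BREAKING: thousands of illegal ballots found in Georgia!"
variant = "Breaking: thousands of ILLEGAL ballots found in Georgia!!"
print(is_near_duplicate(claim, variant))  # → True
```

Real systems use more robust variants (MinHash, SimHash, or learned embeddings) to scale this comparison across billions of posts, but the underlying idea is the same: small edits to a debunked claim should not let it evade an existing label.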

Technology challenges aside, Avaaz argues that Facebook’s content moderation policies are inconsistent, unclear, and in some cases controversial, and that these shortcomings must be addressed. “Election officials, including the Georgia Secretary of State, are receiving threats fueled by disinformation,” the organization wrote in its report. “Defending democracy and ensuring that voters make fact-based decisions about voting and are not deceived requires urgent implementation of these solutions now.”