Facebook and TikTok approved misinformation ads about voting in the US midterms
A new report from the human rights watchdog Global Witness and the Cybersecurity for Democracy (C4D) team at New York University describes an experiment in which researchers submitted 20 advertisements containing election misinformation to Facebook, TikTok, and YouTube. The results have made headlines because the ads blatantly pushed anti-voting claims, such as that the election results would be hacked or that the outcome was pre-decided. TikTok accepted nearly all of the ads and Facebook accepted some, while YouTube proved the most effective at screening them out, showing that platforms can filter such misinformation before it reaches the masses. That standard is especially needed on TikTok, which commands an enormous and growing audience.
In response to the report, a spokesperson for Facebook-parent Meta said the tests “were based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world.” The spokesperson added: “Our ads review process has several layers of analysis and detection, both before and after an ad goes live.”
A TikTok spokesperson said the platform “is a place for authentic and entertaining content which is why we prohibit and remove election misinformation and paid political advertising from our platform. We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”
Google said it has “developed extensive measures to tackle misinformation on our platforms, including false claims about elections and voting procedures.” The company added: “We know how important it is to protect our users from this type of abuse – particularly ahead of major elections like those in the United States and Brazil – and we continue to invest in and improve our enforcement systems to better detect and remove this content.”
TikTok, whose influence in US politics, and the scrutiny it faces, have grown in recent election cycles, launched an Elections Center in August to “connect people who engage with election content to authoritative information,” including guidance on where and how to vote, and added labels to clearly identify content related to the midterm elections, according to a company blog post.
Meta said in September that its midterm plan would include removing false claims as to who can vote and how, as well as calls for violence linked to an election. But Meta stopped short of banning claims of rigged or fraudulent elections, and the company told The Washington Post those types of claims will not be removed for any content involving the 2020 election. Looking forward, Meta has banned US ads that “call into question the legitimacy of an upcoming or ongoing election,” including the midterms, according to company policy.
According to the researchers, the only ad they submitted that TikTok rejected claimed that voters had to have received a Covid-19 vaccination in order to vote. Facebook, on the other hand, accepted that submission. (Portions of this article draw on reporting by CNN’s Clare Duffy.)