TikTok, Meta, Twitter, and YouTube All Have Different Plans for 2024 Election Misinformation

It’s never easy to separate political speech from disinformation in a democracy built on the right to free speech, but the firehose of foreign and domestic weaponized bullshit that began in 2016, reached a fever pitch in the aftermath of the 2020 presidential election, and continued throughout the pandemic at least prompted social media companies to give it a shot.

Ahead of the 2024 election, however, many of the biggest players in the social media game are quietly (or loudly) reversing course on their previous moderation policies regarding misinformation and disinformation (the former being false information spread mistakenly; the latter, maliciously). Perhaps the advertising money and engagement that election-year political speech brings are too tempting to pass up. Or maybe they’re just really committed to free expression.

Below, I’ve laid out how five of the most influential social media platforms are currently walking the tightrope between providing a platform for free speech and allowing people to spread outright lies and conspiracy theories.

What is Twitter’s political misinformation policy?

Twitter’s policies regarding speech have changed drastically over the last few years. Before the October 2022 ascension of Elon Musk to CEO, Twitter was fairly locked down when it came to mis/disinformation. It banned political ads altogether in 2019. Policies against spreading lies about Covid and vaccines led to the suspension of over 11,000 accounts during the early part of the pandemic. Controversial accounts like those of Kanye West and Donald Trump were banned even though they had millions of followers. Significantly, the service prevented the spread of links to a New York Post story about Hunter Biden’s laptop during the final weeks of the 2020 election, a move many considered a step too far.

Everything is different now. One of the first things Elon Musk did upon taking control of Twitter was to announce a policy of “general amnesty” for suspended users, overturning bans on thousands of accounts that promoted vaccine denial, general hate speech, bullying, and just about everything else. Twitter has partially reversed its 2019 ban on political advertising this year too, deciding to allow “cause-based” ads in the U.S. Musk does draw the line somewhere, though: Trump and Kanye may be free to tweet, but Alex Jones’s Twitter account remains suspended.

How does YouTube deal with political speech?

Anyone who has watched YouTube already knows the company has no problem accepting ads from politicians. But until recently, it prohibited content that called elections into question. In June, YouTube reversed its election integrity policy that pledged to prohibit “false claims that widespread fraud, errors, or glitches occurred in certain past elections to determine heads of government. Or, content that claims that the certified results of those elections were false.”

YouTube’s new policy acknowledges that removing this content curbed “some misinformation” but that it had “the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm.” Going forward, YouTube says it will “stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past U.S. Presidential elections.” Yay.

How does TikTok censor political views?

TikTok is a frequent flashpoint for controversy, with concerns over the network’s connection to China going so far as to trigger Congressional hearings. Whether Americans’ data is safe from China is an open question, but TikTok has fairly strict policies regarding mis/disinformation. The company’s “integrity and authenticity policy” prohibits “inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent. Significant harm includes physical, psychological, or societal harm, and property damage.”

To combat misinformation and disinformation of all kinds, TikTok says it employs a small army of fact-checkers from around the world; if they confirm content to be false, TikTok “may remove the video from our platform or make the video ineligible for recommendation into For You feeds.” TikTok does not allow “paid political promotion, political advertising, or fundraising by politicians and political parties (for themselves or others)” either.

What is Facebook’s policy on political ads and misinformation?

Speaking of political promotion, Meta’s Facebook, the world’s largest social media platform, is all over the place on the issue. After being identified as a primary source of foreign-funded disinformation in the run-up to the 2016 presidential election, Facebook threw up its hands and said “enough” after the polls closed in 2020. In an effort to combat “foreign interference, misinformation and voter suppression,” the company stopped accepting all political ads, an easy decision to announce after a presidential election. It has since settled on what looks like a middle-ground policy around major U.S. elections: During the 2022 midterms, Facebook imposed a restriction period for “ads about social issues, elections or politics in the U.S.” from Nov. 1 through Nov. 8, 2022 (Election Day). Whether the policy will stay in place for the 2024 elections remains to be seen.

How does Reddit moderate misleading political speech?

It’s difficult to imagine what Reddit would look like if all of the misinformation it houses were removed. Reddit has no comprehensive top-down moderation policy concerning misinformation, and that’s by design. Instead, volunteer moderators set their own rules for individual subreddits, with whatever policies they like. If a subreddit crosses some unspoken line of egregiousness, actual Reddit employees step in and “quarantine” the offensive message board, removing it from search results. A subreddit can also be banned outright, but that’s usually for violating the anti-harassment policy rather than any anti-lying-about-literally-anything policy.

For example, Reddit banned r/NoNewNormal, a Covid and anti-vaccine conspiracy subreddit, in 2021, but not for lying or for worsening an international pandemic. Instead, r/NoNewNormal was axed for “brigading” other subreddits. Reddit’s recent moves toward controlling how its unpaid moderators do their non-jobs may suggest a more heavy-handed, “we need advertising” approach is forthcoming at the site.

Reddit does allow political ads, but promises it “forbids deceptive, untrue, or misleading advertising, even among politicians.” It also requires political ad buyers to “leave comments ‘on’ for (at least) the first 24 hours of any given campaign.”