Researchers have already tested YouTube’s algorithms for political bias

Bias check —

More moderation associated with more hate speech and misinformation, not politics.

Zain Humayun


In August 2018, President Donald Trump claimed that social media was "totally discriminating against Republican/Conservative voices." Not much was new about this: for years, conservatives have accused tech companies of political bias. Just last July, Senator Ted Cruz (R-Texas) asked the FTC to investigate the content moderation policies of tech companies like Google. A day after Google's vice president insisted that YouTube was apolitical, Cruz claimed that political bias on YouTube was "massive."

But the data doesn't back Cruz up—and it's been available for a while. While the actual policies and procedures for moderating content are often opaque, it is possible to look at the outcomes of moderation and check whether they show any sign of bias. And, last year, computer scientists decided to do exactly that.

Moderation

Motivated by the long-running argument in Washington, DC, computer scientists at Northeastern University decided to investigate political bias in YouTube's comment moderation. The team analyzed 84,068 comments on 258 YouTube videos. At first glance, the team found that comments on right-leaning videos seemed more heavily moderated than those on left-leaning ones. But when the researchers also accounted for factors such as the prevalence of hate speech and misinformation, they found no differences between comment moderation on right- and left-leaning videos.

"There is no political censorship," said Christo Wilson, one of the co-authors and associate professor at Northeastern University. "In fact, YouTube appears to just be enforcing their policies against hate speech, which is what they say they're doing." Wilson's collaborators on the paper were graduate students Shan Jiang and Ronald Robertson.

To check for political bias in the way comments were moderated, the team had to know whether a video was right- or left-leaning, whether it contained misinformation or hate speech, and which of its comments were moderated over time.

From the fact-checking websites Snopes and PolitiFact, the scientists obtained a set of YouTube videos that had been labeled true or false. Then, by scanning the comments on those videos twice, six months apart, they could tell which comments had been taken down in the interim. They also used natural language processing to identify hate speech in the comments.
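As a rough sketch of what that two-pass comparison looks like (the data structures and names here are assumptions for illustration, not the researchers' actual pipeline), a comment present in the first crawl but missing from the second is flagged as potentially moderated:

```python
# Illustrative sketch only, not the researchers' code: a comment that appears
# in the first crawl but not in the second is treated as potentially
# moderated. (It may also have been deleted by its author, a limitation any
# snapshot-based approach shares.)

def find_removed_comments(first_crawl: dict, second_crawl: dict) -> dict:
    """Each argument maps a comment ID to its text; returns comments
    present in the first crawl that are missing from the second."""
    return {cid: text for cid, text in first_crawl.items()
            if cid not in second_crawl}


# Toy example:
january = {"c1": "Great video!", "c2": "a hateful remark", "c3": "I disagree."}
july = {"c1": "Great video!", "c3": "I disagree."}
print(find_removed_comments(january, july))  # {'c2': 'a hateful remark'}
```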

To assign left- or right-leaning scores to the videos, the team made use of an unrelated set of voter records. They checked the voters' Twitter profiles to see which videos were shared by Democrats and which by Republicans, and they assigned partisanship scores accordingly.
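One simple way to picture that kind of scoring (a hypothetical formula; the paper's actual measure may differ) is as the balance of shares between the two groups, running from -1 for exclusively Democratic sharing to +1 for exclusively Republican sharing:

```python
# Hypothetical scoring function, for illustration only: -1.0 means shared
# exclusively by Democrats, +1.0 exclusively by Republicans, 0.0 evenly.

def partisanship_score(dem_shares: int, rep_shares: int) -> float:
    total = dem_shares + rep_shares
    if total == 0:
        raise ValueError("video was not shared by any matched voter")
    return (rep_shares - dem_shares) / total


print(partisanship_score(dem_shares=30, rep_shares=70))  # 0.4, right-leaning
```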

Controls matter

The raw numbers "would seem to suggest that there is this sort of imbalance in terms of how the moderation is happening," Wilson said. "But then when you dig a little deeper, if you control for other factors like the presence of hate speech and misinformation, all of a sudden, that effect goes away, and there's an equal amount of moderation going on in the left and the right."

Kristina Lerman, a computer scientist at the University of Southern California, acknowledged that studies of bias were difficult because the same results could be caused by different factors, known in statistics as confounding variables. Right-leaning videos may simply have attracted stricter comment moderation because they got more dislikes, contained erroneous information, or drew comments containing hate speech. Lerman said that Wilson's team had factored these alternative possibilities into their analysis using a statistical method known as propensity score matching and that their analysis looked "sound."
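To see what propensity score matching does in this setting, here is a toy sketch on synthetic data (not the paper's covariates, data, or code): each video's probability of being right-leaning is modeled from the confounders, each right-leaning video is paired with the left-leaning video whose probability is closest, and moderation rates are then compared within those pairs.

```python
# Toy propensity score matching on synthetic data (illustration only; the
# paper's covariates, data, and implementation differ).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
confounders = np.column_stack([
    rng.random(n),           # e.g., rate of hate speech in a video's comments
    rng.integers(0, 2, n),   # e.g., misinformation label from fact-checkers
])
right_leaning = rng.integers(0, 2, n)  # "treatment" flag: 1 = right-leaning
# Synthetic outcome: moderation depends on the confounders, not on lean.
moderation_rate = (0.1 + 0.5 * confounders[:, 0] + 0.2 * confounders[:, 1]
                   + rng.normal(0, 0.05, n))

# 1. Model each video's propensity to be right-leaning from the confounders.
model = LogisticRegression().fit(confounders, right_leaning)
propensity = model.predict_proba(confounders)[:, 1]

# 2. Pair each right-leaning video with the left-leaning video whose
#    propensity score is closest, then compare moderation within pairs.
treated = np.where(right_leaning == 1)[0]
control = np.where(right_leaning == 0)[0]
gaps = []
for i in treated:
    j = control[np.argmin(np.abs(propensity[control] - propensity[i]))]
    gaps.append(moderation_rate[i] - moderation_rate[j])

# Close to zero here, because lean does not affect the synthetic outcome.
print(f"Moderation gap after matching: {np.mean(gaps):+.3f}")
```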

Kevin Munger, a political scientist at Penn State University, said that, although such a study was important, it only represented a "snapshot." Munger said that it would be "much more useful" if the analysis could be repeated over a longer period of time.

In the paper, the authors acknowledged that their findings couldn't be generalized over time because "platform moderation policies are notoriously fickle." Wilson added that their findings couldn't be generalized to other platforms. "The big caveat here is we're just looking at YouTube," he said. "It would be great if there was more work on Facebook, and Instagram, and Snapchat, and whatever other platforms the kids are using these days."

Wilson also said that social media platforms were caught in a "fatal embrace" and that every decision they made to censor or allow content was bound to draw criticism from the other side of the political spectrum.

"We're so heavily polarized now—maybe no one will ever be happy," he said with a laugh.
