Facebook remains a central part of many people’s social lives. Beyond helping friends connect and share vacation photos, the site also serves as many people’s primary news source. However, because the platform generates revenue primarily from selling user preference data, it has a vested interest in keeping people engaged.
That drive for engagement leads to some unsettling practices. For instance, when false information is posted to the platform, it often stays up.
Critics worry that Facebook’s refusal to police the spread of misinformation is itself dangerous. The topic is especially sensitive in light of Twitter recently flagging President Donald Trump’s tweets as misleading.
Speaking on the issue, CEO Mark Zuckerberg told Andrew Ross Sorkin, “I don’t think that Facebook or internet platforms in general should be arbiters of truth. Political speech is one of the most sensitive parts in a democracy, and people should be able to see what politicians say.”
Many pundits believe that Facebook plays a role in radicalizing people. The Facebook news feed algorithm shows each user the content predicted to engage them specifically, based on their online history. If a piece of content is likely to make a user comment, share, or otherwise spend time on the platform, Facebook floats it to the top of the feed.
This can cause a feedback loop for users where they keep seeing the same types of things over and over. In essence, the feed becomes an echo chamber. Online, people with fringe beliefs can more easily infiltrate civil discussions and push things to extremes. Politics can become gamesmanship, where users are rooting for their “team” to beat the other “team.”
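The ranking-and-feedback dynamic described above can be sketched in a few lines of Python. This is a hypothetical toy model, not Facebook’s actual system: the topic-affinity weights, the scoring formula, and the data shapes are all illustrative assumptions. It shows only the general mechanism by which engagement-weighted sorting keeps surfacing the same kinds of content.

```python
def rank_feed(posts, user_history):
    """Sort posts by a naive predicted-engagement score (toy model)."""
    def engagement_score(post):
        # Weight a post's raw engagement by how often this user has
        # interacted with the same topic before. Repeated exposure to
        # high-affinity topics is what produces the echo-chamber loop.
        affinity = user_history.get(post["topic"], 0)
        return affinity * (post["comments"] + 2 * post["shares"])
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"topic": "politics", "comments": 120, "shares": 45},
    {"topic": "cooking",  "comments": 300, "shares": 10},
    {"topic": "sports",   "comments": 80,  "shares": 5},
]
# A user whose history is dominated by political content...
history = {"politics": 9, "cooking": 1, "sports": 2}

feed = rank_feed(posts, history)
print([p["topic"] for p in feed])  # political content rises to the top
```

Even though the cooking post has more raw comments, the user’s heavy political history pushes the political post to the top, and each click on it would only increase that affinity on the next pass.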
While Facebook does pay independent fact-checkers to review potentially harmful content on the site, Zuckerberg told Sorkin their job is to “catch the worst of the worst stuff.”
He continued, “The point of that program isn’t to try to parse words on, ‘Is something slightly true or false’ … There are clear lines that map to specific harms and damage that can be done where we take down the content. We try to be more on the side of giving people a voice and free expression.”
Zuckerberg went on to say that information leading to voter suppression is unacceptable. However, political ads containing misinformation remain allowed, a policy he announced in October.
Critics of the site argue that Facebook uses politically charged content to drive a wedge between groups in order to increase web traffic. That increased traffic earns Facebook more advertising revenue from clickbait like “boy scouts fund lawyer,” or whatever else pays for ad space. The process is profitable, so Facebook has little reason to change it.
Facebook’s attitude toward the matter has not been without consequences. Recently, pressure has mounted against the social media giant, with people calling on it to address the rampant misinformation and divisive content on the site.
Major advertisers like Ben & Jerry’s, Patagonia, REI and The North Face have pulled their ad spending from Facebook.
Now, the site faces a campaign called “Stop Hate for Profit,” which aims to curtail the spread of misinformation online. Civil rights activists launched the campaign in the wake of George Floyd’s death at the hands of Minneapolis police, which sparked a huge surge in the Black Lives Matter movement.
“Stop Hate for Profit” seeks to cut off Facebook’s ad revenue until the company addresses the divisive content on its site. By hitting the platform where it hurts, organizers hope to force real change.