Where do we draw the line when it comes to free speech? Facebook has been repeatedly faced with this question whenever users, publishers, or advertisers share content that is considered controversial, harmful, or untrue. These days, it’s challenging to determine whether fake news should be protected by the First Amendment. In a recent article, Richard Allan, Facebook’s vice president of policy, talks about the distinction between harmful content and content that is considered free speech. He also explains how Facebook determines whether to remove, block, or demote a post to the bottom of News Feed.
Although Facebook is not a government, it has found itself at the center of many government-related topics. People often use the platform to discuss national and political subjects, as well as to question the powers that be, which is the very essence of the human right to openly express opinions and values.
“Facebook is not a government, but it is a platform for voices around the world,” Allan says in the article. “We moderate content shared by billions of people, and we do so in a way that gives free expression maximum possible range. But there are critical exceptions: we do not, for example, allow content that could physically or financially endanger people, that intimidates people through hateful language, or that aims to profit by tricking people using Facebook.”
Allan says Facebook is part of a global initiative that helps guide the social media company in establishing human rights principles for its platforms. The global initiative keeps Facebook in check to prevent smothering people’s voices. Additionally, Allan says Facebook refers to Article 19 of the International Covenant on Civil and Political Rights (ICCPR) to determine which cases require restrictions on free expression. Specifically, the ICCPR maintains that restrictions are permitted only when they are lawful and necessary to preserve the respect or reputation of others, as well as “for the protection of national security or of the public order, or of public health or morals.”
While Facebook takes cases that require restrictions of free speech very seriously, Allan says the platform leans in favor of freedom of speech. “Whether it’s a peaceful protest in the streets, an op-ed in a newspaper, or a post on social media, free expression is key to a thriving society,” says Allan. He adds, “It’s core to both who we are and why we exist.”
It should be understood that Facebook users have the right to make false statements on the platform. However, this is where things get a little cloudy in terms of what is allowed on Facebook and what isn’t. Shouldn’t untrue statements be considered fake news? Allan explains that there are some instances in which users may share content that is false without breaching any of the platform’s rules. In these cases, Facebook doesn’t block or delete the content but instead demotes or pushes it down in News Feed once fact-checkers determine that it is, in fact, untrue. Additionally, Facebook will direct users to articles containing truthful information on the same subject. This helps maintain a balance and gives everyone the opportunity to consider both sides of the story.
When Does Facebook Make Exceptions to Free Expression?
But which posts should be restricted? Facebook provides two primary categories for which it makes exceptions and restricts freedom of expression on its platform:
- Personal harm (i.e. posts that pose a credible threat of violence)
- Hate speech (i.e. posts that intimidate and exclude, creating dangerous offline implications)
As the Facebook article notes, the platform’s policies are ever-changing to keep Facebook a safe space. For example, the company recently introduced a new policy that removes posts that contribute to violence. Under this new policy, the company works with threat intelligence agencies to review all posts intended to provoke violence or physical harm. Additionally, back in October, Facebook placed all ads with potentially sensitive content under human review to prevent hate speech from being distributed on the platform. This new review process came shortly after the company discovered ads targeting “Jew haters” in one instance and learned that Russian troll accounts had distributed politically divisive ads during the 2016 US presidential election in another.
Facebook also recently announced new restrictions on ads for addiction treatment centers and bail bonds to ensure users are not tricked into paying for something when they are at their most vulnerable. The new restrictions are in place to help prevent what could potentially cause personal harm.
“Trying to piece together a framework for speech that works for everyone—and making sure we effectively enforce that framework—is challenging,” the Facebook article concludes. “But as we make clear in our Community Standards, every policy we have is grounded in three core principles: giving people a voice, keeping people safe, and treating people equitably. The frustrations we hear about our policies—outside and internally as well—come from the inevitable tension between these three principles.”
Written by Anna Hubbel, staff writer at AdvertiseMint, Facebook ad agency