Facebook accidentally set a “hate speech” button live on its platform for a short period of time, a company spokesperson confirmed in a statement Tuesday. The button asked users: “Does this contain hate speech?”
The spokesperson said the company was conducting an “internal test” in an attempt to understand “different types of speech, including speech we thought would not be hate.” They said a bug caused the button to launch publicly but it has since been disabled.
Facebook Vice President Guy Rosen explained on Twitter that the button was shown on posts regardless of their content but was “reverted” within 20 minutes.
Facebook has faced criticism for what critics say is a lack of action when it comes to curbing hate speech in developing countries. A recent report from The New York Times says Sri Lanka has been hit particularly hard because rumors spread on Facebook often lead to real-world violence. The Times spoke with advocacy groups who flag alarming content to Facebook, but they say not much is being done.
The social media giant is holding its F8 Conference on Tuesday where CEO Mark Zuckerberg said he plans to share how his company is working to “keep people safe.”
Facebook has struggled for years to figure out what is and isn’t hate speech on its platform. On Tuesday, a “bug” revealed that Facebook might be thinking about crowd-sourcing the question.
At the bottom of each post on my News Feed on Tuesday morning was the same question: “Does this post contain hate speech?” It appeared on everything from news articles to personal updates to a picture of my cat.
The question appeared – and disappeared – on Tuesday morning, as Facebook users began commenting with amusement and alarm at the variety of posts that Facebook wanted to know about.
“This was an internal test we were working on to understand different types of speech, including speech we thought would not be hate. A bug caused it to launch publicly. It’s been disabled,” a Facebook spokesperson said on Tuesday.
As for why Facebook might be testing a feature like this, a little bit of context is needed.
Just last week, Facebook finally released the guidelines it uses internally to enforce its own rules on the platform. That’s important, in part, because Facebook has long struggled to moderate content consistently, or account for the context of a post.
For instance: Facebook once removed Nick Ut’s iconic Vietnam War photograph, claiming it violated the site’s policy on nudity. The platform defended, and then reversed, the decision after the photograph’s removal outraged basically the entire country of Norway. And minority groups have long said that they believe Facebook’s rules and enforcement unfairly punish those who try to call out hate speech and racism.
Here is how Facebook currently defines hate speech, in case you were curious:
“A direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”
According to the newly published guidelines, moderators should allow content calling out hateful speech that would otherwise violate Facebook’s prohibition against it, “but we expect people to clearly indicate their intent” when doing so. “Where the intention is unclear, we may remove the content.”