As advertisers pull away from Facebook to protest the social networking giant’s hands-off approach to misinformation and hate speech, the company is instituting a number of stronger policies to woo them back.
In a livestreamed segment of the company’s weekly all-hands meeting, CEO Mark Zuckerberg recapped some of the steps Facebook is already taking, and announced new measures to fight voter suppression and misinformation — although they amount to things that other social media platforms like Twitter have already enacted and enforced in more aggressive ways.
At the heart of the policy changes is an admission that the company will continue to allow politicians and public figures to disseminate hate speech that does, in fact, violate Facebook’s own guidelines — but it will add a label to denote that those posts are remaining on the platform because of their “newsworthy” nature.
It’s a watered-down version of the more muscular stance that Twitter has taken to limit the ability of its network to amplify hate speech or statements that incite violence.
Zuckerberg explained the policy this way: “A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.
“We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society — but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.”
The problems with this approach are legion. Ultimately, it’s another example of Facebook’s insistence that with hate speech and other types of rhetoric and propaganda, the onus of responsibility is on the user.
Zuckerberg did emphasize that threats of violence or voter suppression are not allowed to be distributed on the platform whether or not they’re deemed newsworthy, adding that “there are no exceptions for politicians in any of the policies I’m announcing here today.”
But it remains to be seen how Facebook will define the nature of those threats — and balance that against the “newsworthiness” of the statement.
The steps around election year violence supplement other efforts that the company has taken to combat the spread of misinformation around voting rights on the platform.
The new measures that Zuckerberg announced also include partnerships with local election authorities to determine the accuracy of information and identify content that is potentially dangerous. Zuckerberg also said that Facebook would ban posts that make false claims (like saying ICE agents will be checking immigration papers at polling places) or that threaten voter interference (like “My friends and I will be doing our own monitoring of the polls”).
Facebook is also going to take additional steps to restrict hate speech in advertising.
“Specifically, we’re expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others,” Zuckerberg said. “We’re also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them.”
Zuckerberg’s remarks came days after advertisers — most recently Unilever and Verizon — announced that they’re going to pull their money from Facebook as part of the #StopHateforProfit campaign organized by civil rights groups.
These are small, good steps from the head of a social network that has been recalcitrant in the face of criticism from all corners (except, until now, from the advertisers that matter most to Facebook). But they don’t do anything at all about the teeming mass of misinformation that exists in the private channels that simmer below the surface of Facebook’s public-facing messages, memes and commentary.
Jonathan Shieber