“I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos” wrote Facebook Chief Security Officer Alex Stamos on Saturday in a lengthy tweetstorm. He claims journalists misunderstand the complexity of fighting fake news and deride Facebook for thinking algorithms are neutral when the company knows they aren’t, and he encourages reporters to talk to the engineers who actually deal with these problems and their consequences.
Yet this argument minimizes many of Facebook’s troubles. The issue isn’t that Facebook doesn’t know algorithms can be biased, or that people don’t know these are tough problems. It’s that the company failed to anticipate abuses of its platform and to work harder on algorithms or human moderation processes that could have blocked fake news and fraudulent ad buys before they impacted the 2016 U.S. presidential election, rather than only now. And his tweetstorm completely glosses over the fact that Facebook will fire employees who talk to the press without authorization.
[Update: 3:30pm PT] I commend Stamos for speaking so candidly to the public about an issue where more transparency is appreciated. But at the same time, Facebook holds the information and context he says journalists, and by extension the public, lack, and the company is free to bring in reporters for the necessary briefings. I’d certainly attend a “Whiteboard” session like the ones Facebook has often held for reporters in the past on topics like News Feed sorting or privacy controls.
Stamos’ comments hold weight because he’s leading Facebook’s investigation into Russian election tampering. He was the Chief Information Security Officer at Yahoo before taking the CSO role at Facebook in mid-2015.
The sprawling response to recent backlash comes right as Facebook starts making the changes it should have implemented before the election. Today, Axios reports that Facebook just emailed advertisers to inform them that ads targeted by “politics, religion, ethnicity or social issues” will have to be manually approved before they’re sold and distributed.
And yesterday, Facebook updated an October 2nd blog post about disclosing Russian-bought election interference ads to Congress to note that “Of the more than 3,000 ads that we have shared with Congress, 5% appeared on Instagram. About $6,700 was spent on these ads”, implicating Facebook’s photo-sharing acquisition in the scandal for the first time.
Stamos’ tweetstorm was set off by Lawfare associate editor and Washington Post contributor Quinta Jurecic, who commented that Facebook’s shift towards human editors implies that saying “the algorithm is bad now, we’re going to have people do this” actually “just entrenches The Algorithm as a mythic entity beyond understanding rather than something that was designed poorly and irresponsibly and which could have been designed better.”
Here’s my tweet-by-tweet interpretation of Stamos’ perspective:
He starts by saying journalists and academics don’t get what it’s like to actually implement solutions to hard problems, even though clearly no one has the right answers yet.
Facebook’s team has supposedly been pigeonholed as naive about real-life consequences or too technical to see the human impact of its platform, but the outcomes speak for themselves about the team’s failure to proactively protect against election abuse.
Facebook gets that people code their biases into algorithms, and works to stop that. But censorship that results from overzealous algorithms hasn’t been the real problem. Algorithmic negligence of worst-case scenarios for malicious usage of Facebook products is.
Understanding of the risks of algorithms is what’s kept Facebook from over-aggressively implementing them in ways that could have led to censorship, which is responsible but doesn’t solve the urgent problem of abuse at hand.
Now Facebook’s CSO is calling journalists’ demands for better algorithms fake news, because these algorithms are hard to build without becoming a dragnet that attacks innocent content too.
Content that is totally false might be somewhat easy to spot, but the polarizing, exaggerated, opinionated content many see as “fake” is tough to train AI to catch because only nuance separates it from legitimate news, which is a valid point.
Stamos says it’s not as simple as fighting bots with algorithms because…
…Facebook would end up becoming the truth police. That might lead to criticism from conservatives if their content is targeted for removal, which is why Facebook outsourced fact-checking to third-party organizations and reportedly delayed News Feed changes to address clickbait before the election.
Even though Facebook prints money, some datasets are still too big to hire enough people to review manually, so Stamos believes algorithms are an unavoidable tool.
Sure, journalists should do more of their homework, but Facebook employees or those at other tech companies can be fired for discussing work with reporters if they don’t have PR approval.
It’s true that as journalists seek to fight for the public good, they may overstep the bounds of their knowledge. Still, Facebook’s best strategy here is likely to be more thick-skinned about criticism while making progress on the necessary work, rather than complaining about how the company is treated.
Journalists do sometimes tie everything up in a neat bow when the facts are really messier, but that doesn’t mean we’re not at the start of a cultural shift about platform responsibility in Silicon Valley.
Stamos says it’s not a lack of empathy or understanding of the non-engineering elements to blame, though Facebook’s idealistic leadership did certainly fail to anticipate how significantly its products could be abused to interfere with elections, hence all the reactive changes happening now.
Another fair point, as we often want aggressive protection against views we disagree with while fearing censorship of our own perspective when those things go hand in hand. But no one is calling for Facebook to be haphazard with the creation of these algorithms. We’re just saying it’s an urgent problem.
This is true, but so is the inverse. Facebook needed to think long and hard about how its systems could be abused if speech wasn’t controlled in any way and fake news or ads were used to sway elections. Giving everyone a voice is a double-edged sword.
Yes, people should take a holistic view of free speech and censorship, knowing both must fairly cross both sides of the aisle to have a coherent and enforceable policy.
This is a highly dramatic way of saying be careful what you wish for, as censorship of those you disagree with could bloat into censorship of those you support. But this actually positions Facebook as “the gods”. Yes, we want better protection, but no, that doesn’t mean we want overly aggressive censorship. It’s on Facebook, the platform owner, to strike this balance.
Not sure if this was meant to lighten the mood, but it made it sound like his whole tweetstorm was flippantly produced on a whim, which seems like an odd way for the world’s largest social network to discuss its most pressing scandal ever.
Overall, everyone needs to approach this discussion with more nuance. The public should know these are tough problems with potential unintended consequences for rash moves, and that Facebook is aware of the gravity now. Facebook employees should know that the public wants progress urgently, and while it might not understand all the complexities and sometimes makes its criticism personal, it’s still warranted to call for improvement.
Josh Constine