
As the U.S. Department of Justice’s special counsel Robert Mueller and both houses of Congress continue to investigate Russian meddling in the 2016 election, including Russia’s use of social-media platforms such as Facebook to influence America’s internal politics, the demands on those platforms to police themselves are growing louder.
The debate over how to self-police blew up recently on — where else? — Twitter, with Quinta Jurecic, a research analyst at the Brookings Institution and an associate editor of the think tank’s Lawfare blog, crossing swords with Alex Stamos, chief security officer for Facebook, who warned critics of unintended Orwellianism, ominously quoting Oscar Wilde: “When the gods wish to punish us they answer our prayers.”
Jurecic initiated the discussion by retweeting an Axios.com article about Facebook’s new ad-review policy that quotes an email in which Facebook founder and CEO Mark Zuckerberg warned advertisers that ads pertaining to “politics, religion, ethnicity or social issues” would be subjected to “manual review.”
lately have been wondering whether FB's insistence on moving toward human-reviewed ads is a red herring https://t.co/6t9UgBDNtQ
— Take Scare Clause ? (@qjurecic) October 7, 2017
Jurecic suggested that human review, as opposed to electronic vetting via algorithm, might be a “red herring,” deeming it the “flipside of treating The Algorithm as a holy, neutral god (which got us into this mess).”
In follow-up tweets, Jurecic opined that “algorithms are not neutral, they are designed” and asked rhetorically whether Facebook could “design an algorithm to review these ads more quickly than humans?” She further speculated that the problem might lie not with the algorithm (the arbiter of all things internet) but with its maker, suggesting that Facebook’s algorithms might have been “designed poorly and irresponsibly,” though she admitted, “I am open to being told I’m wrong about this!”
rather than something that was designed poorly and irresponsibly and which could have been designed better
— Take Scare Clause ? (@qjurecic) October 7, 2017
Stamos, who worked as Yahoo’s chief information security officer before joining Facebook in 2015, was more than up to the challenge, responding to Jurecic in an extensive and thought-provoking thread.
I appreciate Quinta's work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV. https://t.co/CWulZrFaso
— Alex Stamos (@alexstamos) October 7, 2017
“I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos,” Stamos tweeted, adding, “[L]ots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.”
(Indeed, studies have noted the difficulty in defining and identifying so-called fake news, and distinguishing it from reliable information and/or opinion.)
“[I]f you don’t worry about becoming the Ministry of Truth with ML [machine learning] systems trained on your personal biases, then it’s easy!” he chided a little further down, urging critics to consider the “downside” of ML systems that rely on “ideologically biased training data…. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful,” he added.
Likewise if your call for data to be protected from governments is based upon who the person being protected is.
— Alex Stamos (@alexstamos) October 7, 2017
Many of the responses to Stamos’s tweetage were savage and cynical, referencing Zuckerberg’s comments after the 2016 election, wherein the tech mogul dismissed as risible the charge that Facebook had been gamed by Russian bots.
Zuckerberg ate his words after Stamos issued a statement in September revealing that Facebook had indeed published thousands of ads associated with inauthentic accounts that subsequent analysis indicated “were affiliated with one another and likely operated out of Russia.”
Though Facebook has rightly taken a credibility hit, the fact is that to the extent the Russians may have interfered with a U.S. election, they did so by weaponizing the gullibility of the social network’s users, exploiting good old-fashioned confirmation bias and a collective decline in our critical-thinking skills.
One of the more salient replies to Stamos came from an individual who posted a link to a recent New York Times op-ed by Nina Jankowicz, a fellow at the Woodrow Wilson Center’s Kennan Institute.
Jankowicz argues that the best defense against Russian disinformation is an offense girded by training the American public in “critical reading and analysis skills for the digital age” and an investment in the Fourth Estate “to ensure that it is driven by truth, not clicks.”
Otherwise, she concludes, Americans will continue to be easy marks for online confidence men who exploit the “weaknesses of our own making.”
Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN
— Alex Stamos (@alexstamos) October 7, 2017
Click here to follow Alex Stamos on Twitter, here to follow Quinta Jurecic, and here to read Stamos’s extensive Twitter thread about policing content online.