Facebook's recent suspensions of two users who were speaking out against abuse exposed flaws in the social network's use of artificial intelligence to enforce its "Community Standards."
Last Sunday, October 15, in the wake of bombshell reports detailing allegations of sexual harassment and assault against Hollywood producer Harvey Weinstein, the actress Alyssa Milano tweeted an invitation: “If you’ve been sexually harassed or assaulted write ‘me too’ as a reply to this tweet.” The hashtag “#MeToo” immediately went viral, appearing in millions of Facebook and Twitter posts and reactions within 24 hours.
The concept is elegant (and predates Milano’s tweet): Empowering people to speak out is broadly valuable, and insofar as sexual harassment is enabled by victims’ silence and feelings of isolation, “uncovering the colossal scale of the problem is revolutionary in its own right.”
But this particular catharsis is limited in important ways, as demonstrated by an astounding article that photographer Deborah Copaken published on Medium.com.
By the time she was an undergraduate in the late 1980s, Copaken had been assaulted on repeated occasions: twice mugged at gunpoint, kicked unconscious, and, finally, attacked by a stranger who broke into her dorm room. These experiences prompted “an exercise in self-therapy” that eventually launched Copaken’s prolific career as a photographer. In response to strangers’ unwanted advances, Copaken would say, “‘No, thank you, but I would like to shoot your photo.’” She would then photograph her harasser, using her camera “as a weapon…turning hunter into prey.”
In 1988, Copaken compiled the images into a senior thesis, entitled Shooting Back.
And nearly 30 years later, she shared two of them on Twitter and Facebook, accompanied by the message “#MeToo, too many times to count. With photographic evidence, even.”
Writes Copaken on Medium:
The two photos I posted on Sunday night were unusual for the series, as they were acts of sexual harassment caught in medias res. One was of men in wolf masks who’d chased me down a street in downtown Boston. And then there was the man in Boston’s Combat Zone who’d said, “Hey, baby, wanna get it on,” and I’d said, “No, thank you, but I would like to shoot your photo,” but before I could approach him with my 28-millimeter lens, he flashed me. So I quickly shot the photo from five feet away and ran.
An hour after Copaken published her post, Facebook suspended her account for violating its “Community Standards” — specifically, for posting “content that threatens or promotes sexual violence or exploitation.” If announcing “me too” communicated brave opposition to sexual violence, announcing “me too” along with documentary evidence allegedly conveyed the exact opposite. Why?
Like other social media platforms, Facebook relies in part on artificial intelligence to filter out offensive photographs. And from an automaton’s perspective, it’s not hard to see how Copaken’s photographs would be problematic. The men depicted were threatening sexual exploitation.
Of course, that was the whole point.
[Embedded tweet: Deborah Copaken (@dcopaken), October 16, 2017]
Copaken’s post and account were restored, thanks in part to a Facebook employee who saw her post on Medium. “I work at FB and brought this to the attention of someone internally. This happened accidentally. Your content was restored,” the employee wrote in the comments thread beneath Copaken’s story.
Accidents such as this one are an entirely foreseeable consequence of making automatons the arbiters of speech. Artificial intelligence can ferret out objectionable content, but context is another matter — AI isn’t very good at telling the difference between advocating for something and exposing it.
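The failure mode is easy to see in miniature. A filter that matches content but not intent flags the victim's quotation just as readily as the abuser's original message. The sketch below is purely illustrative, assuming a hypothetical phrase-matching moderation function; it is not Facebook's actual system, which is far more sophisticated but, as these cases show, suffers the same blindness to context:

```python
# Hypothetical sketch of a context-blind content filter.
# The function name and the phrase list are illustrative assumptions,
# not any real platform's implementation.

FLAGGED_PHRASES = {"wanna get it on", "hit by a bus"}

def violates_standards(post_text: str) -> bool:
    """Flag any post containing a blocked phrase, regardless of
    whether the poster is committing the abuse or exposing it."""
    text = post_text.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

harassers_post = "Hey, baby, wanna get it on"
victims_report = 'He said "Hey, baby, wanna get it on" -- #MeToo, with evidence.'

# The filter cannot tell the abuse from its unmasking:
print(violates_standards(harassers_post))   # True
print(violates_standards(victims_report))   # True
```

A classifier like this treats the screenshot of a threat identically to the threat itself; distinguishing the two requires modeling who is speaking and why, which is exactly where current systems fall short.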
And that is especially problematic for a campaign that undermines oppression by empowering people to shine a light on bad behavior. If #MeToo trains a flashlight on everything from the incessant natter of sexual commentary to rape, social media more broadly promises a forum in which silenced people of all kinds might find each other, force confrontation, and spark improvement. But that’s only possible if we can teach our robots to distinguish abuse from its unmasking.
Consider what happened this past summer, when Ijeoma Oluo, a self-described “writer, speaker & internet yeller,” half-jokingly tweeted about her fear, as a black woman, of walking into a Cracker Barrel restaurant.
At Cracker Barrel 4 the 1st time. Looking at the sea of white folk in cowboy hats & wondering “will they let my black ass walk out of here?”
— Ijeoma Oluo (@IjeomaOluo) July 30, 2017
Her social-media accounts were inundated with violent threats and racist invective, including hateful comments about her children, all-caps screeds about her mother’s “bloodline,” and private emails. Twitter was responsive when she reported the abuse, deleting threats and locking some offending accounts. Facebook, on the other hand, did nothing — at least until Oluo began posting screenshots of the abuse. At that point, Facebook suspended her account, presumably having ascribed the offensive speech to her.
Oluo recounted the debacle in a post on Medium:
So after getting absolutely no help from facebook whatsoever, I started posting screenshots of the comments and messages I was getting…. If you send me a message saying that you hope I get hit by a bus, or pushed off the Grand Canyon, and facebook absolutely refuses to hold you accountable, the least you deserve is for people to see the hate you are spreading.
And finally, facebook decided to take action. What did they do? Did they suspend any of the people who threatened me? No. Did they take down [the] post that was sending hundreds of hate-filled commenters my way? No.
They suspended me for three days for posting screenshots of the abuse they have refused to do anything about.
This – this, after 3 days of nonstop hate and abuse – is when I finally broke down crying. See, it’s not just the hate. I write and speak about race in America because I already see this hate every day. It’s the complicity of one of the few platforms that people of color have to speak out about this hate that gets me.
People are mad because my tweet rang true. Plenty of people of color are nervous entering an entirely white room – and with good reason. Even this simple expression of discomfort was too much, and hundreds of angry white people flooded my twitter, facebook and email to try to silence me. Any time people of color, especially women of color, speak the truth — we are silenced.
And facebook is helping.
This isn’t okay. I shouldn’t have to leave facebook in order to escape racist hate. I shouldn’t have to be silent in the face of racist hate in order to be able to stay on the platform.
Facebook is failing people of color, just as they are failing many feminists and transgender people, in punishing them for speaking out about abuse. And they need to be held accountable.