From the moment I was first accused of murder and painted in the tabloids as a sex-crazed, drug-addicted psychopath, the death threats began rolling in. I had only been in prison for a few weeks. The authorities were still investigating the death of Meredith Kercher, and the police were holding me, as a suspect, in a cell where I would remain for eight months before I was officially charged and my trial began. Long before the public saw any evidence, long before I had a chance to defend myself in court, I had become a magnet for hate.
The first death threat wasn’t even for me. It was for my mom. I read that letter in my cell, and at the end of a long screed, this stranger said he knew where my mother was staying in Perugia, and that he would kill her—that was what I deserved. I immediately told the prison guards. You have to do something! My mom needs protection! Just ignore it, they said. I did, until the next threat came, and the next.
After four years in prison and eight years on trial, I was definitively acquitted by Italy’s highest court, but I remained a magnet for hate. As I tried to rebuild a semblance of a normal life back home, the threats and harassment continued. They arrived in the mail at my mom’s house, and in the comments on my blog. Amidst the kind messages from supporters, there were always the ones wishing me dead, the ones describing how it would happen: an unmarked van in broad daylight, Meredith’s name carved into my body. I tried reporting this to the FBI, but there was little they could do. As I ventured into the world of social media that had blossomed while I was locked away, I found a torrent of hate aimed in my direction.
I took Monica Lewinsky’s advice: The block button is your friend. But what do you do when there are thousands of people who irrationally hate you for something you didn’t do? What do you do when the most strident and committed of trolls will make account after account just to harass and threaten you?
Online abuse has been a steadily growing problem for the past decade. As of last year, 41% of US adults, and 64% of those aged 18-29, reported experiencing online abuse. 12% reported sustained harassment, sexual harassment, or online stalking, and 18% reported physical threats. It’s hard to imagine this level of toxicity in any physical space we frequent, like the park, the grocery store, or the airport.
The novel attributes of the online world—anonymity, immediate contact over vast distances, cultural echo chambers, virulent misinformation, among others—have created a unique set of challenges for countering abuse and encouraging prosocial behavior. Over the last few years, the countermeasures have been improving.
Last summer, Instagram improved its block feature to let users block not only the harasser, but also any future accounts that person may make. That’s not foolproof, but it’s a strong deterrent, and for someone like me, who has suffered a decade of ongoing targeted harassment from trolls who make new accounts with the sole purpose of calling me a psychopath and a killer before I block them, it’s a huge help.
And both Instagram and Twitter offer to filter potentially abusive DMs out of your sight, which means the hate might still be there, but at least you’re not seeing it in that venue. But blocking and filtering are only half the battle. For someone like me, who has already blocked hundreds if not thousands of people across my social media accounts, blocking is akin to swatting a mosquito; there’s always another one waiting to bite.
And as with mosquito control, if you’re putting on bug spray, you’re already fighting a losing battle. Prevention is far more effective: altering the environment to discourage mosquito breeding in the first place. With social media, that means changing the incentive structures that encourage casual abuse and harassment. It means trying to recapture some of the built-in features of the analog world—like personal reputation—that make this kind of abuse far more unlikely in the grocery store.
Elon Musk has said he would change the Twitter verification system to authenticate all real humans in an effort to purge bots and lessen the abusive behavior enabled by anonymity. I would welcome this change, or at least the option for anyone who desires to be verified and have their online identity tied to their actual name and analog reputation. This would allow those who have good reasons to remain anonymous to stay so, but would also allow someone like me to restrict my interactions to only those with verified accounts, which would significantly decrease my need to swat mosquitoes with the block button.
But even lifting the veil of anonymity is no guarantee that people will behave in prosocial ways. The ability to report abuse is still crucial. Blocking an abuser protects you, but it doesn’t protect the next person, or make the environment less amenable to trollish behavior in general. Reporting abuse is supposed to help root out the assholes. You get enough strikes against you, and you get suspended or banned from the platform.
I’ve taken to reporting abuse whenever I block someone for harassing me. I’ve done it hundreds of times, and it is exceedingly rare for Twitter or Instagram to actually take action. What I almost always hear from Instagram: “We’ve reviewed your report and found that the reported content does not violate our community guidelines.” And from Twitter: “We didn’t find a violation of our rules in the content you reported.”
Why is this? I can assure you it’s not because the behavior I’m reporting isn’t abusive. A big reason is that targeted harassment is personal and specific, and algorithmic blanket solutions can’t address it very well. Twitter’s reporting process is illuminating in this regard. When you report a tweet and specify that you are the target of abuse, you are asked to choose how you’re being abused. Your choices are limited: I am being “Attacked because of my identity,” or “Harassed or intimidated with violence.” You can then specify further: this person is “wishing harm upon me because of my identity,” or “spreading fear about me because of my identity,” or “encouraging others to harass me because of my identity.” These things happen to me on a daily basis because of my identity as Amanda Knox.
But when Twitter says “identity,” they don’t mean “who I am.” They mean “what categories I belong to.” When you select one of those options, you are forced to choose which identity category you are being targeted for: race, religion, sexual orientation, disability, disease, or age. If you can’t select one of those, you are forced to click “No, my identity isn’t being targeted.” This is, needless to say, quite frustrating. But more importantly, it reveals the limitations of an abuse reporting system designed around the foundational assumption that harassment is group-based. And while countering harassment based on gender, race, and so forth is surely important, the most severe forms of online abuse are persistent, specific, and targeted at individuals not for their cultural identity but for their personal identity.
In my case, it’s incredibly easy for targeted harassment to slip through the algorithmic abuse filters. I routinely get targeted with messages like “We know you did it,” and “How do you sleep at night?” Or even just the word “Meredith,” the name of my friend and roommate who was brutally murdered by a man named Rudy Guede, a name that recalls my wrongful conviction for her murder, my years in prison, and the fact that I continue to be perceived as responsible for this man’s horrific crime. Recently, a clever troll tweeted pictures at me of the house Meredith and I lived in. That’s a deep cut in the world of harassment. To the algorithm, it’s just a picture of a house, but I know, as does the troll, what that house—and crime scene—represents.
When attempting to report this kind of abuse, some social media platforms offer the ability to provide context, and I do. Repeatedly I explain: When this person says, “We know you did it,” they are effectively calling me a killer and referencing my wrongful conviction for murder. Inevitably, the response comes back: this does not violate our community guidelines.
My case is extreme. But it’s not hard to imagine how specific harassment can operate against anyone. A troll could tweet a picture of Harvey Weinstein at Asia Argento, or even just a series of film posters for Pulp Fiction, Shakespeare in Love, The English Patient (all Weinstein films). But I’m not just talking about well-known people.
I remember well how the boys in my high school speculated openly about me and the other girls—Is she an S? No, maybe she’s a T? I think she’s an F. We were mortified when we discovered they were talking about our pubic hair, whether we were shaved, trimmed, or had a forest. If teens are clever enough to demean each other in front of teachers like this, it’s all the easier to do so through social media. And it’s the specific harassment that hurts more. I once had an embarrassing menstrual moment in Algebra class. Had something like that happened to a teen today, how easy it would be for a cruel classmate to slip through the social media abuse filters with a targeted phrase like “seat stainer.”
What can we possibly do about this? It’s important that abuse reporting systems don’t get weaponized by bad actors themselves to get non-offenders booted off of platforms for benign behavior. And for that reason, the social media platforms have an incentive to ignore abuse claims that they have no way of verifying. But the targets of harassment, like me, can often predict in advance how they’ll be targeted. I know that people will directly and indirectly call me a killer, hold me responsible for my friend’s death, name her, and reference details of the case and the crime scene. If Twitter, Instagram, and other platforms empowered users to preemptively specify how they might be targeted and why, or if they learned from prior abuse reports to detect patterns, they would have good reason to believe that some later instance of specific harassment is legitimate.
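To make the idea concrete, here is a rough sketch in Python of what a user-specified “harassment profile” might look like. It is purely illustrative: the names, fields, and matching logic are my own invention, not a description of how Twitter, Instagram, or any other platform actually works.

```python
# Purely illustrative sketch of a hypothetical "harassment profile" a target
# could file in advance, so that later abuse reports matching it carry more
# weight. None of this reflects any platform's real implementation.

from dataclasses import dataclass, field


@dataclass
class HarassmentProfile:
    """Phrases a user expects to be targeted with, plus context for reviewers."""
    user: str
    flagged_phrases: set = field(default_factory=set)
    context: str = ""


def report_matches_profile(reported_text: str, profile: HarassmentProfile) -> bool:
    """True if the reported content contains a phrase the target flagged in advance."""
    text = reported_text.lower()
    return any(phrase.lower() in text for phrase in profile.flagged_phrases)


# Hypothetical example:
profile = HarassmentProfile(
    user="amandaknox",
    flagged_phrases={"we know you did it", "meredith", "killer"},
    context=(
        "Targeted over a wrongful murder conviction; in this context, "
        "references to the victim's name or the crime scene are harassment."
    ),
)

print(report_matches_profile("How do you sleep at night? We know you did it.", profile))  # True
```

Even something this simple would give a reviewer a previously stated, concrete reason to take the report seriously, instead of judging a single message in isolation.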
Twitter and Instagram are opaque about how much abuse a user must commit to be suspended or banned from a platform. The other major change that would help mitigate online harassment would be making a user’s public reputation visible to their online community. There’s a reason that health inspections at restaurants have to be posted, not just kept in some filing cabinet in the city archives. When an abuse report is confirmed, perhaps it’s not enough to give that user a warning or a temporary suspension until they delete the offending post; perhaps they should also receive a visible demerit, a drop in their reputation score.
We certainly don’t want the social media companies creating a de facto social credit system in the style of China or Black Mirror, but transparency would restore some of the analog reputational effects that make your local bar or restaurant a nontoxic place. If someone gets booted one night for being disorderly, we all see it, and we remember. There’s a mark left by your name when you harass someone in the real world. Why should Twitter be any different?
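As for what a visible demerit might look like in practice, here is another minimal sketch, again in Python and again purely hypothetical: confirmed abuse findings accumulate on a publicly viewable record, the digital equivalent of a posted health inspection.

```python
# Purely hypothetical sketch of a public "demerit ledger": confirmed abuse
# reports leave a visible mark next to an account, like a posted health
# inspection. Not any platform's actual design.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Demerit:
    reason: str
    issued_at: datetime


@dataclass
class PublicReputation:
    handle: str
    demerits: list = field(default_factory=list)

    def add_demerit(self, reason: str) -> None:
        """Record a confirmed abuse finding on the public record."""
        self.demerits.append(Demerit(reason, datetime.now(timezone.utc)))

    def badge(self) -> str:
        """What other users would see next to this account's name."""
        n = len(self.demerits)
        return "no confirmed abuse findings" if n == 0 else f"{n} confirmed abuse finding(s)"


# Hypothetical example:
rep = PublicReputation(handle="@example_troll")
rep.add_demerit("Targeted harassment (report confirmed by review)")
print(rep.badge())  # -> "1 confirmed abuse finding(s)"
```

The point isn’t the code; it’s that the record is public, so the mark follows the account the way a reputation follows a person in the analog world.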