I’m really tired of Twitter, and of Elon Musk, and of news cycles that have anything to do with either of them. Like Casey Newton, I believe Meta’s Threads probably has the staying power to displace Twitter in the long run, so as I sit down to write about a recent Twitter policy blunder, I’m asking myself why I still care what the bird site or its executives have to say about anything at all. The conundrum is that when Twitter lies or willfully obfuscates or gaslights, it still matters. It still affects the way its users behave on (and, I’d wager, off) the platform; it still shapes the way other tech moguls behave (see, for instance, the appearance of paid-for verification on Instagram and Facebook); and it becomes part of the global conversation about the topic in question.
So when Twitter CEO Linda Yaccarino—the new fall woman for Musk’s blunders, hired to captain a sinking ship—makes absolutely wild, misleading, and gaslighting assertions about Twitter’s “progress on reducing the spread of hate speech,” as she did this week, it matters. She was responding to a Bloomberg article alleging that “Twitter’s Surge in Harmful Content [is] a Barrier to Advertiser Return.” Citing research from three reputable organizations, Bloomberg laid out how hate speech, reports of harassment and extremist content, and COVID-19 misinformation have all increased since Musk bought the platform last fall. Yaccarino and Twitter’s brand accounts posted a few screeds in response, attempting to call the reporting into question with vague generalities rather than engaging with its substance.
As someone who both studies online harms and has been on the receiving end of tens of thousands of hateful tweets, including those that reveal private information about me and my family, I have some perspective that might be helpful to Yaccarino as she feverishly attempts to bail out the cold water steadily rising around her.
1. It’s easy to make assertions about enforcement going on behind the scenes, but until you give researchers and reporters access to up-to-date data, nobody’s going to believe you. Yaccarino asserts that “more than 99% of content users and advertisers see on Twitter is healthy.” She also writes that “each step of the way, Twitter has been more transparent about this work than other platforms.” This is categorically false. She doubles down: “groups outside of Twitter have validated our impact,” she claims. Sure, some groups have validated it, but only those who are not remotely critical of the platform.
And that’s the key, really: we have no way of verifying Yaccarino’s claims, because this spring Twitter shut off researchers’ free access to the platform’s application programming interface (API)—the entryway for researchers and developers to talk to the Twitter app in order to gather data on user behavior or to build services. This is particularly sad because, before Musk, Twitter was the closest thing the tech world had to a paragon of transparency. It put Meta’s takedown reports on foreign interference to shame, releasing databases of tweets actioned for links to foreign interference activities for anyone to dig around in. This did mean that a lot of scholarship and reporting relied far too heavily on Twitter data, given how comparatively few people used the platform; now, though, we’re all attempting to cobble together a picture of what’s going on from admittedly incomplete or stale data. I believe this choice was intentional. If Twitter really wanted to make the case that it has improved the hate speech situation on the platform, it would give non-profits free access to the Twitter API.
2. Impressions are absolutely the wrong metric by which to measure the effect of hate speech. In social-media-analytics speak, an ‘impression’ is the number of times a specific piece of content has been seen. Yaccarino argues that because fewer people have seen hate speech on Twitter, the health of the conversation on the platform is improving. She also seems to assert that because hate speech impressions are allegedly falling, the quantity of hate speech on the platform cannot also be rising. Unfortunately for her, the two are not mutually exclusive. Twitter’s “freedom of speech, not reach” policy may be reducing the amplification of hate speech (and to be clear, this is a good thing), but more of it may still be getting sent.
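The arithmetic behind that point is worth spelling out. A toy calculation shows how total impressions of hateful content can fall even as the number of hateful posts rises, once each post’s reach is throttled. All numbers here are hypothetical, purely for illustration; `total_impressions` is not a real Twitter metric, just a sketch of the two quantities being conflated:

```python
# Toy illustration (hypothetical numbers): falling impressions and rising
# volume of hate speech are not mutually exclusive.

def total_impressions(num_posts: int, views_per_post: int) -> int:
    """Total times hateful content is seen, across all such posts."""
    return num_posts * views_per_post

# Before reach-limiting: fewer hateful posts, each widely amplified.
before = total_impressions(num_posts=1_000, views_per_post=500)  # 500,000

# After "freedom of speech, not reach": twice as many hateful posts,
# but each is shown to far fewer people.
after = total_impressions(num_posts=2_000, views_per_post=50)    # 100,000

assert after < before  # impressions fell, even though volume doubled
```

In other words, a platform can truthfully report declining impressions while the number of people sitting down to write abuse goes up.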
More critically, Yaccarino’s “speech, not reach” policy misunderstands the goal of hate speech: to make a person or a group of people feel hated. If I am sent a rape threat, or a picture of myself as a decomposing body with a nail through my head, or am told I’ve committed treason and deserve to pay the ultimate price (and yes, these are all things people have sent me!), it doesn’t matter much to me if four or four million people have seen those tweets. Maybe this is counterintuitive, but what matters is that someone took the time to send it, to craft a sentence or an image they thought would be maximally hurtful or frightening. When you receive tens of thousands of harassing, hateful, or violent messages, even if yours are the only pair of eyes that ever reads them, they can still achieve their purpose: to silence you. It’s death by a thousand cuts. Yaccarino and her team would do well to consider whose freedom of speech they are defending with this policy. They certainly aren’t defending victims of abuse, but they are gaslighting us.
3. If you want to attract advertisers to your platform, consider policies that don’t drive users away. What unsettled me most about Twitter’s and Yaccarino’s doubling down on the Bloomberg piece was that for them, this isn’t about people—it’s about dollars and cents. That’s why they’re making this nonsensical, impressions-based argument: impressions are also the way the success of ads is measured, and Twitter’s ad revenue is in the toilet.
Half of Twitter’s blog-length tweets (which are, by the way, littered with embarrassing formatting errors and missing links) are geared toward advertisers worried about “brand safety.” They feverishly underline to potential partners: “Don’t worry! Your content won’t be seen next to hate or abuse or child sexual abuse material!” What they forget is that advertisers aren’t only thinking about where their content might show up; they’re wondering whether anyone will look at it. Anecdotally, users seem to like Threads and BlueSky because the people there are nicer, and, at least on the former, content moderation seems to be fairly active. Why should users log back onto a platform where a Bitcoin bro with an $8/month blue check can freely harass them, so long as only a few people see it?
—
Yaccarino and Musk aren’t the only people employing harmful, flawed arguments like these. Early in my time in the counter-online-harms space, I was at a conference where an older, white, male academic earnestly made an argument that since hate speech only made up a small percentage of content online, it couldn’t really be that bad. I asked him how many violent or sexual threats he’d received in his life: none. It only takes one, though, to change how someone expresses themselves online or how they move about in the world.