Substack made me—and many others who host their newsletters here—mad again this week. Those of you who have been here for a while know this newsletter started on Tinyletter (RIP) in 2015, moved to Substack in 2019, left for Twitter-owned Revue in 2021 after concerns about Substack’s (lack of) content moderation, and came back to Substack with its tail between its legs when Elon Musk shut down Revue late last year. My frustrations with Substack are well documented.
So let me start with something different: let me sing its praises. Substack is doing many important things right. It has become a source of income and exposure for so many writers—particularly a few of my friends who have been struggling amid the journalism industry’s tailspin. In that way, Substack is attempting to atone for the sins of its brethren, injecting life into a field that has been on a ventilator since the advent of social media and digital advertising. Substack is creating space for the creatives, allowing us to connect with one another, and, perhaps most importantly, engendering a different type of internet: it encourages everyone who uses it to engage with a slower side of the web we haven’t seen since we all had dial-up modems.
That promise is precisely why I and many other writers are incensed that Substack is once again doubling down on the monetization of hateful content—in particular fairly overt white supremacism—on this platform. This place has the infrastructure to be something beautiful, and instead it is actively deciding to be ugly, and to profit off of ugliness.
For those who don’t know what Substack does besides ensuring that this column lands in your inbox whenever I decide to write it: it does a lot more. It allows writers to recommend each other’s work to their subscribers. It provides a place—through “Notes,” Substack’s answer to Twitter’s demise, and “Chats,” which is essentially a public group conversation—for writers to engage more directly with their audiences. It recommends new pieces to readers in a “Weekly Stack,” based on articles they’ve engaged with in the past. It allows writers to monetize their newsletters, collecting a monthly subscription fee for their thoughts—and takes a 10% cut. And therein lies the problem.
A group of Substackers—247, at last count, including some with bestselling publications—got together last week to ask Substack’s leadership “Why are you platforming and monetizing Nazis?” (I recommend you read the whole letter, which lays out Substack’s Nazi problem very succinctly. Also take a look at the piece by Jonathan Katz in The Atlantic that inspired the letter and describes some pretty scary evidence of the neo-Nazi content the platform hosts.)
Substack’s leadership responded to the letter today (in a Note, rather than on the official Substack newsletter, which felt deliberate: it read as both a dig at the letter writers, who had asked the leaders to respond on the official newsletter, and a way to ensure fewer people saw the response).
I had a very visceral reaction to this response. It’s poorly written, poorly argued, and ideologically inconsistent. It is tempting to go line by line and take it down, but instead let’s zero in on a few key assertions:
Hamish writes:
I just want to make it clear that we don’t like Nazis either—we wish no-one held those views. But some people do hold those and other extreme views. Given that, we don't think that censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.
I. The “Censorship” Ruse
Let’s start with Hamish’s choice of the word “censorship.” There’s a difference between enforcing your platform’s terms of service (which, as they stand, clearly prohibit “initiatives that incite violence based on protected classes”)—something everybody else in the social media business calls “trust and safety,” which is achieved through “content moderation”—and “censorship.” Notably, prohibiting individuals from profiting off of hateful content, which is what the writers of the letter to Hamish and Co. have asked for, is not “censorship.” The content is still there, searchable and accessible to anyone who goes looking for it. But it’s fashionable (and profitable)—particularly on Substack!—to claim that content moderation is censorship, so that’s what Hamish did.
II. Does “Censorship” Make the Nazi Problem Worse?
Does content moderation, demonetization, or deplatforming—which are all different things, none of them censorship, and all things you, as a user, expressly consent to when you sign up to use private social media platforms—“make [the problem] worse,” as Hamish asserts? The science, in fact, is not super clear on this. A 2020 paper examined what happened when the notorious subreddits r/The_Donald and r/Incels were banned from Reddit for repeatedly violating the platform’s rules against harassment, hate speech, and content manipulation. The authors of the study find that “moderation measures”—interesting how the authors don’t call it censorship, right?—“significantly decreased posting activity on the new platform” to which users migrated, “reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community.” In short, the community shrank, but some of the guys in it seemed to get nastier. So deplatforming may not expunge hateful communities altogether (and, to be clear, that’s not what the letter authors are asking of Substack anyway), but...
III. Deplatforming DOES make the user experience better.
...it may have a positive effect on user experience where the rest of us are, which is presumably important on a platform with leadership that allegedly cares so much about “individual rights and civil liberties,” right? Hamish asserts that “history shows that censorship is most potently used by the powerful to silence the powerless.” Journalist Brandy Zadrozny wrote on Threads this week about Tucker Carlson’s deplatforming from Fox News and the effect it has had on her personally:
In 2020, Tucker Carlson did the first of many segments on me. I was doxxed and got so many credible threats I and my kids needed around the clock security. Today Tucker highlighted my past reporting on Twitter and real world violence against trans people. He made fun of my glasses and my voice. And I only knew about it because a friend called me to laugh about it. Deplatforming works!
By Hamish’s scale, who is powerful and who is powerless in Brandy’s case?
I’ve had my own experiences with deplatforming, when people who have sent me violent threats, incited harassment against me, or otherwise violated terms of service have been kicked off platforms or had their accounts locked. I’ve also had really frustrating experiences when platforms don’t take those threats seriously, usually because they don’t contain a “direct” incitement to violence. Instinctively, I feel less safe on those platforms. I find myself posting less there. Substack leadership ignores the fact that a post need not contain a direct, credible threat to incite a follower to offline violence, or to trigger networked harassment against a target. This can be particularly scary when you have no idea where the hate is coming from: from behind a paywall on Substack, for instance. (Yes, this has happened to me multiple times, [edited Dec 26 to add:] including in the days after publishing this post!)
So the question becomes, Hamish and friends: whose speech are you defending? The folks with the swastikas? Or the marginalized communities they attack?
IV. Money, money, money
The reality, of course, is that Substack is not defending speech. It is defending its bottom line. This is clear when comparing Hamish’s post about Nazis with Substack’s long-held stance on pornographic content:
We don’t allow porn or sexually exploitative content on Substack, including any visual depictions of sexual acts for the sole purpose of sexual gratification. We do allow depictions of nudity for artistic, journalistic, or related purposes, as well as erotic literature, however, we have a strict no nudity policy for profile images. We may hide or remove explicit content from Substack’s discovery features, including search and on Substack.com.
Free speech for me, but not for thee, if thou art a sex worker.
Meg laid out this comparison back in April, after Substack CEO Chris Best waffled when asked if he would “censor” the statement “all brown people are animals and they shouldn’t be allowed in America.” Meg wrote: “Allowing this antisemitic meme on here is not a free speech decision. It’s a business decision.” She goes on:

Legally made porn is protected free speech. There is plenty of ethically made, non-violent pornography. AND porn is the only truly proven online subscription business. But Substack has decided it does not work for their business model.

Overt racism, overt misogyny, overt transphobia, overt antisemitism is also protected free speech. There are no ethically made, non-violent antisemitic conspiracy theories or calls to kick all brown people out of the country. But Substack has decided that *does* work for their business model.
And they’ve just doubled down on that decision, even if they’re pretending they’re not. As another writer notes:

Substack is engaging in transparent puffery when it brands itself as permitting offensive speech because the best way to handle offensive speech is to put it all out there to discuss. It’s simply not true. Substack has made a series of value judgments about which speech to permit and which speech not to permit. Substack would like you to believe that making judgments about content “for the sole purpose of sexual gratification,” or content promoting anorexia, is different than making judgments about Nazi content. In fact, that’s not a neutral, value-free choice.
V. *You’ll* never see Nazi content, so just hold your nose and pretend this platform doesn’t stink!
Hamish links to a letter by a bunch of people who fancy themselves free speech defenders to provide evidence for the ludicrous statement: “while not everyone agrees with [Substack’s] approach, many people do.” (What an argument...“While not everyone agreed with Charles Manson’s approach, members of the Manson family did!”)
The letter’s most plausible argument is that “Substack has come up with the best solution [for hateful content] yet: Giving writers and readers the freedom of speech without surfacing that speech to the masses.” The letter writers assert that you will never be exposed to Nazi content on Substack if you don’t go looking for it, or you aren’t already interested in it. The authors of the letter don’t have to see Nazi content every day, so it can’t be that bad, right?
First of all, the basic claim that Substack is not surfacing this content to the masses is simply not true. As I mentioned earlier, users get a “Weekly Stack” email that serves them content generated, presumably, from a graph of their current subscriptions and engagement. (Yes, you can opt out of this if you want, but it does appear to be turned on by default.) There is an “explore” tab on Substack’s app that surfaces similar Substack Notes content to users. (Perhaps the letter writers know more than I do about the app’s usage and it’s minuscule, but Substack is investing in it, so it appears to think it’s the future.) If you’re already subscribed to one newsletter with Nazi leanings, the likelihood you’ll be suggested another through these means is high. (Personally, I have a problem with allowing Nazis to easily build networks and lists that they can then use to organize, but it seems neither the letter writers nor Substack leadership shares that concern.) Further, Substack incentivizes subscriptions to and engagement with bestselling publications. They get badges. They’re on leaderboards. And they’re getting more subscriptions driven to them—and generating more revenue for Substack—that way.
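Substack hasn’t published how these recommendation features actually work, so here is a minimal, purely illustrative sketch of the generic “readers of X also read Y” co-subscription heuristic that systems like this commonly use. Every name in it is invented; this is my assumption about the mechanism, not Substack’s actual code:

```python
from collections import Counter

# Hypothetical subscription data; all names invented for illustration.
subscriptions = {
    "reader_1": {"newsletter_a", "newsletter_b"},
    "reader_2": {"newsletter_a", "newsletter_b", "newsletter_c"},
    "reader_3": {"newsletter_b", "newsletter_c"},
}

def recommend(target: str) -> list[tuple[str, int]]:
    """Rank newsletters by how often they co-occur with `target`
    in readers' subscription sets ("readers of X also read Y")."""
    co_counts: Counter[str] = Counter()
    for subs in subscriptions.values():
        if target in subs:
            co_counts.update(subs - {target})
    return co_counts.most_common()

# Readers of newsletter_a mostly also read newsletter_b, so newsletter_b
# tops the suggestions: like begets like, whatever the content.
print(recommend("newsletter_a"))  # [('newsletter_b', 2), ('newsletter_c', 1)]
```

The point is not that Substack does exactly this; it is that any engagement-driven recommender, however it is implemented, will happily route readers of one Nazi newsletter toward the next.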
All roads lead to money. The authors of the letter in defense of Substack’s inconsistent, flimsy policy on the monetization of hateful content are trying to make an argument rooted in free speech principles, as is Substack leadership itself. But this is an economic argument. Nazis make money for Substack, like they did for nearly every platform before it.
VI. What next?
I’m not sure where this leaves the community of writers who don’t want to post their work on a platform that actively profits by hosting Nazis. I have seen some writers note that they are weighing their options for moving their newsletters elsewhere.
My newsletter has been free for over a year, in part because I did not want to buoy Substack leadership’s poor decision-making by allowing them to siphon away 10% of my already quite small revenue. (The other part is that I write extremely sporadically these days. Thanks, motherhood!) Substack is a fun side hustle for me, not my job. But it has become others’ main source of income. I pay for subscriptions to several publications from writers I love dearly, and I would prefer not to cancel them, just as I know they would prefer not to leave a place where they have felt so many good vibes in an internet filled with bad ones.
It seems one option is for writers and readers who are fed up to band together in an alternative form of boycott: don’t leave the platform entirely, because we all value this community, but run your payments through another platform instead. It will not be as seamless and integrated as the service Substack provides, but on the bright side, writers would no longer be sharing their revenue with a company that has cotton candy wrapped in a pixelated printout of the First Amendment as a stand-in for values.
For now, I’m staying put, and I will continue to make noise about this issue in hopes that a change is made. Substack doesn’t care what I think (in fact, one of their higher-level employees quite publicly bought into the lies about me and my former government job, so I can only imagine what leadership believes about me!), but they might care about what we all do.
Read also:
This very good essay, which compares Nazis on Substack to mouse poop in cereal.
Comments:

I’m so glad you wrote this. I’ve been talking about this with other trust and safety folks, as I’m struggling with it as well. It came up, too, with Meta accepting 2020 election denialism ads. I wish leaders of these companies would better outline their thinking process and what tradeoffs they were weighing when making these decisions.
For instance, I think they both can’t afford the operational costs of more nuanced content moderation and want to stick to their strict free speech principles for as long as possible, so as not to open themselves up to more pressure for more nuanced policies. I do think they’ll eventually be forced into it, as every platform has been.
There’s an interesting question, too, of whether it would be better for them to adopt more nuanced policies now, even if they can only enforce them reactively, or to try to hold out as long as possible. They are still very green in working on many of these things, so I don’t know how nuanced their thinking even is. 100 percent agree the post could use some comms polish, too.
I’m staying on Substack too, for now, as I agree about continuing to make noise and talk about these issues.
Thank you. Well said. Now, is there a way to pinpoint, or at least estimate, how MUCH money Substack makes from (and for!) Nazis? Maybe we can shame them with a dollar figure...