Substack co-founder denounces bigotry but has no plan

Shortly after Substack announced their Twitter competitor, Substack Notes, Nilay Patel interviewed Substack CEO Chris Best about it on Decoder.

In the interview, Patel asked Best whether the statement “all brown people are animals and they shouldn’t be allowed in America” would be censored on Substack Notes. Best refused to say that it would, and when Patel pressed further, the CEO replied, “I’m not going to get into gotcha content moderation,” because he didn’t think it was “a useful way to talk about this stuff.”

On April 21st, a week after the Decoder interview went live, Substack co-founder Hamish McKenzie posted a company statement on Substack Notes:

Last week, we caught some heat after Chris didn’t accept the terms of a question from a podcast interviewer about how Substack will handle bigoted speech on Notes. It came across poorly and some people sternly criticized us for our naivety while others wondered how we’d discourage bad behaviors and content on Notes. We wish that interview had gone better and that Chris had more clearly represented our position in that moment, and we regret causing any alarm for people who care about Substack and how the platform is evolving. We messed that up. And just in case anyone is ever in any doubt: we don’t like or condone bigotry in any form.

Spoiler alert: McKenzie lays out no actions or policies explaining how Substack will combat bigotry. “Caught some heat” is about as bad as it gets in a company statement. It might as well have said “got caught being shitty.”

The “heat” in question came from the Decoder episode in which Best refused to say that “all brown people are animals and they shouldn’t be allowed in America” would violate Substack’s content guidelines.

It gets worse. In classic whataboutism, McKenzie argues that the other social media companies aren’t doing much to fight bigotry despite their huge content moderation teams.

Facebook, Instagram, Twitter and others have tens of thousands of engineers, lawyers, and trust & safety employees working on content moderation, and they have poured hundreds of millions of dollars into their efforts. The content moderation policies at some of those companies run to something like 150 pages. But how is it all working out? Is there less concern about misinformation? Has polarization decreased? Has fake news gone away? Is there less bigotry? It doesn’t seem so to us, despite the best efforts and good intentions of the most powerful media technology companies the world has ever known.

Now, this doesn’t mean there should be no moderation at all, and we do of course have content guidelines with narrowly defined restrictions that we will continue to enforce. But, generally speaking, we suspect that the issue is that you can’t solve a problem (social media business models) with a problem (a content moderation apparatus that doesn’t work and burns trust). So we are taking a different path.

That “different path,” McKenzie explains, is “changing the business model.” How will they change the business? Essentially by making creators do their own content moderation. Substack decided to look at their writers, the whole reason Substack makes money, and tell them to figure it out themselves.

Truthfully, this is a bad company statement trying to walk back Chris Best’s blunder on Decoder. In fact, it made things even rockier for Substack.

Substack is a place where writers can write what they want to write, readers can read what they want to read, and everyone can choose who they hang out with. It’s a different game from social media as we know it.

No, it isn’t. This “game” is the same one played on Facebook, Twitter, and elsewhere. A platform can have simple, no-nonsense content moderation policies in place and still host people who disagree with one another.

Removing and disallowing a statement like “all brown people are animals and they shouldn’t be allowed in America” doesn’t mean that everyone will suddenly be ideologically identical. You can have rules in place to prevent violence while still allowing healthy discourse.

I wrote before about Substack’s poorly written content guidelines, saying, “this isn’t an endorsement to spread hate but it certainly doesn’t thwart any of that kind of behavior either.” While I still believe that, the deeper Substack digs in on their approach to content moderation, the more pause it gives me. It makes me believe that Substack is less making a critical error and more deliberately dog-whistling.

Some people feel similarly, among them Mike Masnick at Techdirt. He retold the Nazi bar story from Reddit and explained how, with comments like Best’s on Decoder, Substack is letting more Nazis into its metaphorical bar.

But Substack is a centralized system. And a centralized system that doesn’t do trust & safety… is the Nazi bar. And if you have some other system that you think allows for “moderation but not censorship” then be fucking explicit about what it is. There are all sorts of interventions short of removing content that have been shown to work well (though, with other social media, they still get accused of “censorship” for literally expressing more speech). But the details matter. A lot.

If Substack truly wants to be a place where everyone can come and discuss the things that matter to them, they cannot continue with this hands-off approach. Content moderation is messy, and it isn’t easy to get right. That being said, Substack needs to roll up their sleeves and embrace the mess. Otherwise they will drive more and more people off their platform.