204 Comments
Comment deleted

It is somewhat effective. Sometimes quite effective, in fact. But if you're going to censor, you still have to do it in a way that doesn't build a growing, increasingly savvy opposition, and doesn't increase social unrest. At least if the goal is a healthy society, anyway.

Because the truth also is, the far right is far more visible to me now than it ever was during the "Wild West" era of the Internet. The tensions and conflicts in politics are more unstable, and the leaders and media less trusted as they continue to grip tighter and tighter to control the message. And those consequences have leaked out far beyond any narrow effort to control violent extremism. So that "effectiveness" has a cost, one that I don't think is worth paying. Especially since those tools are never confined to the people who "deserve it".

Comment deleted

Trying to assess people's feelings online and why they believe what they believe is a tricky thing. I will say, keeping to Kiwifarms specifically though, a lot of their move toward far right/unorthodox/extreme/outside viewpoints was - from what I saw - to a large extent due to refugees joining from various other sites, people who were either alienated from the changing culture online or outright banned from other sites for whatever reason. And, despite the site being relatively open in terms of viewpoints, it still created a very visible echo chamber which inevitably radicalized people further, as well as a besieged mentality, where these people's increasingly radical views in an increasingly censorious landscape were finding little else to call home. And Kiwifarms, through their archiving and doxxing, provided a weapon to fight the tide. Or, more often, to laugh at the idiots as they watched the world burn.

So yes, in this case, I would say censorship was a contributing factor. Not the only reason of course. Kiwifarms's culture seemed to be in a unique place to take in these new members. And, it is worth pointing out, not every member had extreme, let alone violent views, nor was it a reflection of the whole culture. Plus, as many have pointed out, there was a lot of good they did by exposing a lot of heinous, yet protected behavior. But censorship - or perhaps more specifically control over what views are right and wrong - I do think was a contributing factor that led to the site becoming more extreme. And I suspect it's a contributing factor in the continued radicalization of people in all political directions as well.

Comment deleted

As far as I know there is no law forbidding you from looking up how to build a bomb. Actually building one and deploying it? That's another issue.

Comment deleted

Trivial.

Comment deleted

Kiwi Farms is a museum of our internet culture.

Sorry to see it go.

Sep 6, 2022·edited Sep 6, 2022

I think the problem with those types of sites is the subset of users who read them un-ironically.

Take a big helping of safetyism and a Do Something attitude, and you’ll be playing the internet extremism whack-a-mole banning game forever.

I mostly agree with this, and yet...I am part of another Substack (which shall remain nameless) in which there is zero moderation in the discussion threads. That sounds good and free and all that, but then a handful of absolutely insufferable people saying bigoted things makes any sort of real discussion nearly impossible.

I know there is a slippery slope with this kind of thing, but I sometimes think that's better than the precipitous drop of simply allowing neo-Nazis to turn a discussion board into a rhetorical hellscape. Maybe tools like the good-old "ignore" button would help, but I can't help but think that a little moderation is a good thing.

Sure. But this article is talking about a situation more like removing the entire Substack website from the internet because Gray Mirror is on here, with some silly assumption that this would defeat NRx once and for all.

Moderation can be defined in rather clear terms and will almost always improve a community compared to nothing. I mean, forget about even Nazis, the spambots alone would kill any discussion.

It's a slope you don't have to slide down. Larger platforms like Twitter choose (or are pressured) to do so. And I single out Twitter because it clearly has some of the lowest quality conversation, and high rates of harassment for that matter, of the major sites despite their rules and enforcement.

The rules at twitter are very selectively enforced. The most heinous verbal abuse, along with literal death threats, are permitted from the woke, whereas the slightest pushback to the woke agenda often results in a ban for promoting "hate".

Has someone done a large scale analysis of this as a trend rather than presenting a compilation of examples?

I have no doubt that examples exist that match the pattern you describe, but I also have no doubt that there are counterexamples in the opposite direction (excessive removal of content from and/or bans of the woke-coded plus toxic/threatening content from the antiwoke staying up).

It’s an imperfect system processing huge numbers of cases generated by humans in all our weirdness, so it will fail sometimes. What data are there suggesting an ideological lean one way or the other?

(This is coming from a place of genuine curiosity from someone who doesn’t get too involved in those particular corners of the internet.)

"absolutely insufferable people saying bigoted things"

But herein lies the problem. Wokesters declare anything that doesn't fit and promote their narratives to be "absolutely insufferable people saying bigoted things."

I'm not saying you are a wokester, nor do I have any idea what substack you are referring to, but if you feel there should be censorship of comments on that substack, maybe it's not for you? I got tired of NYT commenters who are "absolutely insufferable people saying bigoted things" - so I solved the problem by not reading the site or the comments.

Comment deleted

No one is obliged to subscribe to, read, or comment on (or read comments on!) any media source, web site, Substack, Twitter howler monkey, or whatever. Only so many hours in a day. And, I actually go outside for hours at a time. If someone wants to yell at me in a heterodox space for not being there? Well, knock themselves out.

Comment deleted

I thought it was about Freddie's post, which is about the futility of censoring "hate sites" (whatever "hate sites" means). Yes, Twitter is not obliged to let anyone tweet, and I'm not obliged to read any tweets or join Twitter. Works for me.

Heh...I am most definitely NOT woke, but even I draw the line at comments like, "All gay people are pedophiles." I'd feel the same if someone said "All black people are [insert name of undesirable trait.]"

But in terms of a solution isn't the best choice to leave and find another forum?

I don't think so, no; otherwise, we're yielding every discussion space to the most deplorable participant.

Again, I don't know a hard-and-fast principle to apply to comment moderation, but I have seen what happens where there is none, and that's not very good, IMO.

No, you can always decamp to a forum that does feature moderation. Like this one, for example.

Plus, if you don't like the Nazis, what do you do about the message boards that are specifically devoted to white supremacy? You can't go to their forum and complain that they're on topic by posting their theories of racial supremacy.

I'm gonna agree with Slaw on this one.

You know this censorship has nothing to do with getting rid of mean sounding comments right?

It's about the power and the ability to control what is even allowed to be spoken publicly.

Do you want to live in a world where corporate oligarchies can silence you the moment you step out of line or say something they deem "dangerous"?

Don't think that you're so good and harmless that the woke mob and the technocrat censors won't come after you next.

This has nothing to do with enforcing "decency"; this is about maintaining their control and destroying anything that threatens their power.

Sorry this all sounds paranoid and like something from Qanon, but you'll understand soon enough.

The difference is that Substack is not a de facto monopoly; Substack does not enjoy network effects.

You can’t take away the internet from the users of sites like this, but there is value in making it harder for them to be found by new members and in legally punishing the worst actors.

The point is network effects. The reason people use Twitter is not because its functionality is better than any other site, it's because there is an audience. Censoring people off Twitter means that they are basically talking to themselves, which is why goodthink types are so eager to do so.

Censors also point out that Twitter is a private company (although Twitter and other tech giants censor at the behest of and in cooperation with government organs) and doesn't owe you a platform. That said, natural monopolies such as the electric company cannot cut off your service, even if you say things that they don't like, such as opposing the latest rate hike.

Finally, censors claim that denial of Twitter access isn't really "censorship" since you can take your message elsewhere. The whole point of speech is to be heard. By that logic, you also had freedom of speech in the Soviet Union, as long as you were entirely alone.

Comment deleted

Hm....

Comment deleted

I see that proposed a lot and I'm generally in favor of it. But I don't think it'll work practically, because centralized platforms reduce user friction. We already have decentralized publishing; it's called RSS. But here we are on Substack.

And even if the protocol is decentralized, you could still end up with a de facto platform wrapper around the protocol. See: Gmail, OpenSea, etc.

“ Finally, censors claim that denial of Twitter access isn't really "censorship" since you can take your message elsewhere. The whole point of speech is to be heard. By that logic, you also had freedom of speech in the Soviet Union, as long as you were entirely alone.”

But you never had the right to be heard before. If the local newspaper, billboard company, radio station, TV station didn’t want to share your views then you were left with your family and friends.

You just don't have the same rights that everyone else has.

Sep 7, 2022·edited Sep 7, 2022
Comment deleted

That's not what I said, although it's not as if you have to meet certain standards to get telephone hookup, for instance.

Or the soapbox down at the city square. Or by pestering people at the airport (I remember those days) or pretty much any other public place.

> although Twitter and other tech giants censor at the behest of and in cooperation with government organs

In the US that would generally be a First Amendment violation, as I understand it.

As in, it's not illegal for Twitter to censor you, but it's illegal for the government to ask them to censor you, and it would be illegal for the government to do that censoring itself.

Comment deleted

Basically: Will no one rid me of this troublesome priest?

https://mobile.twitter.com/vincecoglianese/status/1565397626018750464?s=12&t=7BNCXCnmBRXUITjrtUmkhQ

The Constitution has been a dead letter for a long time now.

The Constitution doesn't keep the government that's set on doing so from doing things it forbids; it just lets you win a court case in the aftermath, assuming the courts are not complicit....

But yes, I am very interested in how these document releases will shake out in the courts.

I doubt any of this ever will reach the courts.

Note how the censors act as government agents when they want to use the authority of the state, yet claim to be acting as purely private entities when they want to dismiss pesky civil rights suits or ignore unwanted election results.

That, BTW, is I suspect the real reason that Biden's "disinformation board" was shelved. Much easier to outsource the job to the tech companies, and you don't have to worry that a new president will appoint board members who won't play ball.

The suit I would imagine would happen here would be against the federal government, not against Twitter.

But yes, whether that happens, we will see. I am fairly hopeful.

Your first problem will be to find a plaintiff with standing. Then a motion to dismiss, on the grounds that Twitter and not the government engaged in the activity in question.

It would be a long and expensive fight, in front of a hostile court. The suit against the DNC for rigging the 2016 primaries (promptly dismissed, with the dismissal upheld on appeal) is most instructive here; it shows how far courts will go to avoid rocking the establishment boat.

Alex Berenson has shown internal documentation from Twitter showing that they were under pressure from the White House to censor him before they finally did so. There is no reason to believe this kind of thing doesn't happen all the time. It is unquestionably illegal - for now.

I really believe in freedom of speech, even for detestable speech, as long as it doesn't seem to be causing an immediate physical threat to others. Even for really rotten people. I would *rather* they be talking and communicating, rather than actually *doing* rotten things. And the more these sites get closed down the more they get built back up again in secret/encrypted ways, where we can't even monitor what the rotten people are up to.

Comment deleted

That standard makes revenge porn acceptable. The issue is that somebody is in that video and continued distribution directly impacts their life.

Comment deleted

Interesting. I certainly don't know the details, it sounds like you're much more familiar with the situation.

Comment deleted

That "transperson" was also apparently involved in a lot of dangerous if not outright illegal activity, which is why Kiwi Farms' attention was focused on that person. Of course, you won't hear about that from the media, who absorbed that person's narrative wholesale with zero skepticism, despite the fact that their narrative was contradicted by the police and despite little to no evidence that this wasn't all just a false flag attack to get KF shut down.

As you say, it's hard to talk about this given Freddie's rule. But this is NOT just a random Twitch streamer they decided to be mean to. This is a person who is alleged to be doing some very bad things.

And the flip side of your "it's harder for them to do it" is that it's also now harder for people to do citizen-journalist style investigations of people who, as X. points out above, are often terrible people themselves. KF is not tracking Wendy Carlos and Caitlyn Jenner, FFS.

Sep 6, 2022·edited Sep 6, 2022
Comment deleted

Pretty sure they did target Jenner (especially the timing of their announcement relative to the fatal car accident), but Jenner doesn’t give a shit, because they’re rich and already under paparazzi scrutiny.

FWIW - I never had an account or saw what might have been discussed in private - but while doxing obviously runs the risk of assisting in-person harassment, I never saw anything remotely suggesting it or anything like it. Even direct contact was generally frowned upon; the whole “don’t touch poo” rule. That doesn’t mean it didn’t happen, but it definitely was not even the implicit purpose. 99% of it was reposting things that the person actually said, with mockery, and then preserving them even when they became inconvenient. Which I would not enjoy either, but the option of just logging off is always there.

From what I can tell, there were two posts with actual (but absurd) threats, at least one of them from an account created years ago with no other activity. In the context of other stuff on the site, maybe they rise to a marginally higher level of concern than a teenager trolling on any other anonymous message board, but there was certainly no reason to think that there was any true threat to life. And the quite reasonable possibility that they weren’t posted by the antagonist or an ally was nothing like ruled out. For all that people on KF are assholes, their most notorious targets are liars, opportunists, genuine grifters, harassers, and extremely ethically challenged if not outright criminal. They all wallow in the same terminally-online space of complete bullshit.

If you look closely at one of the suicides attributed to KF, this guy “Near”, you’ll realize it’s a comically-inept fake larded with melodrama. There’s no evidence that the guy, whoever he is, did any more than drop a random internet persona. You might as well mourn the death of a burner Twitter handle.

Comment deleted

I don’t know much about it, but what I’ve read seems to indicate that her problems at the time were not closely related to her KF thread, but rather with Brianna Wu.

Can’t vouch for the authenticity, blah blah, but: https://imgur.com/XnMDApB

This is Sagal on Wu:

“You are a fucking awful evil person, and you deserve to get murdered in the worst fucking way possible. And it's your own fucking fault. Murderers deserve to get murdered.”

I’m not sure how much causality you can draw about someone in that state.

Hear hear Freddie. Spot on, sorry to say. Hate lives in the heart. Can’t police that. Yes, as you say, offer better alternatives. But sometimes, haters gonna hate. Alas.

Wasn't the whole thing with Kiwi Farms that they weren't just talking about stuff, they were coordinating doxxing and swatting campaigns against specific individuals, leading to multiple suicides over the years?

I think you're right that it is in practice impossible to ban hateful communities from forming in the dusty corners of the internet. But it seems to me like it ought to be totally possible to annihilate hate communities foolish enough to expose themselves through high profile real-world action.

author

But I could swat someone right now, if I really wanted to. What has been prevented?

Sep 6, 2022·edited Sep 6, 2022
Comment deleted
author

Nothing technologically about the demise of Kiwi Farms makes it materially harder for me to do so. Does it?

Comment deleted
author

Great argument, there.

Sep 6, 2022·edited Sep 6, 2022
Comment deleted

Not technologically. But it does degrade a source of practical, communal knowledge about how swatting can be done. It also makes it more difficult to find the information necessary to execute a successful swatting attack against particular individuals (because as I understand it, kiwi farms was maintaining dox files on various internet people). Even if those files survive, I expect they have become more difficult to find, at least for the moment.

Also, the community that cheers this sort of thing on and rewards you with status for participating is being scattered such that the rewards of participation probably feel less meaningful than they did before, changing the overall risk/reward calculation that pushes people over the edge into actually attempting a swatting attack.

Comment deleted

Most people are lazy, stupid, and unmotivated. The fact that information is theoretically accessible does not mean that it is practically accessible. Kiwifarms was unusually prominent and open. That makes it massively more accessible for a typical person. When information is pushed into the "dark web," hidden behind evasive language, and generally made less user-friendly, access to it is degraded in a very practical sense.

You are making a common mistake that masquerades as a sort of earthy wisdom - you don't *want* something to be banned, so you construct a case for why it actually is impossible to ban. Pro-choice advocates often claim that anti-abortion laws don't reduce abortions. It is commonly claimed that prohibition didn't reduce alcohol consumption. Gun rights supporters routinely claim gun control can never work. All these claims are demonstrably false. There are more limited claims you can make that are true - regulation does not reduce the undesirable behavior to zero, and regulation also makes the behavior happen in riskier environments than it would otherwise. But on the basic level, regulation is generally effective at reducing the behavior it is aimed at.

Let's lay it out clearly. I would predict that aggressively investigating swatting and doxxing incidents, punishing those involved, and shutting down any websites that seem involved, would lead to fewer swatting incidents, fewer doxxing incidents, and would save something like 1-2 lives per year (possibly a low rate of lives saved for the resources involved, but probably better than many current uses of law enforcement resources).

I would also suggest that aggressively shutting down sites that seem involved in such activities would deny them publicity and force them to operate more "under the radar," reducing their overall reach and influence. They would not disappear entirely, but their ability to project power into the real world would wane significantly.

It is totally possible these predictions are wrong, but they seem pretty reasonable to me. Do you disagree with these predictions?

author

I think you just completely changed the topic in your first paragraph there. We're talking about the viability of shutting down Internet extremism, and you're engaging in a non sequitur about legal action. If anything, decentralizing the problem by getting rid of specific sites would make such action harder!

My understanding is we were talking about whether or not we ought to shut down websites like kiwi farms. That seems like the main object-level question here.

I think doing so is good, for the reasons I laid out above. If you agree with that, then I'm not sure why you wrote an article saying that shutting down kiwi farms was useless. Presumably though you disagree with me, so I'd be interested to hear you engage with the points I made and see where our disagreement lies.

Perhaps part of it is just a disagreement on objectives - I never really saw the point as being to "shut down internet extremism." To me the point is to minimize the reach of online extremism by degrading their recruitment opportunities and forcing them to keep their heads down (metaphorically). I don't really care if some bad folks gather online and talk about stuff, but I do care if they try and stifle the speech of others through techniques like doxxing and swatting. Decentralizing them is good in my book because it makes it harder for them to coordinate action on a large scale.

author

How's that been going for you?

Sep 6, 2022 · Liked by Freddie deBoer

It's a mixed bag? I think that pushing some of the worst parts of the internet into relative obscurity has saved lives relative to the counterfactual, and I think that the current efforts with kiwi farms are likely to save lives as well.

I also think that there is a downside to this, which is the way the tools of censorship can be aimed at legitimate discourse, but to me that is a separate question from whether the approach does the things it sets out to do. I tend to think that on the whole censorship and deplatforming do achieve their aims, though never perfectly. For those of us who favor free speech (I consider myself one of these people), it would be convenient if censorship were toothless, but as a factual matter I do think censorship is often able to achieve at least some of its goals.

"If we save just one life....."

Then 7,999,999,999 lives will have been lost. Was it worth it, I ask you?

There are two issues with trying to "moderate" (read: censor) content:

(1) You pointed out this, but I'll go a step further and suggest that by "banning" these groups, you give them a legitimacy they did not have. People think that "deplatforming" people removes their legitimacy, but it doesn't. Among certain groups, if you're "deplatformed" that simply means that you are exactly what you portray yourself to be: the person that speaks the truth the amorphous they don't want the masses to hear. This is because . . .

(2) Censorship (even if it's given the harmless-sounding title of "content moderation") has, no matter how well-intentioned it started out, always been turned on people who are actually speaking the truth the amorphous they don't want the masses to hear. In the case of the US, we started off by "deplatforming" Alex Jones, and the next thing you know we're kicking people off of Twitter for telling the truth about a laptop, simply because it was a story that would have destroyed the media's favored presidential candidate, and for telling the truth everyone knew about masks, lockdowns, and the vaccines regarding COVID. The "truth" could sometimes be as simple as government-collected data that Twitter claimed someone might "misinterpret," which meant "interpret accurately but not in a way we want." And that is why people will default to the side of the moderated and not the moderators. We've seen this pattern endlessly repeated.

As I said, even when well-intentioned (and really, in the US, the intentions have never been anything but self-serving, so they're not even good), "moderation"/censorship will always and without exception be used as a political weapon or a weapon to control the masses for the interests of the elite. And anyone with a brain and any sense of self-preservation should demand that everyone be allowed to speak, no matter how vile what they say is because honestly the bigger threat is the "power of moderation" ending up in the wrong hands.

Yes, they start by banning "harmful" and "dangerous" users (definitions one could argue Alex Jones fits). But then you see screenshots of the tweets that others were suspended or banned for, and you realise there are a LOT of people getting caught in that net.

Partly it's due to algorithmic fuckups (what the system has been programmed to flag), and partly due to mission creep (from banning those who make death threats to banning those spreading "disinformation", which we can all see is a VERY plastic term).

Do let me know when the NYT and WaPo are going to get banned from Twitter. After all, they aggressively pushed disinformation about the existence of Iraqi WMDs, and that disinformation led to the deaths of millions of innocents, in a war whose effects are still being felt to this day.

And that is not their only, or even their most recent disinformation campaign.

Or is it that, the way they say "one man's terrorist is another man's freedom fighter," disinformation is just a narrative that someone in power wants to suppress?

"You either address the problem at its foundations by presenting more appealing alternatives or you don’t."

Well, they don't have any appealing alternative. Progressives can't actually answer the criticisms of the reactionaries because they don't understand what they are deconstructing and destroying. And if you can't win an argument, silencing is the only alternative.

You think the governments of the world are run by progressives?

Reactionaries don’t understand or Progressives don’t understand? Unclear.

"Reactionaries don’t understand or Progressives don’t understand?"

I'm going with both.

I don't like calling these sites "hate speech". Whatever is on them, they are outside of the official narratives. This is their great sin.

The oligarchy doesn't want its narratives and mind control questioned by anyone. They hate memes, which can spread subversive thoughts more easily than a 10,000-word manifesto.

Just weigh the relative power of the banned and the ban imposers and decide if you want to support the oligarchy or those who oppose it.

author

Well, to begin with, the term "hate speech" does not appear in this post.

It is your title.

author

No, it literally is not. "Hate speech" is something specific, a legal category in some countries. "Hate site" refers to the perception of those sites. Please, read carefully and speak accurately.

Very much on board with the overall sentiment, but it's not quite this simple.

Obviously, the publication of some content has to be banned -- child porn, snuff films, personal bank account information, etc. And surely some examples of doxxing and harassment fall into that category. I’m not sure exactly where the red line is (and I don’t trust either law enforcement or public opinion to draw it), but there is one somewhere.

I do think it is true that left-ish / liberal attempts to Stop Internet Hate via aggressive censorship are both bad on the merits and doomed to fail. I personally think the red line of censorship should be way, way far out there — that we should allow for public discussion of all kinds of wacky, disturbing, offensive, stupid and dangerous ideas. But there’s gotta be regulation and enforcement at some point.

Is it a whack-a-mole game? Sure, but so is combating things like tax fraud and theft and pollution and murder. It’s not an existential fix; those don’t exist. It’s just part of the humdrum process of maintaining a society.

Yeah, I mean, we have laws for a reason. Regardless of whether or not you can "stop murder" (answer: you cannot) murder should still be illegal.

author

can you download a murder

I was specifically responding to specific's comment where they outlined certain categories of "speech" that should be banned, such as child porn and snuff films.

No. But you can facilitate one over the internet. Or over the telephone for that matter.

How are they doomed to fail? They are working beyond anyone's wildest dreams. For the better part of two years you were removed from the de facto public square if you (to take two examples) questioned the wet market hypothesis, or questioned the efficacy of COVID vaccines in any way. What impact do you think this censorship had on the COVID discourse?

You're whistling past the graveyard. The techno-oligarchy is here and it's winning.

The original post is about why such efforts are doomed to fail, so I won't restate the arguments it makes. As for COVID, I think that's a fine example of how attempts to forcefully suppress ideas and speech tend to backfire in this here supposedly "techno-oligarchy." Despite the ham-fisted efforts of Twitter et al, there is a community of millions who are loudly skeptical of COVID vaccines, shutdowns, masking, etc, and they're not being hauled off to the gulag. For better or worse, they're a major force in American politics. In Virginia, the backlash to COVID overreach undid two-plus decades of electoral trends and handed the governorship to a Republican. So, what impact has the censorship of the powers-that-be had on the COVID discourse? I think it's made it shittier all around -- more paranoid, more unhinged, more difficult to parse reality. But if the goal was to suppress counternarratives, it sure hasn't been effective.

Expand full comment

I would say you've failed to prove your premise, but you haven't even tried. In order to demonstrate that the suppression of counternarratives was ineffectual, you'd need counterexamples: what happens when such narratives are *not* suppressed?

To take one example, the wild allegation that Donald Trump was installed in the White House in 2016 due to Russia's hacking of the US election, and that he was a puppet of the Russian regime (Vladimir Putin, specifically), perhaps due to kompromat held over him by that country, and that he had in fact colluded with Russia to achieve high office... these paranoid, unhinged, not-at-all-moored-in-reality narratives were not suppressed. In fact, they were trumpeted as fact by much of this country's media and amplified by its social networks.

In order for your argument to make any sense, you would need to argue with a straight face that the Russia collusion narrative would have been *more* relevant, *more* consuming of the national conversation, *more* prevalent in polite company, *more* dominant in social media, were it censored (even ham-fistedly) by the techno-oligarchy?

I will put my cards on the table and say that that's complete rubbish and that no serious observer could possibly believe it to be true, which is why no serious observer has ever tried to deploy this counter-argument with a straight face.

You also play the age-old game of defending censorship based on its supposed lack of effect. Freddie once wrote a blog on this theme about cancel culture, an excellent one in fact, one which raises this single, towering question: if you don't think it works, why do you want to do it so badly?

Expand full comment

This is not at all true. Yes, plenty of people were "removed from the de facto public square" for questioning the efficacy of COVID vaccines. And yet, pretty much everyone in the (developed, at least) world heard about those questions and are aware of the existence of sizable groups and organisations pushing those questions. This is like the "help help I'm being silenced" of columnists in the largest newspapers or hosts on cable TV - there is opposition, much of it poorly justified and with plenty of negative side effects, but we endlessly hear their cries anyway.

Expand full comment

No, pretty much everyone didn't hear about them, and many of those that did were safely able to dismiss the questions because the questions came from kooks - and we know they were kooks because they were banned. The confines of safe discourse play to our innate desire to be part of the in-group, part of the community, not part of the outcast set. That you're retconning yourself as some kind of vaccine skeptic - you weren't, you believed in your heart of hearts that those who got the vaccine would not get COVID - doesn't change this fact; and that you like censorship but pretend it only affects the empowered doesn't change it either. Plenty of small-time normies were caught in the crossfire. You can just ignore them because they're not in the papers or on TV.

Expand full comment

If Anglin or Jones were guilty of producing child porn they would be under arrest. The issue is that they are producing material that is morally questionable but completely legal.

Expand full comment

For now.

Expand full comment

I think that the Democrats are about to see a serious reversal in November, so the impending crackdown will have to wait at least a little longer.

Expand full comment

Hard disagree on both counts, but time will tell.

Expand full comment

Seriously? The D's may hang on to the Senate but they are almost guaranteed to lose the House. Hence the reports that Pelosi has plans to retire to Italy after November.

Expand full comment

I don't think the 88 million people who voted for Joe Biden have all disappeared.

Expand full comment

On the other hand, Cloudflare does not have to host their website, just as Madison Square Garden can decline to host a Proud Boys convention.

Expand full comment

Cloudflare's response has always been that kicking anybody off their service is meaningless in the long run because they'll just be back online in a few weeks with a competing service. The Daily Stormer is getting 3 million unique visitors a month now so I think that prediction has been completely borne out.

Expand full comment

I mean, yes, but also, they have every right to do it.

Expand full comment

One thing to add: There are countries, such as China, that have more or less effectively controlled the internet; extremism, mainstream-ism, and everything in between. I'm not saying that we should do that in the US (please, let's not) but I am saying that it can be done via combinations of AI, cheap labor, tightly controlled apps, and authoritarian censorship policies. I'd imagine that as the world and the internet march forward (or is it backward?), China's ways of handling expression on the internet will become a kind of template. Here's to hoping that Western democracies can find a better way.

Expand full comment

I'd say give the Chinese people a little more time.

I don't mean that it's inevitable in the overly-optimistic spirit of "information wants to be free", but I do think it's far too early to say that the story is over and Chinese censorship techniques have won forevermore. That's probably a mild-to-moderate strawman of your position too, but I find a lot of censorship triumphalism (I would not characterize your post as such!) tends to focus on very narrow time-frames or overly-specific "wins". For example, I've seen people argue that Milo Yiannopoulos' fate proves that de-platforming works.

Expand full comment

Kiwi Farms seems awful, but I am fundamentally uncomfortable with internet infrastructure companies banning sites hosting legal (or even legal gray area) content.

Once you are in the business of banning sites, you are in the business of banning sites, and that is a sticky business. I don't think that there is a clear line (at least in a lot of people's minds) between KF and Truth Social, the GOP, or even Substack (which has repeatedly come under criticism for hosting topics which shall-not-be-named here). It's one thing for Cloudflare to say we are not in the business of banning sites. But once they do, they are going to be subject to pressure to ban a lot of other things. This includes Ted Cruz hauling them before a congressional committee to complain that they haven't also kicked BLM or whatnot off the internet.

Also, the fact that Cloudflare can kick KF off the internet suggests that it has something like monopoly power over internet access. If you have monopoly power, or near monopoly power, over something as essential as internet access, I think you kinda have to act as a common carrier, and if you don't voluntarily, Congress should make you.

I'll add that some of the stories about what KF has done do seem pretty awful. The fix for it probably is to beef up/create laws relating to internet harassment, esp. where it spills over into real life, even if I do have some concern about whether Congress could competently draft such laws and/or whether such laws would be enforced in non-troubling ways.

Expand full comment
Comment deleted
Expand full comment

If that's true, I'm less concerned. I was a little fuzzy on what exactly Cloudflare was doing, but it is being reported as if KF was somehow kicked off the internet by this.

Expand full comment
deleted Sep 6, 2022 · edited Sep 6, 2022
Comment deleted
Expand full comment

The fact that we have no effective network-level control over DDoS is a whole other problem. It shouldn’t be the case that every site exists subject to the whims of either literal criminals or those willing to employ them (DDoS attacks are illegal and use illegally-hacked computers), or to a handful of large DDoS mitigation services. The assumption that the latter will always be on “our” side is quite problematic, even if right now one thinks that is the case (which I don’t).

Expand full comment

Thanks for this explanation, very helpful.

Expand full comment


I’m not especially interested in defending the “important work” of getting KF kicked off Cloudflare, but there IS a clear line that can, currently, distinguish KF from, e.g., Truth Social, namely that forum’s propensity to dox people -- actual doxxing with like home addresses and social security numbers. I’m not sure why Freddie didn’t include this, but Keffals started pushing on this a few weeks ago after she was swatted, which swatting originated from KF. They sent her local PD an email (naturally, they signed it with her deadname) purporting to be from her, saying that she was holding her mother hostage and going to kill her. She got woken up with an AR in her face. I’m not a lawyer, but that sounds like it probably involves some illegal activity -- filing a false report or some such at the least. I’m under the impression there have been a good number of swattings originating from KF, and certainly hundreds of people (in particular a bunch of trans women of any minor prominence online) have been doxxed in the real sense of that word. I’m not an expert though so I can’t point to a source on that tbf.

Again, my point is just to say there is a distinguishing line between KF and some of these other sites. I remain convinced that KF will pop up somewhere else soon enough, just as 8kun survives.

Expand full comment

(Rereading the piece, Freddie spends like 2, maybe 4 sentences on the particulars here, so that would be “why he didn’t include” the particulars.)

Expand full comment

I think it's interesting that this site allied Marjorie Taylor Greene with trans-activists. That Venn diagram intersection has to be the tiniest ever. (evidently MTG was swatted twice - seems like we need some serious SWAT response reform).

Expand full comment

"address the problem ['the existence of extremist feelings within the populace'] at its foundations by presenting more appealing alternatives"

The word "appealing" is being made to bear more weight here than it is capable of. The problem with "extremist" views is not that they aren't "appealing." Such views are clearly powerfully appealing to some people. The problem (if there is a problem) with such views is that they are both false and dangerous - and all the more dangerous for being superficially "appealing."

As for "presenting an alternative" there is certainly no shortage of institutions and platforms delivering the official, mainstream narrative. But for some reason the "extremists" don't find that narrative appealing. They would rather have an alternative.

Expand full comment