Discord recently announced an update to its terms of service that prohibits “false or misleading health information that is likely to result in harm.”
But this policy assumes people do not have agency, and it stifles debate on important issues.
This is disappointing in part because Discord has largely remained decentralized, allowing users to form and regulate their own private servers, and has mostly avoided meddling in what users can and cannot say, apart from a few broad, less intrusive rules.
I’m in charge of moderating Out of Frame’s Discord server, and these rules put us in an awkward position. To comply and keep Discord from banning our server, we must play the role of Supreme Court justices, interpreting passages such as:
“Discord users also may not post or promote content that attempts to sway opinion through the use of sensationalized, alarmist, or hyperbolic language, or any content that repeats widely-debunked health claims, unsubstantiated rumors, or conspiratorial narratives.”
and
“We allow the sharing of personal health experiences; opinions and commentary (so long as such views are based in fact and will not lead to harm); good-faith discussions about medical science and research […]”
These rules include numerous terms that are open to interpretation (conspiratorial, good-faith, alarmist), terms that would be too ambiguous to enforce fairly even if they didn’t require moderators to be experts in the current scientific consensus on any particular medical issue. Worse, the rules require us to know the unknowable. No one can be galaxy-brained enough to predict the future and calculate all the possible consequences of a piece of information being distributed. Neither users, nor moderators, nor algorithms, nor anything else can know for a fact whether a concept will “cause harm.”
This is an illustration of the fact that central planning is futile because of the complexity of reality. Facts are complicated, unclear, and constantly being discovered. This applies to supply and demand in the economy, as Ludwig von Mises argued in Economic Calculation in the Socialist Commonwealth, and it applies equally to the “marketplace of ideas.”
This approach to speech, under which we must determine whether an idea “causes harm” before we can discuss it, is straight out of the safetyist hell of Demolition Man or The Giver. It would destroy the purpose of conversation if it were applied to every issue. Any social or political idea worth having is bound to harm someone. To advocate against sugar tariffs harms domestic sugar producers who benefit from them. To advocate for tariffs harms everyone who has to buy sugar. The purpose of discussion is to determine what the harms and benefits of ideas are, and whether those harms and benefits are acceptable. Whether something “causes harm” cannot be known in advance.
Discord justifies its policy by saying that the messages it aims to prohibit “can prompt individuals to make unsafe choices.” But this assumes that people do not have agency. Human beings are not automatons. Information doesn’t force people to do things; people choose what to do based on the information they have.
Ideas do not directly cause harm, and to the extent that they cause harm indirectly, it is because of the choices human beings make, not because of the concepts themselves.
Besides “harm,” falseness is the other purported criterion for so-called misinformation. But we aren’t talking about claims that can be labeled false with little ambiguity, like the earth being flat. We’re talking about a novel pandemic that remains a hot topic of debate and about which new facts continue to emerge.
And the techno-authoritarian blob has never enforced these rules in an objective and unbiased way. They ban pro-Russia propaganda while excusing pro-Ukraine propaganda. They censored claims that COVID came from a lab (which turned out to be plausible), while also censoring people who questioned CDC guidance (some of which turned out to be false). Fact-checkers labeled defenses of Kyle Rittenhouse false while the media defamed him. Saying that ivermectin treats COVID gets you banned, but calling it a “horse de-wormer” is allowed (the drug is also prescribed for humans, so this makes about as much sense as ridiculing penicillin as “fungus”). The official sources they use to determine truth have lied or repeated lies about every issue from inflation to Jussie Smollett.
Discord claims “this policy is not intended to be punitive of polarizing or controversial viewpoints.” But this is about as meaningful as writing “no copyright infringement intended” in the description of a pirated film posted to YouTube. Regardless of the company’s claimed intent, the consequence of its policy is to strangle political discussion on one of the most controversial political issues of the current day.
Banning so-called misinformation means, for example, that debates on vaccines can include the pro-vaccine side, but not the other side. That is not much of a debate. But it goes beyond that. Ostensibly, people can still oppose vaccine mandates, but the “misinformation” rules cast a shroud over the debate that permits only certain kinds of opposition. You can oppose mandates on the grounds of bodily autonomy or because Joe Biden imposed them by executive order, but not because you think the vaccine itself is bad. I support the vaccine, but a reason many people oppose mandates is that they believe the vaccines are unsafe or ineffective.
Placing this handicap on only one side of an issue is a naked attempt to restrict political conversation, and contentious issues like these are the most important for us to be free to discuss.
This article was originally published at FEE.org.