The Twitter Files and the Sinister Label of "Misinformation"
Various thoughts on the revelations
On learning what we “already knew”
Despite the virality of the Twitter Files all over the platform, you’ve probably heard from one media personality or another that “we already knew this”. In fact, you might have even heard it from me, when the Twitter Files #1 was released.
Why, then, is something we “already knew” making the news? One possibility is that the outraged weren’t paying attention–they did not read the various pieces over the years that alluded to Twitter’s shady practices, never went through Twitter’s terms of service, or were trusting enough to believe the public denials by Twitter executives. Now, they are being spoon-fed the information that was really always there, and imagining they are learning something new.
This perspective isn’t entirely wrong, but it is missing the point. Yes, “we already knew this”, much in the same way we “already knew” the government was spying on us prior to the NSA revelations. There is an enormous difference between presuming, vaguely, and knowing, precisely. It is true that Twitter had indicated in its terms of service and elsewhere that they have the right to limit visibility, but the company also widely made contradictory and misleading statements. For example, Twitter executives explicitly stated that they do not “shadowban” and “certainly don’t shadow ban based on political viewpoints or ideology”, but the Twitter Files #2 revealed that that is exactly what they were doing—and not just to anonymous trolls. Stanford’s Jay Bhattacharya, for example, was placed on a “trends blacklist” the day he joined the site, presumably due to his association with an anti-lockdown open letter.
There is a wide gulf between that which “everyone knows” and that which “everyone knows that everyone knows”.
There are many common beliefs that are, for various reasons, unsayable. Perhaps they are unkind, perhaps they are uncouth, perhaps there is political pressure to deny them. Many of us who delve into contentious topics have experienced strange behavior on the platform. Privately, we discussed the suppression, theorizing about the mechanisms and about which tweet might have triggered it. Few would speak about this experience publicly, however, fearing (justifiably) that it would only make them appear paranoid.
In the famed parable, “everyone knew” that the emperor was wearing no clothes. But such private knowledge is nearly useless in political discourse—what is unsayable is also unactionable. No intellectual who cares about their social status can refer to the implications of a nudist emperor, no theorist can devise ways to adequately clothe future royalty, no politician can regulate against indecent exposure. When the child said it out loud, however, the spell was broken. Everyone knew that everyone else also knew, and that is the point at which private knowledge became public knowledge. This will be the legacy of the Twitter Files.
It is now public knowledge that Twitter was/is actively suppressing accounts, for reasons that are often ideological. The question now becomes: how should we treat this information?
Here is a question: Why call something “misinformation” when you can call it a lie?
Or, perhaps it isn’t a lie, exactly, but definitely a falsehood. So why not use that instead?
The answer is that “lie” and “falsehood” are inflexible—either something is true, or it isn’t. Misinformation, however, doesn’t have to be a lie or a falsehood. The truth can be misinformation too—provided it is “misleading”. In other words, provided it might lead one to a conclusion that some authority deems wrong (factually, morally, or in some other sense).
For example, for a time the hypothesis that COVID-19 was accidentally leaked from a lab was widely decried as a crackpot conspiracy theory. As we already “knew” the theory was crazy, no debate was necessary—and indeed, could be harmful as it might “mislead” people into a bad (and politically difficult) conclusion. Therefore, social media companies began actively suppressing discussion of the issue on their platforms.
Luckily, they were not entirely effective—the debate continued despite initial setbacks, and now what we “know” about COVID origins is in dispute. The lab leak hypothesis is no longer considered off the table, even by the federal government. If, in the end, the lab leak turns out to be the real story, what would that say about the censors? Were they protecting us against “misinfo”, or were they purveyors of “misinfo” themselves?
This is the sinister circularity of “misinformation”: If any idea that might lead others into disagreeing with orthodoxy is by definition a misleading idea, and thus “misinformation” and subject to censorship, what happens when consensus is wrong? How can we even begin the process of correcting false conclusions?
More broadly, the concept reveals the attitude the managerial class has towards the people and our capacities for reason. They have expanded the kinds of information deemed a public hazard—first it was outright lies and propaganda, now it can be even the truth (when not properly sifted, curated, and explained by the right people with the right ideas). It is not enough merely to fact-check claims; they must also be “contextualized”, lest someone come to a “bad” conclusion.
Is it any wonder that trust in our experts and authoritative bodies is falling off a cliff?
The problem is the system, not a system.
It is good that Elon is “freeing the bird”—I am optimistic that in the near future, Twitter will better represent discourse as it truly is. But the problems that plagued Twitter were not unique to it—every organization is susceptible to the same authoritarian impulses. Twitter itself had aspired to be the “free-speech wing of the free-speech party”, but it fell far short of that goal (as has Facebook, and as will any future social media platform). It is a mistake to view the pattern as a series of personal failings—the problem is structural, and so must be the solution.
An eccentric billionaire is no match against a thousand captured institutions—what we need isn’t the right guy at the top, but the right set of incentives across the board. In the case of social media platforms, that means thinking about how we might encourage the protection of user expression, without adding regulation that hampers the autonomy of the company.
For now, we should at least insist on transparency: the surreptitious suppression social media companies are engaging in is more dangerous than explicit, defined censorship. What we think others are thinking has an enormous impact on how we behave ourselves, which in itself “forms” the opinions of others. We deserve to know, explicitly, when these platforms are expressing the “voice of the people”, and when they are expressing the voice of the company.
It honestly never occurred to me that when the child points out the emperor has no clothes, everyone else in the crowd would immediately call his nakedness a nothingburger, because "duh everyone already knew that". I doubt Hans Christian Andersen could have seen that one coming either.
As for Musk moving the needle, I think Twitter was a significant accelerant of institutional capture. There's a reason Bari Weiss said when she quit the New York Times that Twitter had become the unofficial editor-in-chief of the newsroom. Twitter was also where cancel culture mobs generally did their crowdsourcing/recruiting. Take woke Twitter out of the equation and the pressure could very well ease on other institutions.
Speaking of lies or misinformation, one of Sam Harris’s best arguments, I think, was “why hasn’t 4chan become the public square?” The fascinating thing for me is that First Amendment absolutism doesn’t solve the issue. I have no answer in my pocket, beyond the conviction that no opinion should be censored, nor should people be persecuted for holding one. But decisions still need to be made, and principles clarified, now that Twitter has decided to ban Alex Jones and Kanye West. (Sorry if I moved away from Sarah’s post.)