Numerous false and biased pieces of content circulate on social media. But top-down fact-checkers won’t solve the problem. The better alternative is collective intelligence.
Traditional media have been in a crisis of trust for some time. More and more readers and viewers are turning away from them. However, the reaction to this is rarely self-critical – instead, the established media often deny any responsibility for this development.
This lack of self-reflection was exemplified in the recent campaign for the SRG initiative. Posters portrayed, among others, X CEO Elon Musk as a scapegoat – a symbol of the “dangerous” misinformation circulating on social media. The message is clear: on one side, the supposed fake news generators of social media; on the other, the established media – especially the SRG – as reliable and trustworthy sources of information.
The debate surrounding fake news has intensified significantly since 2016. The election of Donald Trump and Brexit acted as a wake-up call for the political and media establishment in the West. For years, this political-media complex claimed the authority to interpret key political issues and shaped the narratives in public discourse—for example, on climate change, migration, and the euro. But this authority began to crumble. Consequently, the focus of the debate has shifted in both academic and political circles. Journalists, politicians, and intellectuals are increasingly urging social media platforms to actively intervene in public discourse through comprehensive content moderation.
Regina Rini, Professor of Philosophy at York University in Toronto, is calling on Facebook to forward potentially misleading content to “independent fact-checkers” and to label such posts as “disputed.” She further recommends the use of “reputation scores.” These scores would track how frequently users share content that later proves to be false. The ratings would not be publicly accessible to avoid stigmatization, but could be used algorithmically to selectively limit the reach of sources deemed unreliable.
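To make Rini's proposal concrete, here is a minimal sketch of how such a reputation score might work. This is a hypothetical illustration, not any platform's actual implementation; the class name, thresholds, and the halving of reach are all invented for demonstration purposes.

```python
# Hypothetical sketch of Rini-style "reputation scores".
# All names and thresholds are invented for illustration;
# no real platform implements exactly this.

class ReputationTracker:
    def __init__(self, flag_threshold=0.3, min_shares=10):
        self.flag_threshold = flag_threshold  # fraction of flagged shares that triggers damping
        self.min_shares = min_shares          # ignore users with too little history
        self.shares = {}                      # user -> (total shares, shares later flagged false)

    def record_share(self, user, later_flagged_false):
        """Record one shared post and whether it was later flagged as false."""
        total, flagged = self.shares.get(user, (0, 0))
        self.shares[user] = (total + 1, flagged + (1 if later_flagged_false else 0))

    def score(self, user):
        """Fraction of a user's shares later flagged false (0.0 with no history)."""
        total, flagged = self.shares.get(user, (0, 0))
        return flagged / total if total else 0.0

    def reach_multiplier(self, user):
        """Algorithmic damping: halve the reach of users above the threshold."""
        total, _ = self.shares.get(user, (0, 0))
        if total < self.min_shares:
            return 1.0  # not enough history to judge
        return 0.5 if self.score(user) > self.flag_threshold else 1.0
```

Note that the score stays hidden from other users, as Rini suggests, yet still steers distribution: a user whose flagged share exceeds the threshold silently loses half their reach.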
In Rini's view, such measures are necessary because people have limited cognitive and temporal resources. It is therefore individually rational to weigh information heuristically, that is, using mental shortcuts, and to give the benefit of the doubt to those with similar political views. What appears rational at the individual level, however, becomes a dysfunction at the collective level: the collective search for truth is undermined, and with it the foundations of democracy.
Added to this is the logic of social media. It often remains unclear whether sharing a post, for example through a repost, implies agreement with its content. This ambiguity can be exploited opportunistically, making responsibility for the truthfulness of information increasingly diffuse. From Rini’s perspective, appeals to media literacy and critical thinking fall short under these conditions. No one can personally verify every single piece of content on social media. Therefore, she calls for the establishment of an “infrastructure of accountability” on digital platforms: False content should have consequences – for example, through fact-checking and reputation scores.
Arbiter of Truth
What appears at first glance to be plausible and harmless in the arguments of Regina Rini and other proponents of combating fake news has far-reaching ethical consequences. Who actually determines what counts as fact? Institutional measures against misinformation presuppose that it is possible to objectively and impartially define what is false. Only such a premise, if one follows Rini’s line of reasoning, could justify further regulatory interventions. Which institution can assume the role of a neutral arbiter of truth? Every institution that makes this claim creates a powerful instrument of political influence.
Fact-checkers often present themselves as scientifically sound and neutral. However, a closer look at their methods reveals a less than convincing picture. Communication researchers at the University of Madrid found in a study that the corresponding verification protocols are frequently vaguely formulated and applied inconsistently. According to the study, fact-checkers evaluate individual statements on social media less based on a clearly standardized procedure and more according to journalistic judgment. The search for unequivocally objective arbiters thus proves to be a structural problem.
“Fact-checkers like to present themselves as scientifically sound and neutral. However, a closer look at their methods reveals a less than convincing picture.”
Michael Shellenberger, who co-authored the “Twitter Files,” demonstrated, using the US as an example, the extent of political influence on fact-checking organizations. There, a far-reaching network of government agencies, non-governmental organizations, and academic institutions has emerged, claiming the authority to define the term “disinformation.” Shellenberger refers to this as the “censorship-industrial complex” – a reference to Dwight Eisenhower, who warned of the “military-industrial complex” in his farewell address as US president. Fact-checking serves not only to educate but also to safeguard political interests against the uncertainties of democratic decision-making processes.
The road to censorship is short.
The idea that chosen organizations determine what is right or wrong deeply infringes upon the fundamental right to freedom of expression. The public sphere is the heart of every democracy. Freedom of speech also protects false, disturbing, and unpopular views. This comprehensive protection is precisely what is meant to ensure that truth can prevail in the open competition of ideas. Since all fact-checking is inevitably politically influenced – in almost all cases, facts in the media are interpreted, while other facts are omitted or simply unknown – interventions against “fake news” carry the risk of suppressing alternative interpretations and thus gradually restricting freedom of expression. Accordingly, established democracies set high hurdles for infringements on freedom of speech.
The example of France shows how quickly the fight against disinformation can morph into authoritarian legislation. Since 2018, laws have been passed there that aim at far-reaching regulation of online communication. This represents a massive intervention by the government in public discourse, justified by the paternalistic assumption that the state must guarantee the “cognitive safety” of its citizens.
Meanwhile, a “censorship-industrial complex” has also emerged in France. Here, too, a thicket of state agencies, the judiciary, and non-governmental organizations has grown, which put pressure on social media platforms using political and legal means.
The “Twitter Files” offer insight into the workings of this censorship architecture. Using Twitter (now X) as an example, they expose the platform’s internal processes. It is documented that government officials repeatedly pressured social media companies to suppress undesirable content and users. For instance, high-ranking officials in the Biden administration pushed for the censorship of certain content related to Covid-19.
Human autonomy is being undermined.
Those who call for institutional measures against fake news justify this with the aim of protecting citizens from manipulative influences. However, when institutions pre-select the flow of information, they tacitly assume that citizens are incapable of assessing the credibility of statements on the internet themselves. Their capacity for judgment is denied. In this way, citizens are – intentionally or unintentionally – infantilized.
“Intellectual freedom arises only where there is access to a multitude of perspectives.”
Furthermore, the power to pre-filter information allows institutions to significantly influence decision-making processes, as the aforementioned examples from America and France demonstrate. Independent judgment is then only possible to a limited extent. Underlying this is a worldview in which the public is susceptible to manipulation and therefore requires a kind of guardianship by selected elites. In short: expertocracy instead of democracy.
The ideal of liberal democracies is the free formation of opinions by responsible citizens. This means that citizens are allowed to examine their own information, weigh different viewpoints against each other, and even risk making incorrect judgments. Intellectual freedom only arises where there is access to a multitude of perspectives.
Institutional filtering mechanisms counteract this principle. They replace independent evaluation with the targeted selection and compilation of content. This eliminates the basis for independent judgment: individuals can only make decisions based on pre-selected information.
A cautionary example is China’s social credit system. It demonstrates how data-driven systems can be used to deliberately manipulate citizens’ behavior. Conformity is rewarded, deviation punished. As soon as the state begins to control public discourse—ostensibly to protect its citizens—it opens the door to comprehensive societal control. Undermining people’s autonomy weakens their capacity for independent thought.
Collective intelligence is the key.
What is the alternative to a top-down approach to combating fake news? Numerous pieces of content circulate on social media that turn out to be erroneous. However, the path of censorship that fact-checking organizations ultimately take cannot be the right answer.
X, much criticized by mainstream media, offers a different approach. Instead of operating top-down and centrally like traditional fact-checking systems, it relies on the opposite: bottom-up and decentralized. Specifically, it uses “Community Notes,” an open-source fact-checking system that aims to function without censorship or one-sided political influence. Potentially misleading posts are not deleted, but rather supplemented with additional context. Users can write explanatory notes on a post, which are then evaluated by a so-called “bridging algorithm.”
This means that a note only becomes visible to everyone once it has been rated as helpful by users from different ideological camps – users who have often disagreed in previous ratings. This consensus-based approach aims to reduce one-sided political bias and to enable users to form their own informed opinions without being patronized.
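The bridging idea can be illustrated with a deliberately simplified sketch. The actual Community Notes ranking uses matrix factorization over the full rating history rather than explicit camp labels; the toy rule below, with its assumed camp tags and threshold, only captures the core requirement that a note needs “helpful” ratings from raters on more than one side of a divide.

```python
# Simplified illustration of a "bridging" visibility rule.
# The real Community Notes algorithm is matrix-factorization based;
# camp labels and the threshold here are assumptions for illustration.

def note_is_visible(ratings, min_per_camp=2):
    """ratings: list of (camp, helpful) tuples, e.g. ("left", True).

    The note is shown only if raters from at least two camps took part
    and every participating camp produced at least `min_per_camp`
    "helpful" ratings.
    """
    helpful_by_camp = {}
    for camp, helpful in ratings:
        helpful_by_camp.setdefault(camp, 0)
        if helpful:
            helpful_by_camp[camp] += 1
    if len(helpful_by_camp) < 2:
        return False  # no cross-camp consensus is possible
    return all(count >= min_per_camp for count in helpful_by_camp.values())
```

A note rated helpful only within one camp, or rated helpful by one camp and unhelpful by the other, stays hidden; only cross-camp agreement makes it visible.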
The effectiveness of this approach became evident in early 2026, when the US seized Venezuelan President Nicolás Maduro and brought him to New York. Euphoria quickly spread across social media, including X. Numerous videos purportedly showed crowds in Venezuela cheering in the streets, celebrating the end of Maduro’s tyranny. However, many of these recordings were misleading or simply false. Again and again, old or out-of-context videos were presented as current events – for example, footage from earlier protests or even of entirely different events abroad. This is precisely where Community Notes came in: users added context to such posts, pointing out that they did not depict current events in Venezuela. Misinformation was thus visibly corrected – not by deletion, but by contextualization.
That is precisely the key. The American podcaster Joe Rogan once put it succinctly: “The best way to fight false information is with accurate information.” And accurate information arises where as many people as possible collectively search for the truth and contribute their perspectives. The real strength lies in this collective debate – the power of collective intelligence.