In a 2024 ruling by Brazil’s Supreme Federal Court, Justice Alexandre de Moraes ordered the suspension of the X social media platform. In his long-running confrontation with X’s owner Elon Musk over compliance, de Moraes argued that repeated failures to obey court orders represented a direct challenge to Brazil’s legal authority.
Government interventions like this one, along with similar measures in the European Union, Australia, India, Singapore, and elsewhere, are responses to growing public pressure, drawing courts and policymakers deeper into efforts to address misinformation on social media platforms.
Around the world, governments are grappling with a question that seems deceptively simple: How should those platforms be regulated? At the heart of the debate is a fundamental tradeoff—whether to rely on an imperfect “marketplace of ideas” to sort truth from falsehood, or to empower governments to step in and establish guardrails.
“The marketplace of ideas is certainly flawed,” said UVA Professor of Law Kevin Cope. “Too often, it produces vitriol, gaslighting, and other misinformation rather than truth or good-faith discourse.”
Government regulation also has its downsides, including the risk of censorship and incentives for platforms to remove lawful speech in order to avoid liability.
“In my research, misinformation in Brazil is deeply tied to longstanding inequalities, weak institutional trust, and the uneven presence of the state,” said UVA media studies professor David Nemer, a faculty co-lead of the Digital Technology for Democracy Lab. “People are not simply consuming falsehoods. They are navigating information ecosystems shaped by a justified distrust of institutions that have historically failed them.”
Critics warn that even well-intentioned policies can drift toward censorship if safeguards are weak or enforcement becomes politicized.
This dynamic is not limited to autocracies or fragile democracies. Established democracies also face similar tensions when crafting and enforcing speech regulations. Germany’s Network Enforcement Act, for example, was presented as a response to hate speech, extremism, and “fake news.”
“The [Network Enforcement Act] forces platforms to remove content unlawful under German criminal law, which critics argue creates incentives for over-removal,” said Cope. “Over the last several years, democracies and autocracies around the world have enacted internet misinformation laws, nominally for public safety, but at their worst, they give the state power to decide what’s true in real time, especially about the government itself.”
Public opinion globally reflects this tension. “In Brazil, misinformation often spreads through highly intimate, encrypted platforms like WhatsApp and Telegram,” said Nemer. “It circulates through family networks and neighborhood ties, making it far more difficult to track and regulate, because it is embedded in trusted relationships rather than public-facing posts.”
In the United States, 29 percent of adults use WhatsApp, according to Pew Research, compared with about 90 percent in Brazil. This means Americans are more likely to encounter information on large public platforms rather than in private family chats. “But that does not make it less dangerous,” Nemer said.
The underlying issue, in large part, is not just technological but democratic. “Misinformation is not an isolated aberration of the internet. It is a symptom of broader democratic fragility,” Nemer added.
That raises a central question about the appropriate level of government intervention. “The difficulty with laws banning misinformation is that ‘misinformation’ is rarely a consensus legal term,” said Cope. “And depending on who’s interpreting it, it can include accidental error, opinion, satire, contested science, or dissent from official narratives.”
This tension between the need to act and the risk of overreach shapes debates over what responsible intervention looks like.
“It helps when policies are precise, transparent, and centered on platform accountability rather than simply punishing users,” Nemer said. “It is especially important when governments require more transparency in recommendation systems, political advertising, and coordinated influence operations.”
At the same time, there are real risks. “Intervention backfires when it becomes too broad, too opaque, or too concentrated in the hands of the state without democratic safeguards,” Nemer cautioned, noting a risk of increased distrust. “If regulation is perceived as censorship, it can reinforce narratives of persecution and illegitimacy that authoritarian actors thrive on.”
This ambiguity helps explain why misinformation cannot be treated as a purely technical or messaging problem.
“There is often an assumption that misinformation is just a content problem,” Nemer said. “But it is tied to deeper social fractures—inequality, political exclusion, religious authority, and economic precarity.”
There is also a powerful economic dimension at play. “Platforms are designed to reward outrage and emotional intensity,” Nemer added. “Misinformation is not just a problem of speech. It is also a business model.”
For policymakers, this raises a difficult challenge: how to address misinformation without undermining democratic values like free expression and open debate. According to Nemer, one of the biggest blind spots is a misunderstanding of how people actually encounter information.
“People do not engage with information as isolated individuals rationally evaluating detached facts,” he said. “They encounter it through relationships, routines, and identities. It’s not simply that harmful content exists but that platforms actively organize visibility and circulation in ways that privilege emotionally charged material.”
Any lasting solution will need to account for that reality, recognizing that misinformation is not just about content, but about the systems and incentives that shape how messages are understood and shared.