The indictment of 13 Russian nationals for meddling in the 2016 U.S. election has reignited the debate on whether Facebook and other social-media giants are a threat to democracy. While the electoral impact of Russian trolls who amplified anti–Hillary Clinton memes can be disputed, there’s a strong argument that social media has had a toxic effect on American society, driving polarization and creating paranoia. The wildfire spread of conspiracy theories that grieving high school students in Parkland, Florida, are “crisis actors” is only the latest example of how social media can poison public discourse.
The corrosive effect of social media on democratic life has led both French President Emmanuel Macron and Canadian Prime Minister Justin Trudeau to make the same threat to Facebook: self-regulate or be regulated. Last month, Macron proposed a new law to accomplish as much. “When fake news are spread, it will be possible to go to a judge … and if appropriate, have content taken down, user accounts deleted and ultimately websites blocked,” Macron said. “Platforms will have more transparency obligations regarding sponsored content to make public the identity of sponsors and of those who control them, but also limits on the amounts that can be used to sponsor this content.”
Macron’s idea is promising, but falls short. If fake news truly poses a crisis for democracy, then it calls for a radical response. Instead of merely requiring greater transparency of social-media platforms and empowering the courts to ban users and websites (the latter a slow, time-consuming, and ultimately Sisyphean remedy), perhaps governments should outright ban Facebook and other platforms ahead of elections.
A model for this already exists. Many countries have election-silence laws, which limit or prohibit political campaigning for periods ranging from election day itself to as long as three days before the vote. What if these laws were applied to social media? What if you weren’t allowed to post anything political on Facebook in the two weeks before an election?
In 2017, Facebook experimented with flagging fake-news items, but abandoned the idea in December. “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs—the opposite effect to what we intended,” according to Product Manager Tessa Lyons. Even YouTube’s decision to remove the video smearing Marjory Stoneman Douglas High School student David Hogg has a downside because, CNN’s Brian Stelter argues, “the notion that a giant corporation took down the video plays right into the hands of conspiracy-mongers who say they’re being censored.”
Facebook, Twitter, and YouTube share the same core business model: monetizing content that their users provide for free. The more users they have, the more content they publish, and the more money they make, mainly by selling ads against that content and selling data about their users. This makes the companies structurally disinclined to self-regulate. As former Facebook employee Sandy Parakilas wrote in The New York Times: “What I saw from the inside was a company that prioritized data collection from its users over protecting them from abuse … Lawmakers shouldn’t allow Facebook to regulate itself. Because it won’t.”
In effect, the social-media behemoths have all the power of legacy media establishments, but few of the responsibilities. Vox’s Matthew Yglesias has argued that Facebook ought to act as a “responsible steward of the news ecology.” But the company has already made clear that it will not. In 2016, for instance, it replaced the human editors of its Trending Topics module with an algorithm, and fake news promptly began trending.
Since Facebook refuses to self-regulate when it comes to fake news, the only proper response is government regulation.
Applying election-silence laws to social media, where fake news thrives, would in many countries simply be an extension of a pre-existing legal framework. Macron’s campaign was hacked shortly before the presidential election last year, but, thanks to France’s election-silence law, the breach didn’t have anywhere near the impact of the hack of the Democratic National Committee during the 2016 election in the U.S. As CNBC reports:
In France Saturday, there is near silence about 9 gigabytes of leaked documents from the campaign of presidential candidate Emmanuel Macron.
The collection of emails, spending spreadsheets, and more, appeared on the internet Friday night. Yet Saturday morning, there is absolutely nothing on French TV or radio, and very little on the websites of major newspapers.
This is due to a French law that says the day before an election should be a day of reflection. Starting at midnight Saturday and continuing until the polls close Sunday, campaigning is prohibited along with any kind of speech meant to influence the race. Hence the silence.
Here’s what an election-silence law for social media might look like: Facebook would be required to prohibit users from posting any links to stories about the candidates in the two weeks before an election.
This would still allow fake news to flourish for most of the campaign, of course, but candidates would have enough time to respond to false stories that spread weeks or months before an election. The challenge is to prevent last-minute lies that could sway enough voters to swing the result, which in the United States can be a matter of only thousands or even hundreds of votes.
This also would be technically difficult. But Facebook already has the means to screen out material it doesn’t want users to see. If you post a picture of a nipple on Facebook, it’ll get taken down very quickly. Given Facebook’s algorithmic expertise and its thousands of human moderators, it shouldn’t be too difficult to block any content featuring the name of a well-known political figure.
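To make the mechanics concrete, here is a deliberately naive sketch, in Python, of the kind of name-based screening such a rule would require. The candidate names, dates, function name, and matching logic are all placeholders invented for illustration; nothing here describes Facebook’s actual moderation systems.

```python
from datetime import date

# Hypothetical example only: a naive name-based filter illustrating the kind of
# screening an election-silence rule might require. The candidate list, dates,
# and matching logic are assumptions, not any platform's real pipeline.
BLACKOUT_START = date(2020, 10, 20)   # two weeks before a November 3 election
ELECTION_DAY = date(2020, 11, 3)
CANDIDATE_NAMES = {"candidate a", "candidate b"}  # placeholder names

def violates_silence_rule(post_text: str, posted_on: date) -> bool:
    """Return True if a post mentions a candidate during the blackout window."""
    if not (BLACKOUT_START <= posted_on <= ELECTION_DAY):
        return False
    text = post_text.lower()
    return any(name in text for name in CANDIDATE_NAMES)

# A post mentioning "Candidate A" on October 25 would be flagged.
print(violates_silence_rule("Breaking news about Candidate A!", date(2020, 10, 25)))
```

Matching names in text is the trivial part, of course; deciding what counts as political content, catching deliberate misspellings, and handling appeals is where the real engineering and policy work would lie, which is why the proposal leans on Facebook’s existing moderation infrastructure.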
But what if there’s news during those final two weeks that people need to know? What if the Democratic nominee is revealed to be an undocumented immigrant, or President Donald Trump is indicted for colluding with Russia? Don’t Facebook users deserve to know about this?
Any such breaking news will be widely covered by the existing media: television, radio, and newspapers. These outlets already give the public the news it needs, and they are much better at screening out lies, whether because of broadcast regulations (in the case of television and radio) or traditions of editorial responsibility (in the case of most print media). A social-media election-silence law would have the karmic side benefit of driving Facebook users back to these news organizations, which have lost untold business to social-media platforms.
There are other holes to poke in my proposal, like the fact that tens of millions of Americans vote early for president rather than on election day. And what to do about Twitter and other social-media sites? But the biggest problem is a legal one: U.S. courts are more protective of free speech than European courts, and in recent years they have been hostile to election regulations, notably in the Citizens United decision of 2010. There would also probably be no shortage of public outrage, with some claiming that the government is trying to censor certain political beliefs. (After Macron’s proposal, for instance, the far-right nationalist Marine Le Pen tweeted, “Is France still a democracy if it muzzles its citizens?”)
Still, this is a debate worth having, even if only to move the Overton window by a smidge. Unregulated media has been a powerful force in America since 1987, when the Federal Communications Commission under Ronald Reagan rescinded the “fairness doctrine,” which required broadcasters to present balanced viewpoints. That opened the door for Fox News, Rush Limbaugh, and the asymmetric polarization of today’s politics. An election-silence law for social media wouldn’t solve this problem, but debating one could steer the conversation in a constructive direction. America needs a fairness doctrine for the digital age, one concerned not with partisan balance but with separating fact from fiction.