Trump’s Been Unplugged. Now What?

For around a decade, a meme has circulated on social media depicting a youngish white man in a shirt and tie, frantically gesturing toward a wall covered in paper ephemera—envelopes, handwritten notes—connected by red string. The image, a still from a 2008 episode of “It’s Always Sunny in Philadelphia,” is often used as a joke to imply the presence of conspiracy thinking; it’s popular on Twitter, where the paranoid style thrives. In a Twitter timeline, information is abundant but almost always incomplete, conflict is incentivized, context is flattened, and built-in speed offers a sense of momentum. It seems fitting that a common storytelling form is a sequence of linked tweets known as a thread: the service is electric with the sensation, if not always the reality, of connecting the dots.

Last week, on Wednesday, January 6th, a mob of Trump supporters descended on the Capitol. Some carried assault weapons and zip ties; all claimed that the 2020 Presidential election had been stolen—a conspiracy theory of the highest order. The President had stoked and amplified this delusion via Twitter, and, even after the mob had smashed its way into the Capitol, he tweeted encouragement, calling the rioters “great patriots” and telling them, in a video, “We love you. You’re very special.” Twitter blocked a few of these tweets and, by Friday, had permanently suspended his personal Twitter account, @realDonaldTrump. The President’s tweeting was “highly likely to encourage and inspire people to replicate the criminal acts at the U.S. Capitol,” the company said in a blog post. It noted that plans for additional violence—including a “proposed secondary attack” on the Capitol and various state capitols—were already in circulation on the platform.

Although Twitter has been an undeniable force throughout the Trump Presidency—a vehicle for policy announcements, personal fury, targeted harassment, and clumsy winks to an eager base—most Americans don’t use it. According to survey data, only around twenty per cent of American adults have accounts, and just ten per cent of Twitter users are responsible for eighty per cent of its content. In many ways, it’s a niche platform: two days before the Capitol riots, a trending topic on the site concerned the ethically correct way to teach a child to open a can of beans. Still, Trump’s tweets, reproduced on television and reprinted in newspapers, are inextricable from his identity as a politician. His suspension from Twitter, moreover, has turned out to be just one in a series of blunt actions taken against him by tech companies. Following a commitment to crack down on claims of voter fraud, YouTube removed a video of Trump addressing the supporters who had gathered last Wednesday at the Capitol; it has since suspended Trump’s channel, for at least a week. Through an update on his personal Facebook page—an odd stream of corporate announcements, family photographs, and coolly impersonal personal musings—Mark Zuckerberg informed the public that Trump’s accounts would be suspended until at least after the Inauguration. Facebook has also committed to removing all instances of the phrase “stop the steal,” which has been taken up by conspiracists challenging the results of the Presidential election, from its service. Both YouTube and Facebook, where extremist content flourishes, have more than three times Twitter’s audience among American adults.

By Saturday, most major tech companies had announced some form of action in regard to Trump. The President’s accounts were suspended on the streaming platform Twitch, and on Snapchat, a photo-sharing app. Shopify, an e-commerce platform, terminated two online stores selling Trump merchandise, citing the President’s endorsement of last Wednesday’s violence as a violation of its terms of service. PayPal shut down an account that was fund-raising for participants of the Capitol riot. Google and Apple removed Parler, a Twitter alternative used by many right-wing extremists, from their respective app stores, making new sign-ups nearly impossible. Then Amazon Web Services—a cloud-infrastructure system that provides essential scaffolding for companies and organizations such as Netflix, Slack, NASA, and the C.I.A.—suspended Parler’s account, rendering the service inoperable.

These actions immediately activated conspiratorial interpretations. Was this a coördinated hit from Big Tech? How long had it been in the works? Did tech companies, known for their surveillance capacities, have intelligence about the future that the public did not? In all likelihood, the real story doesn’t involve a wall of crisscrossing red strings—just a red line, freshly drawn. It seemed that tech corporations were motivated by the violence, proximity, and unequivocal symbolism of the attack—and that the response, prompt and decisive, was a spontaneous, context-based reaction to threats that had been simmering on their platforms for years. The action was compensatory rather than cumulative—a way of curtailing, if not preventing, further harm. It was compounded by the cascade effect: each suspension or ban contributed to the image of Trump as a pariah, and put pressure on other companies to follow suit, which in turn diminished the repercussions those companies would likely face for their decisions. Last week may simply have been a breaking point, a moment at which the potential damage to American democracy, security, and business had become impossible to ignore.

The vacuum created by Trump’s absence on social media is now filled with questions and counterfactuals. The conversation is consistent only in its uncertainty. Why did things have to reach a point of extremity before the tech companies took action? What would’ve happened if they hadn’t acted? Are these decisions durable, and will they be repeated? Was this a turning point? Will it change the Internet, and if so, how?

Still, the deplatforming of an American President marks a turn in the relationship between the tech industry and the public. It adds a new layer to the ongoing discourse about content moderation on social networks—a conversation which, especially in recent years, has been dominated by fruitless, misdirected, and disingenuous debates over free speech and censorship. In the United States, online speech is governed by Section 230 of the Communications Decency Act, a piece of legislation passed in 1996 that grants Internet companies immunity from liability for user-generated content. Most public argument about moderation elides the fact that Section 230 was intended to encourage tech companies to cull and restrict content. But moderation is complex and costly, and it is inherently political. Most companies have developed policies that are reactive rather than proactive. Many of the largest digital platforms have terms-of-service agreements that are constantly evolving and content policies that are enforced unevenly and in self-contradictory ways. Twitter and Facebook are especially infamous for their inconsistency. Even as Trump’s rhetoric intensified—and even as his followers engaged in increasingly alarming and violent behavior—the largest social networks braided together explanations for keeping his accounts active.

The movement to deplatform Trump highlights central, often-overlooked issues within the Section 230 debate, and offers a novel case study. It also raises more questions: What if the platforms had taken content moderation more seriously from their inception? What if they had operated under different business models, with different incentives? What if they had priorities other than scaling rapidly and monetizing engagement? What if the social-media and ad-tech industries had been regulated all this time, or Section 230 had been thoughtfully and meaningfully amended?
