For around a decade, a meme has circulated on social media depicting a youngish white man in a shirt and tie, frantically gesturing toward a wall covered in paper ephemera—envelopes, handwritten notes—connected by red string. The image, a still from a 2008 episode of “It’s Always Sunny in Philadelphia,” is often used as a joke to imply the presence of conspiracy thinking; it’s popular on Twitter, where the paranoid style thrives. In a Twitter timeline, information is abundant but almost always incomplete, conflict is incentivized, context is flattened, and built-in speed offers a sense of momentum. It seems fitting that a common storytelling form is a sequence of linked tweets known as a thread: the service is electric with the sensation, if not always the reality, of connecting the dots.
Last week, on Wednesday, January 6th, a mob of Trump supporters descended on the Capitol. Some carried assault weapons and zip ties; all claimed that the 2020 Presidential election had been stolen—a conspiracy theory of the highest order. The President had stoked and amplified this delusion via Twitter, and, even after the mob had smashed its way into the Capitol, he tweeted encouragement, calling the rioters “great patriots” and telling them, in a video, “We love you. You’re very special.” Twitter blocked a few of these tweets and, by Friday, had permanently suspended his personal Twitter account, @realDonaldTrump. The President’s tweeting was “highly likely to encourage and inspire people to replicate the criminal acts at the U.S. Capitol,” the company stated, in a blog post. It noted that plans for additional violence—including a “proposed secondary attack” on the Capitol and various state capitols—were already in circulation on the platform.
Following the suspension, Twitter was flooded with outrage, joking, and speculation. On my own feed, people mourned the loss of the demented and absurd posts by Trump that predated his time in office. (“Sorry folks, I’m just not a fan of sharks—and don’t worry, they will be around long after we are gone,” he had tweeted, in 2013.) A few suggested that Trump start a Substack. Some wondered whether the move might set a precedent for the deplatforming of marginalized groups. Others pointed out that some sex workers, pro-Palestinian activists and journalists, and Black Lives Matter supporters had already been booted from the service. “Wish I could see his tweets about getting kicked off twitter . . . like when someone dies and you have the urge to call them to tell them the news,” one friend tweeted, with weird poignancy. For a brief period, Trump attempted to assume the controls of various accounts manned by associates; Twitter swiftly removed those tweets. All the while, people kept asking questions. Was this the free market in action or was it corporate tyranny? Was it a good idea? What took Twitter so long? Hillary Clinton retweeted a tweet of her own, from 2016, in which she had called on Trump to delete his account; in her retweet, she added a check-mark emoji.
Although Twitter has been an undeniable force throughout the Trump Presidency—a vehicle for policy announcements, personal fury, targeted harassment, and clumsy winks to an eager base—most Americans don’t use it. According to Pew Research, only around twenty per cent of American adults have accounts, and just ten per cent of Twitter users are responsible for eighty per cent of its content. In many ways, it’s a niche platform: two days before the Capitol riots, a trending topic on the site concerned the ethically correct way to teach a child to open a can of beans. Still, Trump’s tweets, reproduced on television and reprinted in newspapers, are inextricable from his identity as a politician. His suspension from Twitter, moreover, has turned out to be just one in a series of blunt actions taken against him by tech companies. Following a commitment to crack down on claims of voter fraud, YouTube removed a video of Trump addressing the supporters who had gathered last Wednesday at the Capitol; it has since suspended Trump’s channel, for at least a week. Through an update on his personal Facebook page—an odd stream of corporate announcements, family photographs, and coolly impersonal personal musings—Mark Zuckerberg informed the public that Trump’s accounts would be suspended until at least after the Inauguration. Facebook has also committed to removing from its service all instances of the phrase “stop the steal,” which has been taken up by conspiracists challenging the results of the Presidential election. Both YouTube and Facebook, where extremist content flourishes, have more than three times Twitter’s audience among American adults.
By Saturday, most major tech companies had announced some form of action in regard to Trump. The President’s accounts were suspended on the streaming platform Twitch, and on Snapchat, a photo-sharing app. Shopify, an e-commerce platform, terminated two online stores selling Trump merchandise, citing the President’s endorsement of last Wednesday’s violence as a violation of its terms of service. PayPal shut down an account that was fund-raising for participants of the Capitol riot. Google and Apple removed Parler, a Twitter alternative used by many right-wing extremists, from their respective app stores, making new sign-ups nearly impossible. Then Amazon Web Services—a cloud-infrastructure provider whose systems serve as essential scaffolding for companies and organizations such as Netflix, Slack, NASA, and the C.I.A.—suspended Parler’s account, rendering the service inoperable.
These actions immediately invited conspiratorial interpretations. Was this a coördinated hit from Big Tech? How long had it been in the works? Did tech companies, known for their surveillance capacities, have intelligence about the future that the public did not? In all likelihood, the real story doesn’t involve a wall of crisscrossing red strings—just a red line, freshly drawn. It seemed that tech corporations were motivated by the violence, proximity, and unequivocal symbolism of the attack—and that the response, prompt and decisive, was a spontaneous, context-based reaction to threats that had been simmering on their platforms for years. The action was compensatory rather than premeditated—a way of curtailing, if not preventing, further harm. It was compounded by a cascade effect: each suspension or ban contributed to the image of Trump as a pariah, and put pressure on other companies to follow suit, which in turn diminished the repercussions those companies would likely face for their decisions. Last week may simply have been a breaking point, a moment at which the potential damage to American democracy, security, and business had become impossible to ignore.
The vacuum created by Trump’s absence on social media is now filled with questions and counterfactuals. The conversation is consistent only in its uncertainty. Why did things have to reach a point of extremity before the tech companies took action? What would’ve happened if they hadn’t acted? Are these decisions durable, and will they be repeated? Was this a turning point? Will it change the Internet, and if so, how?
Generally speaking, deplatforming works: it diminishes a voice, a movement, or a message, and arrests its reach. But Trump’s ejection from corporate tech platforms—a public event enacted by private companies—is an unusual form of the practice. A robust and powerful communications apparatus remains at the President’s disposal. The incitements embedded in his tweets materialized in last week’s Capitol invasion. They have been echoed by the hundred and forty-seven Republican lawmakers who voted to overturn the election results, and are ingrained in coverage on Fox News, on talk radio, and in right-wing publications. (Although this, too, may be changing: Fox News declared Biden the winner on November 7th, and, on Friday, Inside Music Media reported that Cumulus Media, an Atlanta-based company that owns four hundred and sixteen radio stations and employs a number of popular conservative and right-wing talk radio hosts, had instructed its on-air talent to stop promoting the stolen-election narrative.) Trump’s followers and supporters retain their ideological and political beliefs, and are likely to organize and act accordingly; in many cases, moves to deplatform the President will only strengthen these commitments.
Still, the deplatforming of an American President marks a turn in the relationship between the tech industry and the public. It adds a new layer to the ongoing discourse about content moderation on social networks—a conversation that, especially in recent years, has been dominated by fruitless, misdirected, and disingenuous debates over free speech and censorship. In the United States, online speech is governed by Section 230 of the Communications Decency Act, a piece of legislation passed in 1996 that grants Internet companies immunity from liability for user-generated content. Most public argument about moderation elides the fact that Section 230 was intended to encourage tech companies to cull and restrict content. But moderation is complex and costly, and it is inherently political. Most companies have developed policies that are reactive rather than proactive. Many of the largest digital platforms have terms-of-service agreements that are constantly evolving and content policies that are enforced unevenly and in self-contradictory ways. Twitter and Facebook are especially infamous for their inconsistency. Even as Trump’s rhetoric intensified—and even as his followers engaged in increasingly alarming and violent behavior—the largest social networks braided together explanations for keeping his accounts active.
There are no easy answers to questions of platform governance, and the political environment has generated conversations that are tangled, trapped, and circuitous—a ball of knots. Despite a bounty of rich and nuanced scholarship on the topic, recent discourse around Section 230—including at the governmental level—has focussed on culture-war priorities. In fact, the law interacts with many other issues, including the social costs of engagement-driven (and ad-supported) business models and the design and intentions of algorithmic recommendation systems. And there’s the matter of monopoly power: perhaps Trump’s social-media exile would be less important if the digital landscape weren’t dominated by a handful of corporations. (One might argue that Facebook’s huge scale is a reason why conspiratorial and extremist content has been able to spread so efficiently.) Finally, there’s the unique leverage that tech workers have when they choose to engage in collective action. Twitter’s decision to ban Trump came after hundreds of employees signed a letter calling on executives to act. Employees at Google, where some full-time and contract workers recently formed a so-called minority union, have been putting pressure on YouTube. The combination of monopoly power and worker power can have striking effects.
The movement to deplatform Trump highlights central, often-overlooked issues within the Section 230 debate, and offers a novel case study. It also raises more questions: What if the platforms had taken content moderation more seriously from their inception? What if they had operated under different business models, with different incentives? What if they had priorities other than scaling rapidly and monetizing engagement? What if the social-media and ad-tech industries had been regulated all this time, or Section 230 had been thoughtfully and meaningfully amended?