Since 2004, when Mark Zuckerberg created Facebook in his dorm room at Harvard, he has been the one person ultimately responsible, or culpable, for every tough decision the company has made. In the early years, many of those tough decisions—whether to expand access beyond college students, whether to add a “like” button—were relatively trivial. As the company kept growing, though, the challenges became both more pressing and more grave: how to purge terrorists and child-traffickers from the platform; when to censor medical misinformation and white supremacy; whether tyrannical heads of state should be allowed to use Facebook to manipulate elections, issue death threats, or foment genocide. The whole time, the buck stopped with Zuckerberg, a talented coder but not, by his own admission, an expert in human rights or global governance.
Throughout Donald Trump’s rise as a professional social-media troll and would-be autocrat, Facebook seemed continually wrong-footed by his misbehavior. Trump kept bending or breaking Facebook’s rules, and Facebook, whether out of principle or out of perceived self-interest, kept exhibiting an obvious reluctance to sanction him. Trump, of course, proceeded according to toddler logic, ignoring what Facebook said and responding instead to what it did, which was very little. When Facebook was asked to explain itself, its responses ranged from opaque to bafflingly incoherent. It sometimes alluded to a “newsworthiness exemption,” implying that speech by political figures was inherently newsworthy, even if the same speech would have been removed had a normal user posted it. Then, paradoxically, it claimed that no Facebook user, not even a politician, was above the platform’s rules—a position that the company has reiterated many times but has often honored in theory rather than in practice.
In 2019, Facebook put a hundred and thirty million dollars into a trust, establishing the Oversight Board, commonly known as the Supreme Court of Facebook. After some fits and starts, the board issued its first batch of decisions earlier this year. The Oversight Board is supposed to be independent, or, at least, as independent as any entity can be from the company that funded the trust that pays its bills. Its rulings are meant to be binding and unappealable, although, like real courts, it has no enforcement mechanism; its written opinions also include recommendations and “advisory statements,” which are nonbinding. As of now, its twenty members include a former Prime Minister of Denmark, a Yemeni Nobel Peace Prize laureate, and five Americans, three of whom are law professors and two of whom represent nongovernmental organizations with classically civil-libertarian views about free expression. As with any business decision made by a Silicon Valley behemoth, explanations of why Facebook set up the Oversight Board range from the idealistic to the cynical. All we can know for sure is that it’s worth something to Zuckerberg—a hundred and thirty million dollars of his company’s money, to be precise—to arrange things so that the buck stops somewhere else.
On January 7th, the day after the assault on the Capitol, Donald Trump was temporarily banned from Facebook. In a post announcing the decision, Zuckerberg explained the reason for the ban—Trump’s “use of our platform to incite violent insurrection against a democratically elected government”—and added that Trump’s account would be locked “indefinitely and for at least the next two weeks until the peaceful transition of power is complete.” Two weeks later, Joe Biden was inaugurated; but Facebook, instead of either unlocking Trump’s account or making his suspension permanent, passed the buck to the Oversight Board. “We believe our decision was necessary and right,” Nick Clegg, the company’s vice-president of global affairs, wrote in a press release. But professing a belief is not the same as standing behind it. “In addition to the board’s determination on whether to uphold or overturn the indefinite suspension,” Clegg continued, “Facebook welcomes any observations or recommendations from the board around suspensions when the user is a political leader.” The word choices seemed to indicate what Facebook wanted: a binding “determination” on the call it was eager not to make, regarding Trump’s account, and nonbinding “recommendations” on everything else.
On Wednesday morning, the board issued its ruling, by far the most consequential and contentious of its brief tenure. The opinion was unsigned, and nearly twelve thousand words long; it included plenty of recommendations, but it refused to rule on whether Trump’s Facebook account should be permanently suspended. “The Board has upheld Facebook’s decision,” the opinion began, and this was the simplistic framing that dominated most of the push notifications and other immediate news coverage. But after that opening clause came thousands of words of stinging rebuke. The board was only upholding Facebook’s “time-bound suspension,” which was already over. It did not approve of Facebook’s “indeterminate and standardless penalty of indefinite suspension,” an ad-hoc sanction that did not correspond to any provision in Facebook’s rules. Moreover, the opinion held that “in applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.” In other words, Facebook has six months to get its act together and make a coherent decision, at which point the case may well be referred back to the Oversight Board. Shortly after the ruling came down, I got a text from the Trump campaign (“Facebook ban continues! NONSENSE”) and an e-mail from a prominent group of journalists and Big Tech skeptics that calls itself the Real Facebook Oversight Board (“Fake Supreme Court Takes Victory Lap for ‘Upholding Ban of Trump’ While Punting Real Decisions Back to Facebook; World’s Most Obvious Content Moderation Decision Still Pending”). The Oversight Board may be independent from Facebook, but the two entities apparently share at least this much: a knack for rendering their most high-profile decisions in a hair-splitting manner that seems designed to satisfy essentially no one.
The board’s remit, according to a Web site whose muted colors and geometric abstractions call to mind the wall art in a boutique hotel in Stockholm, is to “review a select number of highly emblematic cases and determine if decisions were made in accordance with Facebook’s stated values and policies.” Whether intentionally or not, that “stated” is doing a lot of work. When it comes to its own values and policies, Facebook has always spoken out of both sides of its mouth. In public, the company has insisted that it wouldn’t warp or dilute its rules for anyone, not even a head of state. In private, it has repeatedly done the opposite. In a recent letter to the Oversight Board, Facebook claimed that it “has never applied the newsworthiness allowance to content posted by the Trump Facebook page or Instagram account.” This is not credible—if Trump’s repeated hate speech, election disinformation, and threats to nuke North Korea were not left up because they were newsworthy, then on what grounds were they left up?—but it’s not clear what anyone can do about it. “The lack of transparency regarding these decision-making processes appears to contribute to perceptions that the company may be unduly influenced by political or commercial considerations,” the board noted. This is quite an understatement; but it is merely one of several pieces of nonbinding advice that Facebook is unlikely to follow. In six months, or someday thereafter, we’ll find out whether Trump will be allowed to return to Facebook or whether he’ll be confined to his own sad personal blog. But Trump, despite his obvious importance, is ultimately a sideshow. The main problem with Facebook has always been Facebook.