The “most controversial issue by far,” Darmé told me, was how powerful the board should be. “People outside the company wanted the board to have as much authority as possible, to tie Facebook’s hands,” she said. Some wanted it to write all of the company’s policies. (“We actually tested that in simulation,” Darmé said. “People never actually wrote a policy.”) On the other hand, many employees wondered whether the board would make a decision that killed Facebook. I sometimes heard them ask one another, in nervous tones, “What if they get rid of the newsfeed?”
As a result, the board’s powers were limited. Currently, users can appeal cases in which Facebook has removed a post, called “take-downs,” but not those in which it has left one up, or “keep-ups.” The problem is that many of Facebook’s most pressing issues—conspiracy theories, disinformation, hate speech—involve keep-ups. As it stands, the board could become a forum for trolls and extremists who are angry about being censored. But if a user believes that the company should crack down on certain kinds of speech, she has no recourse. “This is a big change from what you promised,” Evelyn Douek, a Harvard graduate student who consulted with the team, fumed, during one meeting. “This is the opposite of what was promised.” Users also currently can’t appeal cases on such issues as political advertising, the company’s algorithms, or the deplatforming of users or group pages. The board can take cases on these matters, including keep-ups, only if they are referred by Facebook, a system that, Douek told me, “stacks the deck” in Facebook’s favor. (Facebook claims that it will be ready to allow user appeals on keep-ups by mid-2021, and hopes to eventually allow appeals on profiles, groups, and advertising, as well.)
Perhaps most important, the board’s rulings do not become Facebook policy, the way that a Supreme Court precedent becomes the law of the land. If the board decides that the company should restore a piece of content, Facebook is obligated to restore only that post; whether similar posts stay up or come down remains at Facebook’s discretion. (The company states that it will act on “identical posts with parallel context” based on its “technical and operational capacity.”) Policy recommendations are only advisory. This significantly narrows the board’s influence. Some hope that the recommendations will at least exert public pressure on the company. “Facebook undermines its goals and its own experiment if it restricts the impact of the board’s decisions or just ignores them,” Douek told me. Others felt let down. “It’s not what people told us they wanted,” Darmé said. “They wanted the board to have real power over the company.”
In August, 2019, the governance team met with advisers, over snacks and seltzer, and discussed who should sit on the board. A security guard stood outside, making sure that no one explored the offices unattended. (He stopped me on my way out and told me that I couldn’t leave without an escort.) The people selected for the board would determine its legitimacy, and how it ruled, but the experts had trouble agreeing on who could be trusted with this responsibility. One attendee suggested letting the first board members choose the rest, to preserve their independence from the company. Lauren Rivera, a professor at Northwestern’s business school, cautioned against this approach: “It’s empirically proven that when you have a group self-select, in the absence of any kind of guidance, they just pick more people that look like them.” The experts then began offering ideas of their own. Journalists said that the board should be mostly journalists. International human-rights lawyers said that it should be all international human-rights lawyers. Information scientists said that it should be “anyone but lawyers.” A white man at a think tank said that it should be populated with “regular people.”
Ultimately, to select its would-be judges, Facebook opened a public portal, which received thousands of nominations, and gathered candidate suggestions from political groups and civil-rights organizations. It also used its initial workshops to scout potential candidates and observe their behavior. “The thing about the global consultancy process is that it was also maybe Facebook’s first true global recruiting process,” Harris told me later. Jarvis, the journalism professor at the New York workshop, said, “It’s so Facebook of them.” He added, “They never called me. I wonder what I said.”
The number of people that Facebook planned to have on the board kept changing. I imagined the team sweating over a “Law & Order”-style corkboard of photographs. At one point, Kara Swisher, a tech journalist who has been critical of Facebook, nominated herself. “I would like to formally apply to be judge and jury over Mark Zuckerberg,” she wrote in the Times. Facebook didn’t take her up on it. A reporter sent me an encrypted text saying he had two sources telling him that Barack Obama would be on the board. When I asked Fariba Yassaee, who oversaw the search for members, about high-profile candidates, she smiled. “The people we’re looking at are incredibly impressive, but they also are able to do the hard work that being on the board will entail,” she said. “They need to be team players.” In May, the first board members were announced. They included Helle Thorning-Schmidt, a former Prime Minister of Denmark; Catalina Botero Marino, a former special rapporteur for freedom of expression to the Inter-American Commission on Human Rights; Alan Rusbridger, the former editor of the Guardian; and Tawakkol Karman, an activist who won the Nobel Peace Prize, in 2011, for her role in Yemen’s Arab Spring protests.
The slate was immediately controversial. Some employees were angry about the appointment of Michael McConnell, a retired federal judge appointed by George W. Bush. In 2000, McConnell argued before the Supreme Court that the Boy Scouts should be allowed to exclude gay people. (This year, during a Zoom class at Stanford Law School, he recited a quote that included the N-word. He defended this as a “pedagogical choice,” but pledged not to use the word again.) “We all knew what people outside and inside the company were expecting: board members who respect all people and all cultures, including respect for L.G.B.T.Q. rights,” Darmé, who had since left Facebook, told me. “Can you really have someone on the board who’s argued something like this all the way to the highest court in the land?” Others believed that, given that roughly half of the country votes Republican, disregarding such views would be undemocratic. “It is not a thing you can really say right now, but the vast majority of the world is much more ideologically conservative than Menlo Park,” Harris said. “How do you reflect that on the board? Or do you decide, No, we’re just not going to have that?”
People familiar with the process told me that some Republicans were upset about what they perceived to be the board’s liberal slant. In the months leading up to the appointments, conservative groups pushed the company to make the board more sympathetic to Trump. They suggested their own lists of candidates, which sometimes included members of the President’s family, most notably Ivanka and the President’s sons. “The idea was, either fill this board with Trump-supporting conservatives or kill it,” one person familiar with the process said. In early May, shortly after the board members were announced, Trump personally called Zuckerberg to say that he was unhappy with the makeup of the board. He was especially angry about the selection of Pamela Karlan, a Stanford Law professor who had testified against him during his first impeachment. “He used Pam as an example of how the board was this deeply offensive thing to him,” the person familiar with the process said. Zuckerberg listened, and then told Trump that the members had been chosen based on their qualifications. Despite the pressure from Trump, Facebook did not change the composition of the board. (Trump declined to comment.)
Several candidates declined to be considered. Jameel Jaffer, the director of the Knight First Amendment Institute, told me, “I was worried, and still am, that Facebook will use membership on the board as a way of co-opting advocates and academics who would otherwise be more critical of the company.” But others saw it as a way to push Facebook in the right direction. Julie Owono, a board member and the head of Internet Sans Frontières, told me, “I had expressed interest in joining the board because I feel, and still do, that Facebook is doing a terrible job on hate speech in environments that are already very tense.” Thorning-Schmidt, the former Prime Minister of Denmark, told me, “I needed to know this would be independent from Facebook and that Facebook would commit to following our decisions.” She met with Zuckerberg and asked that he give his word: “I had to hear it from Mark Zuckerberg myself. And he said yes.”
Critics of the board believe that it will prove to be little more than a distraction. “I think it’s a gigantic waste of time and money,” Julie Cohen, a law professor at Georgetown, said. She believes that its star-studded panel and lavish funding will prevent regulation while allowing the company to outsource controversial decisions. And, since it can currently rule only on individual posts, the board can’t address Facebook’s most fundamental problems. In mid-May, for example, a video called “Plandemic,” which claimed that vaccine companies had created COVID-19 in order to profit from the pandemic, went viral on the platform. It was taken down within a few days, but by that time it had already been seen by 1.8 million people. Ellen P. Goodman, a law professor at Rutgers, believes that Facebook needs to add more friction to the circulation of content; anything catching fire, she said, should be subject to a “virality disruptor” that stops further spread until the content has been reviewed. Zephyr Teachout, a law professor at Fordham, says that the company should do away with targeted advertising, which incentivizes the promotion of incendiary, attention-grabbing posts. “If the core of our communications infrastructure is driven by targeted ads, we will have a toxic, conflict-driven communications sphere,” she said. She also argues that the company is too big and needs to be broken up through antitrust litigation.
This summer, I spoke with Zuckerberg over Zoom. He wore a Patagonia fleece and sat in a wood-panelled room in front of a large marble fireplace. He had been heavily involved in the board’s creation: editing documents, reading memos, reviewing possible members. “I don’t see any path for the company ever getting out of the business of having to make these judgments,” he told me. “But I do think that we can have additional oversight and additional institutions involved.” He hoped, he said, that the board would “hold us accountable for making sure that we actually get the decisions right and have a mechanism for overturning them when we don’t.”
He looked tired. He seemed more at ease talking about “product” or “building tools” than he did discussing ethics or politics. It struck me that he was essentially a coder who had found himself managing the world’s marketplace of ideas. “The core job of what we do is building products that help people connect and communicate,” he said. “It’s actually quite different from the work of governing a community.” He hoped to separate these jobs: there would be groups of people who built apps and products, and others—including Facebook’s policy team and now the board—who deliberated the thorny questions that came along with them. I brought up a speech he gave at Georgetown, in 2019, in which he noted that the board was personally important to him, because it helped him feel that, when he eventually left, he would be leaving the company in safe hands. “One day, I’m not going to be running the company,” he told me. “I would like to not be in the position, long term, of choosing between someone who either is more aligned with my moral view and values, or actually is more aligned with being able to build high-quality products.”
I asked what kinds of cases he hopes the board will take. “If I was them, I’d be wary of choosing something that was so charged right off the bat that it was immediately going to polarize the whole board, and people’s perception of the board, and society,” he told me. He knew that critics wished the board had more power: “This is certainly a big experiment. It’s certainly not as broad as everyone would like it to be, upfront, but I think there’s a path for getting there.” But he rejected the notion that it was a fig leaf. “I’m not setting this up to take pressure off me or the company in the near term,” he said. “The reason that I’m doing this is that I think, over the long term, if we build up a structure that people can trust, then that can help create legitimacy and create real oversight. But I think there is a real risk, if it gets too polarized too quickly, that it will never be able to blossom into that.”
In May, 2020, the board members met for the first time, over Zoom. Facebook employees cried and took a screenshot. “It was such a profound experience to see this thing take on a life of its own,” Heather Moore, who came to Facebook from a U.S. Attorney’s office, said. After that, board members attended training sessions, which included icebreakers and trust exercises; in one, they brought pictures that represented pivotal moments in their lives. “Whether we can get along well enough to disagree and stay on mission is crucial and quite unknown,” John Samples, a member who works at the Cato Institute, a libertarian think tank, told me. The group quickly came under intense public pressure to stand up to the company. In June, a nonprofit called Accountable Tech began targeting the board on Facebook with ads that included members’ photos and addressed them by name: “Pam Karlan: speak up or step down”; “Tell Michael McConnell: don’t be complicit.” Members often felt the need to assert their independence. The company assigned readings, some of which were, according to a board member, “just P.R. crap from Facebook,” and employees sat in on early meetings and mock deliberations. “We’re out of our mind if we’re in an oversight position and the people who are teaching us about what we’re overseeing are the people we’re meant to oversee,” the board member said. After complaints, Facebook employees stopped being invited to the meetings.
In October, Facebook began allowing appeals from a random five per cent of users, rolling out the board’s jurisdiction over the next month the way it might a new Instagram feature. Its docket included a post from an American user about Joseph Goebbels, the Nazi minister of propaganda, and one from a user in Myanmar claiming that there is “something wrong with Muslims psychologically.” Owono told me, “I never imagined I’d have to ask myself these kinds of hard questions so rapidly.” They reviewed the company’s Community Standards, a ten-thousand-word document that codifies Facebook’s speech policies, and consulted precedents in international human-rights law. One debate that has arisen among board members mirrors the division on the Supreme Court between “textualist” and “living” interpretations of the Constitution. Some believe that their job is to hew closely to Facebook’s policies as written. “Our job is to ask, ‘What does the text mean?’ ” one member told me. “We don’t have much legitimacy if we just start making stuff up.” Others believe that they should use their power to push back against Facebook’s policies when they are harmful. Nicolas Suzor, an Australian law professor and board member, told me, “I was worried we’d end up with decisions that were limited to the facts, but people are brave.”
In one of the board’s first cases, a user had posted photos and described them as showing churches in Baku, Azerbaijan, that had been razed as part of the ongoing persecution of Armenians in the region. He complained about “Azerbaijani aggression” and “vandalism,” and referred to Azerbaijanis using the word “taziki,” which literally means “washbowls” but is a play on a Russian slur. Facebook had taken down the post as hate speech, but some board members felt that it was strange to apply this rule to a complaint against a dominant group. The panel requested a report from UNESCO and received comments from the U.N. special rapporteur on minority issues and from a think tank in Ukraine, which told them that persecuted groups often use offensive language in their struggle for equality. “We learned that, during a conflict, it’s usually accepted that people would use harsh words, so there’s this idea that, especially when minority rights are at risk, there’s a custom to allow more harsh discourse,” a board member told me. “I’d never heard of that before, and I found it compelling.” In the end, they voted to uphold the removal, though not everyone agreed. The opinion noted that a minority of the members “believed that Facebook’s action did not meet international standards and was not proportionate,” and that the company “should have considered other enforcement measures besides removal.”
In another case, someone in France had posted a video and accompanying text complaining that the government had refused to authorize a combination of azithromycin and hydroxychloroquine, an anti-malarial drug, as a treatment for COVID-19. Many on the right, including Trump and the French professor Didier Raoult, have claimed that hydroxychloroquine cures the illness, though the claim has been debunked, and scientists have warned that the medication can cause dangerous side effects. The user claimed that “Raoult’s cure” was being used elsewhere to save lives and posted the video in a public group with five hundred thousand members. Facebook worried that it might cause people to self-medicate, and removed it. According to one person on the board, members of the panel “who have lived in places that have had a lot of disinformation in terms of COVID-19” agreed with this decision, believing that, “in the midst of this huge pandemic affecting the entire world population, decisive measures may be adopted.” But others noted that the post was pressing for a policy change, and worried about censoring political discussions. “No matter how controversial it would seem to us, those questions and challenges are what helps scientific knowledge advance,” the board member said. They found that Facebook’s standard for censoring such speech, interpreted under international human-rights law, involved determining whether it was likely to incite direct harm. Because the combination of medicines was not available over the counter in France, they decided that the risk of causing people to self-administer was low. They voted to restore the post but encouraged the company to append a link to more reliable scientific information.
When the board was just three weeks old, Black Lives Matter protests were sweeping the country, and Trump posted on both Facebook and Twitter threatening to send in the military to subdue them, writing, “When the looting starts, the shooting starts.” His language echoed that of the segregationist George Wallace, who threatened civil-rights protesters in similar terms. Twitter flagged the tweet as violating its rules against “glorifying violence,” but Facebook left it unmarked. Zuckerberg released a statement saying, “I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves.” In an interview on Fox News, he noted that he didn’t think the company should be the “arbiter of truth” on political issues. Angry employees staged a virtual walkout and raised the idea, in a leaked Q. & A., of letting the board hear the case. A few days after the incident, Suzor, the Australian law professor, suggested a full-board meeting. Users couldn’t appeal Facebook’s decision to the board—it hadn’t yet started taking cases, and the post was a keep-up—but the board debated the issue nonetheless.
Several members were shocked by Trump’s threats and initially wanted to meet with Zuckerberg or release a statement condemning the platform’s decision. “I was furious about Zuck’s ‘arbiter of truth’ double-down,” one board member told me. Others felt that taking a partisan stand would alienate half the country and lose the board legitimacy. “Seventy-five million people voted for Trump,” Samples said. “What are you going to do about it?” The group discussed whether it should weigh in on matters outside its remit that are nevertheless of public importance. Jamal Greene, one of the co-chairs, told me, “The general sentiment was ‘no’ for right now, and maybe ‘no’ ever, but certainly not before we’re even doing the thing that we’re supposed to be doing.” After two hours of discussion, the members decided to stay mum. “Moralistic ranting is not going to make a difference,” Samples said. “Building up an institution that can slowly answer the hard questions? That might.”
They didn’t have much time for institution-building. On January 6th, a group of Trump supporters who disputed the results of the Presidential election stormed the Capitol, taking selfies, making threats, and attempting to disrupt the peaceful transition of power. Trump had urged on the mob by repeatedly claiming, on Facebook and elsewhere, that the election had been stolen from him. Hundreds of thousands of people had used the site to spread the claim, and to organize the rally at the Capitol. Afterward, Trump released a video tepidly disavowing violence and reiterating his claims of a fraudulent election. He tweeted, “These are the things and events that happen when a sacred landslide election victory is so unceremoniously & viciously stripped away from great patriots who have been badly & unfairly treated for so long.” Facebook removed two of Trump’s posts. The next morning, in a statement posted to his own Facebook page, Zuckerberg announced an indefinite suspension of Trump’s account. “In this moment, the risk to our democracy was too big,” Sheryl Sandberg said, in an interview. “We felt we had to take the unprecedented step of what is an indefinite ban, and I’m glad we did.” The next day, Twitter permanently banned him.
Many felt that the decision was an important step. “The platforms failed in regulating the accounts, of course, since he was inciting violence, and they banned him for that only after egregious violence resulted,” Susan Benesch, the founding director of the Dangerous Speech Project, told me. “But banning him did lower his megaphone. It disrupted his ties with his large audience.” Others expressed concern that Facebook had wielded its power to silence a democratically elected leader. “The fifth most valuable corporation in the U.S., worth over seven hundred billion dollars, a near monopoly in its market niche, has restricted a political figure’s speech to his thirty million followers,” Eugene Volokh, a law professor at U.C.L.A., said. “Maybe that’s just fine. Maybe it’s even a public service. But it’s a remarkable power for any entity, public or private, to have.” Angela Merkel, the Chancellor of Germany, described Trump’s removal from Twitter as “problematic,” and Alexey Navalny, the Russian opposition leader, tweeted, “I think that the ban of Donald Trump on Twitter is an unacceptable act of censorship.”