Looks like the machines aren’t ready to take over just yet.
The COVID-19 pandemic affected Facebook’s ability to remove harmful and forbidden material from its platforms, the company said Tuesday. Sending its content moderators to work from home in March amid the pandemic meant the company removed less material related to suicide, self-injury, child nudity, and sexual exploitation from Facebook and Instagram.
With its human reviewers at home, Facebook relied more on technology than on people to find posts, photos, and other content that violates its rules.
“Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” Guy Rosen, Facebook’s vice president of integrity, wrote in a blog post.
The company said Tuesday that it has since brought many reviewers back online to work from home and, “where it is safe,” a smaller number back into offices.
But Facebook also said its systems have gotten better at proactively detecting hate speech, meaning it is found and removed before anyone sees it. The company said its detection rate increased 6 percentage points in the second quarter, to 95 percent from 89 percent. Facebook said it took action on 22.5 million pieces of content — like posts, photos or videos — for hate speech violations in the second quarter, up from 9.6 million in the first quarter. The social network said that’s because it has expanded its automation technology into Spanish, Arabic and Indonesian and made improvements to its English detection technology.
Facebook also announced Tuesday that it is banning caricatures of Black people in the form of blackface, as well as dehumanising depictions of Jewish people, including images of Jewish people running the world or controlling major institutions such as media networks, the economy or the government.
In the Netherlands and Belgium, images of Black Pete, or Zwarte Piet, that use blackface features and stereotyping characteristics will also be removed, the company said. Zwarte Piet is a sidekick of Sinterklaas, the Dutch version of St. Nicholas, a Santa-like character who brings children gifts in early December.
White people often don blackface makeup, red lipstick, and curly black wigs to play Black Pete during street parties honoring Sinterklaas.
The character has been at the center of fierce and increasingly polarised debate in recent years between opponents who decry him as a racist caricature and supporters who defend him as an integral part of a cherished Dutch tradition. As a result, some towns and cities have phased out blackface at street parties.
An organisation called Netherlands Is Improving welcomed the news. “August 11 is a happy day: From today, Black Pete is officially no longer welcome worldwide on Facebook and Instagram,” the group said.
Others were less inclined to celebrate. Populist lawmaker Geert Wilders tweeted a photo of a Black Pete shortly after the Facebook announcement accompanied by the text: “Facebook and Instagram ban images of Zwarte Piet. The totalitarian state of the intolerant nagging left-wing anti-racists is getting closer.”