Facebook says it took down 22.1 million pieces of content for hate speech in Q3 2020, 95 percent of which was proactively identified.
The social media giant released its quarterly Community Standards Enforcement Report on November 19th, which includes metrics on how it enforces its policies.
Facebook notes that artificial intelligence now proactively detects 94.7 percent of the hate speech it removes from its platform, up from the 80.5 percent reported a year ago and from just 24 percent in 2017.
“In Q3 2020, hate speech prevalence was 0.10 percent to 0.11 percent or 10 to 11 views of hate speech for every 10,000 views of content,” the report notes.
The report also revealed that Facebook took down 19.2 million pieces of violent and graphic content, up from the 15 million reported in the previous quarter.
Further, Facebook removed 12.4 million pieces of child nudity and sexual exploitation content in the quarter, up from the 9.5 million reported in Q2. It also took down 3.5 million posts for bullying or harassment, up from 2.4 million in Q2.
On Instagram, Facebook removed 6.5 million posts for hate speech and 4.1 million for violent and graphic content. It also took down 2.6 million posts for bullying or harassment, along with 1.3 million pieces of suicide and self-injury content.
Facebook says these metrics show its progress in catching harmful content, and that it will continue improving its enforcement efforts to remove this type of content from its apps.
Source: Facebook