Facebook has removed more than 150 networks of coordinated fake activity since 2017, the report said. Twenty-seven networks have been linked to Russia, and 23 to Iran. Nine originated within the United States.
The US remains the primary target for foreign influence campaigns, Facebook’s report said, highlighting 26 such efforts by a variety of sources from 2017 to 2020. (Ukraine follows as a distant second.)
However, during the 2020 election season, it was US domestic actors, not foreign operatives, who were increasingly responsible for sowing disinformation. In the run-up to the election, Facebook removed as many American networks targeting the US with so-called coordinated inauthentic behavior (CIB) as it did Russian or Iranian networks, the company’s report said.
“Most notably, one of the CIB networks we found was operated by Rally Forge, a US-based marketing firm, working on behalf of its clients including the Political Action Committee Turning Point USA,” the report said. “This campaign leveraged authentic communities and recruited a staff of teenagers to run fake and duplicate accounts posing as unaffiliated voters to comment on news Pages and Pages of political actors.”
The discovery of those campaigns has led to intense political and regulatory pressure on Big Tech and also raised persistent questions about the industry’s disproportionate power in politics and the wider economy. Many critics have since called for the breakup of large tech companies and legislation governing how social media platforms moderate the content on their websites.
Tech companies such as Facebook have responded by hiring more content moderators and establishing new platform policies on fake activity.
In a separate announcement Wednesday, Facebook said it is expanding the penalties it applies to individual Facebook users who repeatedly share misinformation debunked by its fact-checking partners. Currently, when a user shares a post containing debunked claims, Facebook’s algorithms demote that post in the news feed, making it less visible to other users. Under Wednesday’s change, however, repeat offenders risk having all of their posts demoted going forward.
Facebook had already been applying blanket account-level demotions to pages and groups that repeatedly share fact-checked misinformation, it said, but Wednesday’s announcement covers individual users for the first time. (Politicians’ accounts are not covered by the change because political figures are exempt from Facebook’s fact-checking program.)
But even as Facebook has improved its moderation efforts, many covert purveyors of misinformation have evolved their tactics, the report said. From crafting more tailored, targeted campaigns that can evade detection to outsourcing operations to third parties, threat actors are adapting to Facebook’s enforcement in an increasingly complex game of cat-and-mouse, according to the company.