Facebook said Monday that it had recently found and taken down four state-backed disinformation campaigns, the latest of dozens that it has identified and removed this year and a sign of how foreign interference online is increasing ahead of the 2020 presidential election.
Three of the disinformation campaigns originated in Iran and one in Russia, Facebook said, with state-backed actors disguised as genuine users. The campaigns were aimed at people in North Africa, Latin America and the United States, the company said.
The posts crossed categories and ideological lines, seemingly with no specific intent other than to foment discord. Some of the posts touched on conflict in the Middle East, while others pointed to racial strife and some invoked Rep. Alexandria Ocasio-Cortez, according to examples provided by Facebook.
One of the campaigns focused more on the 2020 election. In that campaign, 50 accounts linked to Russia’s Internet Research Agency — a Kremlin-backed professional troll farm — targeted candidates for the Democratic presidential nomination including former Vice President Joe Biden and Sens. Bernie Sanders and Elizabeth Warren, according to an analysis from Graphika, a social media research firm. Roughly half of those accounts claimed to be based in swing states. The Internet Research Agency was also responsible for targeting the US electorate during the 2016 presidential election.
Facebook said that it did not allow “coordinated inauthentic behavior” and said it would be more transparent about where posts were coming from and would better verify the identities of those putting up messages and ads. Among other measures, the company rolled out new features Monday to label whether posts were coming from state-sponsored media outlets.
The revelations of the new disinformation campaigns highlight the difficulties that Facebook faces with its stance on free expression, a position that its chief executive, Mark Zuckerberg, emphasized last week. In a speech at Georgetown University on Thursday, Zuckerberg extolled the virtues of unfettered expression and said everyone should have a voice on the social network. But that approach has opened the door for foreign operatives and others to spread conspiracy theories, inflammatory messages and false news through Facebook.
In a conference call Monday about the disinformation campaigns and election security measures, Zuckerberg said his company was better equipped to handle false information on the site now.
“Elections have changed significantly since 2016, but Facebook has changed, too,” he said. “We’ve gone from being on our back foot to now proactively going after some of the biggest threats that are out there.”
Facebook has been under pressure amid a near-daily torrent of criticism from US presidential candidates, the public, the media and regulators around the world, many of whom argue that the company is unable to properly corral its outsize power.
Warren, a front-runner for the Democratic presidential nomination, recently accused Facebook of being a “disinformation-for-profit machine” because it allowed false information from political leaders to circulate under its free-speech stance. The Federal Trade Commission and the Justice Department are investigating Facebook’s market power and history of technology acquisitions.
To combat the critics, Zuckerberg has ramped up his public appearances. He recently gave several interviews to conservative and liberal media outlets, in addition to his robust defense of his company’s policies at Georgetown University. On Wednesday, he will again be in the spotlight when he is scheduled to testify before congressional lawmakers about Facebook’s troubled cryptocurrency effort, called Libra.
In the conference call Monday, Zuckerberg said Facebook had become better able to seek out and remove foreign influence networks, relying on a team of former intelligence officials, digital forensics experts and investigative journalists. Facebook has more than 35,000 people working on its security initiatives, with an annual budget well into the billions of dollars.
“Three years ago, big tech companies like Facebook were essentially in denial about all of this,” said Ben Nimmo, head of investigations at Graphika. “Now, they’re actively hunting.”
The company has also embarked on closer information-sharing partnerships with other tech companies like Twitter, Google and Microsoft. And since 2016, Facebook has strengthened its relationships with government agencies, like the FBI, as well as with their counterparts in other countries.
But as Facebook has honed its skills, so have its adversaries. Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said that there had been an escalation of sophisticated attacks coming from Iran and China — beyond the disinformation campaigns from Russia in 2016 — which suggested that the practice had grown more popular over the past few years.
“You have two guarantees in this space,” Gleicher said. “The first guarantee is that the bad guys are going to keep trying to do this. The second guarantee is that as us and our partners in civil society and as our partners in industry continue to work together on this, we’re making it harder and harder and harder for them to do this.”
Facebook does not want to be an arbiter of what speech is allowed on its site, but it said it wanted to be more transparent about where the speech is coming from. To that end, it will now apply labels to pages considered state-sponsored media — including outlets like the broadcaster Russia Today — to inform people whether the outlets are wholly or partially under the editorial control of their country’s government. The company will also apply the labels to the outlet’s Facebook Page, as well as make the label visible inside the social network’s advertising library.
“We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state,” Facebook said in a blog post.
The company said it developed its definition of state-sponsored media with input from more than 40 outside global organizations, including Reporters Without Borders, the European Journalism Center, UNESCO and the Center for Media, Data and Society.
The company will also more prominently label posts on Facebook and on its Instagram app that have been deemed partly or wholly false by outside fact-checking organizations. Facebook said the change was meant to help people better determine what they should read, trust and share. The label will be displayed prominently on top of photos and videos that appear in the news feed, as well as across Instagram stories.
How much of a difference the labels will make is unclear. Facebook and Instagram are home to more than 2.7 billion regular users, and billions of pieces of content are shared to their respective networks daily. Fact-checked news and posts represent a fraction of that content. A wealth of information is also spread privately across Facebook’s messaging services like WhatsApp and Messenger, two conduits that have been identified as prime channels for spreading misinformation.
Renee DiResta, the technical research manager for the Stanford Internet Observatory, called Facebook’s new measures to fight disinformation “commendable.” But she also said it was “incongruous” for Facebook “to reiterate a commitment to fighting misinformation” even as it has permitted political leaders to put false information in posts and ads.
Zuckerberg said he believed moves like those he announced Monday, along with building more sophisticated artificial intelligence systems and other preventive technology, would allow Facebook to offer its platform to more people while mitigating harm on the social network.
“We built systems to fight interference that we believe are more advanced than what any other company is doing and most governments,” he said. “Personally, this is one of my top priorities for the company.”