Facebook's Coordinated Inauthentic Behavior - An OSINT Analysis

Introduction

In the era of the internet and social media, controlling information and spreading disinformation has become both easier and more consequential. With #WWIII a trending hashtag in early 2020, the connection between social media and the battle for global hegemony seems only to be getting stronger, whether through steering the public debate, manipulating narratives, or even attempts "to steal and spread damaging information and target vulnerable election systems ahead of the 2020 election."

Purging accounts and content has been Facebook's go-to strategy for tackling the spread of disinformation and inauthentic operations on its platforms. Under the policy introduced in 2018 as "Coordinated Inauthentic Behavior" (CIB), Facebook regularly announces the removal of dozens of pages and accounts originating in, and targeting, different countries.

Nathaniel Gleicher, Facebook's Head of Cybersecurity Policy, explains that the term is distinct from 'fake news'. He defines it as "when groups of pages or people work together to mislead others about who they are or what they are doing."

Facebook claims that the removal is based on the deceptive 'behavior' and not the 'content' being shared. In other words, a removed post may not itself be false or in violation of the Community Standards. However, it is unclear whether other factors influence the decision to remove such activity.

Activists in some countries have been vocal critics of Facebook's policy, which has been used to stifle popular protests against repressive regimes. In Algeria, for example, where demonstrators have held weekly protests since February 2019, the U.S. social media company shut down not only the accounts of trolls and disinformation campaigns (dubbed "electronic flies," or "doubab electroni" in Algerian Arabic), but also the accounts of legitimate protesters criticizing the government. In November, this resulted in several protests by expatriate Algerians in front of Facebook's offices in London, Paris, and other European cities.

It is currently unclear whether campaign orchestrators have adapted their strategies, as part of more sophisticated campaigns, to trick Facebook's CIB detection algorithm(s) into flagging legitimate accounts as false positives, since the algorithm does not make decisions based on content.

The public learns about these inauthentic campaigns through the meager details published on Facebook's Newsroom: usually the campaign's country of origin, the targeted countries, the campaign budget, and the numbers of accounts and followers. Facebook Newsroom sometimes also releases samples of the removed posts.

The audiences who came into contact with the inauthentic content (account followers, group members, event attendees, etc.) are not notified directly, and Facebook does not provide any further details or conduct public briefings.

Although Instagram campaigns are included in the published reports on Facebook Newsroom, Facebook has been vague about campaigns run via WhatsApp — which is also a Facebook product — with the exception of a handful of WhatsApp users who have been directly targeted and then notified by Facebook.

Open-Source Dataset

The data in this analysis was manually extracted from articles released on Facebook Newsroom and centralized into a single dataset. Gaps in the dataset (e.g., the incomplete budget data) reflect details that Facebook did not share.

We assigned keywords to each campaign based on its content, to allow clustering of actors and campaigns and to complement the model Facebook uses to attribute the activity of cyber actors. More information on the four methods of attribution used by Facebook (political motivations, coordination, TTPs, and IOCs) can be found on Facebook Newsroom, published by Facebook's former Chief Security Officer, Alex Stamos, in July 2018 before his departure to join Stanford University's Internet Observatory.
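
As an illustration, here is a minimal sketch of how such keyword tagging could be automated; the keyword rules and the sample description below are hypothetical, not the project's actual editorial assignments:

```python
# Hypothetical rule-based keyword tagging of campaign descriptions.
KEYWORD_RULES = {
    "Elections": ("election", "vote", "ballot", "candidate"),
    "Military": ("military", "army", "armed forces"),
    "Protests": ("protest", "demonstration", "uprising"),
}

def tag_campaign(description: str) -> list:
    """Return every keyword whose trigger terms appear in the description."""
    text = description.lower()
    return [kw for kw, terms in KEYWORD_RULES.items()
            if any(term in text for term in terms)]

print(tag_campaign("Pages posting about the presidential election and the army"))
# -> ['Elections', 'Military']
```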

The full dataset is publicly available on GitHub and is used as the direct data source for all the visualizations in this article, to provide additional transparency on the data and content of this analysis. If you spot any mistakes or anything missing, contributions (a.k.a. pull requests) are welcome!
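
For example, the dataset can be pulled straight from the repository with pandas; the repository path and CSV file name below are placeholders, so check the GitHub project for the real ones:

```python
import pandas as pd

# Placeholder URL: substitute the real repository path and file name.
URL = ("https://raw.githubusercontent.com/"
       "<user>/<repo>/master/cib-campaigns.csv")
df = pd.read_csv(URL)

# Quick sanity checks on the extracted Newsroom data
print(df.shape)
print(df.head())
```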

It is common sense that social media companies need to increase the transparency of their platforms; as Wikipedia's success shows, this is not an impossible challenge. The goal of this project is mainly to provide a better overview of the inauthentic campaigns identified by Facebook, but also to encourage further transparency from social media giants in releasing more data on mass manipulation campaigns. This is especially urgent and time-sensitive, as several nations around the world will be holding key governmental leadership elections in the coming weeks and months.

Interactive Map

It is no secret that many governments, organized groups, and individuals turn to social media manipulation in order to steer the public debate, both locally and internationally. Many media outlets have reported on the rise of interference in democratic elections with the aim of sidetracking legitimate political discussions and influencing public opinion.

The interactive map below displays aggregated data from the Facebook Newsroom reports on identified and removed global CIB networks; it covers announcements from July 2018 onward.

Each bubble represents the country where a campaign originated, while the lines point to the targeted countries. The color gradient encodes the selected variable, with darker colors representing higher aggregated values.


Notes

  1. Accounts include Facebook accounts, Facebook pages, Facebook groups, and Instagram accounts.
  2. Audience includes Facebook page followers, Facebook group members, and Instagram followers.
  3. Hover over the country for further details about the CIB campaign.
  4. You can zoom into different sections of the map.
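
A static approximation of this map can be reproduced from the dataset. The sketch below uses plotly with illustrative coordinates and values; it is not the article's actual chart code, only a demonstration of the bubble-and-line encoding:

```python
import plotly.graph_objects as go

# Hypothetical aggregated flow: origin/target coordinates and account count.
flows = [
    {"origin": "Country A", "target": "Country B",
     "o_lat": 61.5, "o_lon": 105.3, "t_lat": 37.1, "t_lon": -95.7,
     "accounts": 50},
]

fig = go.Figure()
for f in flows:
    # Line from the origin country to the targeted country
    fig.add_trace(go.Scattergeo(
        lat=[f["o_lat"], f["t_lat"]], lon=[f["o_lon"], f["t_lon"]],
        mode="lines", line=dict(width=1), showlegend=False))
    # Bubble at the origin, sized by the aggregated value
    fig.add_trace(go.Scattergeo(
        lat=[f["o_lat"]], lon=[f["o_lon"]], mode="markers",
        marker=dict(size=f["accounts"] / 2),
        text=f"{f['origin']} -> {f['target']}", showlegend=False))
fig.update_layout(title="CIB networks: origin bubbles, target lines")
fig.show()
```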

Budget

It is unclear how Facebook calculates the budget of each campaign or inauthentic network. In some instances, Facebook discloses the duration of the campaign based on the duration of the ads purchased (i.e., start and end dates), which is in some way used as a metric for estimating the campaign's budget. However, the budget was not disclosed for all identified and removed campaigns, and inauthentic operations do not operate through paid ads alone. While the actual numbers may be a lot higher, the graph below represents the information disclosed on Facebook Newsroom.

Facebook also did not provide much information about a joint campaign between the US and Vietnam that had a considerable budget of 10M USD and was taken down in December 2019. According to Facebook, the individuals behind this campaign evaded detection algorithms by using a combination of fake and authentic accounts of local individuals in the US to manage pages and groups; "some of these accounts used profile photos generated by artificial intelligence and masqueraded as Americans" to post content and join groups.

Notes

  1. The slider above the chart provides a convenient way to control for disproportionate campaigns and to explore the smaller budgets dwarfed by the outliers, as in the sketch below.
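
When reproducing the chart from the dataset, a log scale achieves a similar effect to the slider. A minimal sketch with illustrative figures only (the real values come from the GitHub dataset):

```python
import pandas as pd
import plotly.express as px

# Illustrative budgets; "Campaign B" and "Campaign C" are placeholders.
budgets = pd.DataFrame({
    "campaign": ["US/Vietnam (Dec 2019)", "Campaign B", "Campaign C"],
    "budget_usd": [10_000_000, 150_000, 9_500],
})

# log_y keeps the 10M USD outlier from flattening the smaller budgets
fig = px.bar(budgets, x="campaign", y="budget_usd", log_y=True,
             title="Disclosed CIB campaign budgets (USD, log scale)")
fig.show()
```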

Keywords

In its released articles, Facebook does not elaborate on the content of the removed campaigns, making this keyword model a dynamic work in progress: it can be revised as more information is shared by Facebook and other stakeholders. Nonetheless, the campaigns are often engaged in foreign interference and seek to manipulate the public debate. Their content focuses mainly on political topics, as narrow as targeting a single public official or as broad as a government's foreign policy.

The heatmap below displays the prevalence of topics and themes per campaign. At the time of writing, "Elections" was strikingly the most prevalent and common topic among campaigns, irrespective of country of origin.

The ‘Total’ filter represents, for a given keyword $keyword_n$, the sum of its occurrences across all $k$ countries:

$$\mathrm{Total}(keyword_n) = \sum_{i=1}^{k} occ(keyword_n, country_i)$$

And the ‘Occurrences’ filter sorts keywords by their highest single-country count:

$$\mathrm{Occurrences}(keyword_n) = \max_{i \in \{1,\dots,k\}} occ(keyword_n, country_i)$$

where $occ(keyword_n, country_i)$ is the number of occurrences of $keyword_n$ among campaigns originating from $country_i$.
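
Both filters reduce to simple aggregations over the dataset. A sketch assuming a long-format table with hypothetical column names (one row per country/keyword pair):

```python
import pandas as pd

# Hypothetical long-format keyword counts; real data comes from the dataset.
kw = pd.DataFrame({
    "country":     ["Russia", "Russia", "Iran", "Egypt"],
    "keyword":     ["Elections", "Military", "Elections", "Elections"],
    "occurrences": [12, 5, 7, 3],
})

# 'Total' filter: sum a keyword's occurrences across all countries
total = kw.groupby("keyword")["occurrences"].sum()

# 'Occurrences' filter: rank keywords by their highest single-country count
peak = kw.groupby("keyword")["occurrences"].max()

# Country-by-keyword matrix backing a heatmap like the one above
heatmap = kw.pivot_table(index="country", columns="keyword",
                         values="occurrences", fill_value=0)

print(total.sort_values(ascending=False))
print(peak.sort_values(ascending=False))
```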

Operational Model

Inauthentic operations are not produced in a vacuum; from the data gathered on Facebook’s CIB campaigns, four models emerge highlighting a common operational process.

Four CIB Operational Models

  1. State-Sponsored
     - Politics-profit mix: Political
     - Campaign orchestrator: Governments and foreign states
     - Source of funds: Government funds
     - Main objectives: Discredit opposition voices; mobilize support for administration policy
     - Examples: The Chinese government targeting protesters in Hong Kong, in campaigns removed in August 2019

  2. In-house Staff
     - Politics-profit mix: Political
     - Campaign orchestrator: Incumbent politician or political contender
     - Source of funds: Government funds if incumbent; politicians' and donors' funds if contender
     - Main objectives: Defend a politician against attacks; attack opponents; create illusions of support and engagement for a politician
     - Examples: The Internet Research Agency (IRA) and Yevgeny Prigozhin, a Russian businessman with ties to President Putin, named in the October 2019 campaign originating in Russia and targeting the USA

  3. Advertising and PR
     - Politics-profit mix: Both political and profit-driven
     - Campaign orchestrator: Various
     - Source of funds: Corporate and/or political projects
     - Main objectives: Image-building; averting scandal; diverting public attention; engineering virality; hacking public attention
     - Examples: Epoch Media Group, a US-based media organization running a $10M campaign originating in and targeting the USA and Vietnam; Archimedes Group in Israel (May 2019); MintReach in Nigeria, Flexell in Egypt, and InsightID in Indonesia, involved in campaigns removed in October 2019

  4. Clickbait
     - Politics-profit mix: Primarily profit-driven
     - Campaign orchestrator: Various
     - Source of funds: Various
     - Main objectives: Maintain high engagement with articles via likes and shares; grow the follower base of a social media page; generate revenue from ad tech
     - Examples: Networks of Pages using fake or multiple accounts, posting clickbait on these Pages to drive people to websites entirely separate from Facebook that seem legitimate but are actually ad farms

The model above is influenced by a report on Political Trolling in the Philippines published by the NATO Strategic Communications Centre of Excellence.

Facebook and Transparency

In an effort to increase transparency around shared content, Facebook introduced new features and updates in a blog post released on Facebook Newsroom (January 9, 2020). The Ad Library claims to give the public more agency and control over which political and social ads they see. So far, however, Facebook still does not inform users who have been exposed to inauthentic content. For more, see Facebook's self-reporting on its Newsroom pages.

“We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working closer with law enforcement, security experts and other companies.” - Nathaniel Gleicher, Head of Security Policy (December 20, 2019)

Closing Remarks

Facebook (and other social media giants) are failing to keep up with the spread of disinformation and media manipulation. As mentioned before, it is still unclear whether campaign orchestrators are also adapting their strategies to make legitimate accounts of opposition voices look like false positives, which would mean that actors have found a way to weaponize Facebook even further as an anti-democratic tool. To mitigate the ramifications this has had so far, Facebook urgently needs to disclose more information and increase transparency around its processes. We have come to a point where shutting down billions of accounts is simply not enough.

A similar project for Twitter information operations has been on the roadmap since October 2019, when Twitter announced that it had incorporated "data and insights regarding impersonation policy enforcement, as well as state-backed information operations datasets" into its transparency reports and datasets. Again, we encourage you to contribute to this effort by sharing your ideas and constructive feedback.