- MIT launched the Reconnaissance of Influence Operations (RIO) program to automatically detect disinformation narratives online.
- In the days leading up to the 2017 French election, researchers were able to identify disinformation accounts with 96% precision using the RIO system.
- The team hopes to develop this RIO tool further, so that it can be used by both governments and industries to defend against disinformation.
Disinformation campaigns are not new — think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord.
Steven Smith, a staff member from MIT Lincoln Laboratory’s Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives, as well as the individuals spreading those narratives within social media networks. Earlier this year, the team published a paper on their work in the Proceedings of the National Academy of Sciences, and last fall the work received an R&D 100 award.
The project originated in 2014 when Smith and colleagues were studying how malicious groups could exploit social media. They noticed increased and unusual activity in social media data from accounts that had the appearance of pushing pro-Russian narratives.
"We were kind of scratching our heads," Smith says of the data. So the team applied for internal funding through the laboratory’s Technology Office and launched the program in order to study whether similar techniques would be used in the 2017 French elections.
In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyze the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96 percent precision.
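Precision here means the share of accounts flagged by the system that really were disinformation accounts. A minimal sketch of the metric, with toy numbers chosen only to illustrate the 96 percent figure (they are not the study's actual counts):

```python
# Precision = true positives / (true positives + false positives):
# of everything the system flagged, how much was flagged correctly.
def precision(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

# Illustrative only: 96 correct flags out of 100 total flags -> 0.96
print(precision(96, 4))
```

Note that precision says nothing about the accounts the system missed; that is measured separately, by recall.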
What makes the RIO system unique is that it combines multiple analytics techniques in order to create a comprehensive view of where and how the disinformation narratives are spreading.
"If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts," says Edward Kao, who is another member of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. "What we found is that in many cases this is not sufficient. It doesn’t actually tell you the impact of the accounts on the social network."
As part of Kao’s PhD work in the laboratory’s Lincoln Scholars program, a tuition fellowship program, he developed a statistical approach — now used in RIO — to help determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.
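The gap between raw activity counts and true network influence can be illustrated with a toy example. The sketch below is not RIO's method (Kao's approach is a causal statistical model, not reproduced here); it uses a hand-rolled PageRank as a stand-in influence score on a hypothetical retweet graph, where an edge (a, b) means account a retweeted account b. All account names and counts are invented:

```python
# Contrast raw activity counts with a network-based influence score.
def pagerank(edges, nodes, damping=0.85, iters=50):
    """Power-iteration PageRank on a directed graph of (retweeter, author) edges."""
    out_links = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            targets = out_links[src]
            if targets:
                for dst in targets:
                    new[dst] += damping * rank[src] / len(targets)
            else:
                # Dangling node: redistribute its rank evenly.
                for n in nodes:
                    new[n] += damping * rank[src] / len(nodes)
        rank = new
    return rank

nodes = ["noisy", "quiet", "a", "b", "c"]
tweet_counts = {"noisy": 500, "quiet": 12, "a": 40, "b": 35, "c": 20}
# "noisy" posts constantly but is never retweeted; "quiet" posts rarely
# but is amplified by three other accounts.
edges = [("a", "quiet"), ("b", "quiet"), ("c", "quiet")]
rank = pagerank(edges, nodes)
```

By tweet count, "noisy" looks dominant; by network position, "quiet" is the influential account, which is the kind of distinction activity counts alone miss.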
Erika Mackin, another research team member, also applied a new machine learning approach that helps RIO classify these accounts by examining behavioral data, such as whether the account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.
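The general shape of such a behavior-based classifier can be sketched with a plain logistic regression trained on invented features. The feature names and data below are illustrative assumptions only, and this is not Mackin's actual model or feature set:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression, no libraries."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical features per account: [share of interactions with foreign
# media outlets, number of distinct languages used / 10]
X = [[0.90, 0.40], [0.80, 0.30], [0.10, 0.10], [0.05, 0.10]]
y = [1, 1, 0, 0]  # 1 = suspected influence-operation account
w, b = train_logistic(X, y)

# Score a new account whose behavior resembles the positive examples.
score = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 0.35])) + b)
```

The point is only that behavioral signals, rather than message content alone, can separate account types; the real system's features and model are more sophisticated.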
Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO can also help its users forecast how different countermeasures might halt the spread of a particular disinformation campaign.
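One common way to frame such what-if forecasting is to simulate spread on the follower graph with and without a given intervention. The sketch below is an assumption about the general technique, not RIO's forecasting model: it runs an independent-cascade simulation on a toy follower graph, then re-runs it with a suspected amplifier account removed to estimate the effect of a takedown:

```python
import random

def cascade_reach(followers, seeds, removed=frozenset(), p=0.4,
                  trials=2000, rng_seed=0):
    """Average number of accounts reached under an independent-cascade model.

    followers maps each account to the accounts that see its posts; each
    exposure spreads the message onward with probability p.
    """
    rng = random.Random(rng_seed)
    total = 0
    for _ in range(trials):
        active = {s for s in seeds if s not in removed}
        frontier = list(active)
        while frontier:
            nxt = []
            for node in frontier:
                for f in followers.get(node, []):
                    if f not in active and f not in removed and rng.random() < p:
                        active.add(f)
                        nxt.append(f)
            frontier = nxt
        total += len(active)
    return total / trials

# Toy graph: "amp" bridges the originating account to a large audience.
followers = {
    "seed": ["amp", "u1"],
    "amp": ["u2", "u3", "u4", "u5", "u6"],
    "u1": [],
}
baseline = cascade_reach(followers, ["seed"])
with_takedown = cascade_reach(followers, ["seed"], removed={"amp"})
```

Comparing `baseline` and `with_takedown` gives a rough estimate of how much removing the amplifier would limit the campaign's reach, which is the flavor of question a forecasting capability answers.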
The team envisions RIO being used by both government and industry, and extending beyond social media into traditional media such as newspapers and television. Currently, they are working with West Point student Joseph Schlessinger, who is also a graduate student at MIT and a military fellow at Lincoln Laboratory, to understand how narratives spread across European media outlets. A new follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviors are affected by disinformation.
“Defending against disinformation is not only a matter of national security, but also about protecting democracy,” says Kao.