How to Help Civil Society’s Disinformation Researchers Flourish

By Rachelle Faust and Daniel Cebul

Disinformation has become big news. In 2019, the Oxford Internet Institute identified media reporting on organized social media manipulation campaigns in seventy countries—more than a two-fold increase from 2017. The COVID-19 pandemic has accelerated the disinformation challenge further, injecting a cacophony of health-related misinformation into the public domain (what some have called an “infodemic”).

Civil society organizations (CSOs) are an important part of the response to this global challenge, and their understanding and capacity have grown in recent years. A recent paper by Samantha Bradshaw and Lisa-Maria Neudert, published by the International Forum for Democratic Studies, maps 175 CSOs working on counter-disinformation issues and finds that CSOs in many countries have developed a wide range of skills, including digital media literacy training, fact-checking and verification, and policy advocacy.

The authors consulted nineteen experts from leading CSOs based around the world through a combination of interviews and surveys to identify the most pressing obstacles that impede their work. Consistent with findings on counter-disinformation efforts from the International Republican Institute, EU Disinfo Lab, and the Carnegie Endowment for International Peace, the authors identified limited access to social media data, inconsistent and inflexible funding, and piecemeal collaboration within and between CSOs and the private sector as the most significant challenges facing organizations working to combat digital disinformation.

CSOs struggle to reach consensus on the content and extent of digital disinformation campaigns, due in large part to a lack of access to crucial data from social media platforms. Even when researchers (at least in the U.S. context) gain access to platform data, it is often through restrictive interfaces that impede the creation of the large-scale datasets needed for meaningful analysis. In other instances, ostensibly cooperative platforms have provided only non-machine-readable materials without context, hindering efficient analysis. Platforms have also effectively limited the scope of disinformation research by withholding crucial information, such as user, community, and behavioral data, that could help researchers understand how communities interact with misleading or false narratives online. Improved access to meaningful data would allow researchers to gauge the scope and scale of disinformation in real-world contexts and to develop interventions that might mitigate the viral spread of misleading content.
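To make the effect of restrictive interfaces concrete, consider a back-of-the-envelope sketch of how long large-scale data collection takes under rate limits. The figures below are hypothetical assumptions chosen for illustration, not any platform's actual terms of access.

```python
import math

# Hypothetical sketch: time needed to assemble a large dataset through
# a rate-limited research interface. Both figures are assumptions for
# illustration, not any platform's actual limits.
PAGE_SIZE = 100           # posts returned per request (assumed)
REQUESTS_PER_MINUTE = 60  # permitted request rate (assumed)

def collection_time_hours(total_posts: int) -> float:
    """Hours of continuous querying needed to page through total_posts."""
    requests_needed = math.ceil(total_posts / PAGE_SIZE)
    return requests_needed / REQUESTS_PER_MINUTE / 60

# A ten-million-post dataset, modest by disinformation-research
# standards, works out to roughly 28 hours of uninterrupted collection.
print(f"{collection_time_hours(10_000_000):.0f} hours")
```

Even under these generous assumptions, a single modest dataset ties up more than a day of continuous querying; stricter limits, or materials delivered in non-machine-readable form, stretch that timeline further.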

Although some social media companies have launched seemingly collaborative initiatives to combat the spread of disinformation on their platforms, these partnerships do not always benefit researchers. For example, Facebook’s partnerships with media and academia have been criticized for generating conflicts of interest between funders and researchers. Without mechanisms to protect communities and ensure accountability, funding for disinformation research risks becoming encumbered by corporations looking to protect their reputations.

While much research has focused on mainstream social media platforms like Facebook and Twitter, increased scrutiny has in some cases pushed disinformation campaigns off those platforms and onto others. In 2018, after Facebook and Google partnered with 24 Brazilian newsrooms to debunk false content, disinformation campaigns migrated to WhatsApp, where encryption made it even more difficult for researchers to monitor misleading narratives.

Inconsistent and inflexible funding is a substantial obstacle for researchers seeking to redirect attention to private messaging platforms such as WhatsApp (as was necessary in Brazil) or to less-mainstream networks like Gab, Parler, or Viber. Based on their interviews with CSO representatives, Bradshaw and Neudert found that funders favor projects led by large, established organizations using proven methods in order to minimize risk, leaving researchers to “chase the latest trends in tech policy, rather than think about long-term impact.” Limited funding opportunities have increased competition and reduced collaboration between researchers, resulting in duplicative research efforts that dilute the impact of what limited funding is available. As malicious actors invest ever more heavily in digital disinformation campaigns to shape global narratives (Russia’s RT, to name one example, has a reported yearly operating budget of more than $300 million), commensurate funding for disinformation research is needed.

These challenges are among those CSOs face in countering disinformation effectively; while significant, they are not intractable. Based on their findings, the authors offer several recommendations:

  • Funders can be part of the solution by working with disinformation researchers in the Global South, especially on new and innovative projects that focus on understudied platforms and other sources of disinformation. Researchers should also work to establish institutional processes for securing long-term funding, drawing on lessons from other fields.
  • Social media platforms should work with researchers to provide transparent and meaningful access to platform data in a way that respects user privacy.
  • Civil society organizations must break down silos, working to share best practices while combining technical expertise with cultural and political knowledge. Initiatives like the Partnership for Countering Influence Operations Researchers’ Guild offer promising models for such cooperation.


Multistakeholder collaboration that encompasses these efforts—by tech companies, governments, researchers, and civil society—will be a critical component of a comprehensive response to digital disinformation.


Rachelle Faust is an assistant program officer and Daniel Cebul is a program assistant at the National Endowment for Democracy’s International Forum for Democratic Studies. Follow Daniel on Twitter @DCebul.

The views expressed in this post represent the opinions and analysis of the authors and do not necessarily reflect those of the National Endowment for Democracy or its staff.


Image Credit: lovelyday12 / Shutterstock.com