
Democracy and the Next Big App: Three Questions for Assessing Risk

By Dean Jackson and Rachelle Faust

In April 2020, TikTok surpassed two billion global downloads. That June, the Indian government banned the video-sharing platform along with 58 other apps owned by Chinese companies, and other countries, including Australia and the United States, closely scrutinized TikTok’s policies and practices.

Apprehension over TikTok and the multipurpose messaging app WeChat, which also faced a possible ban in the United States, centers on their origins in the People’s Republic of China (PRC). Because of the PRC’s tight grip on strategic technologies, a growing number of international observers worry that such “parallel platforms” developed in authoritarian settings pose unique risks to freedom of expression and national security. Others point out that U.S.-based social media companies have their own track record of troublesome behavior and that tech platforms present longstanding and ubiquitous challenges, regardless of origin.

The threat of authoritarian influence on platforms is neither novel nor exclusive to TikTok. In 2017, Ukraine banned several websites and apps, including VKontakte, a Russia-based social media network owned by Moscow’s political allies, in response to Russian information operations during the military conflict between the two countries. The episode illustrates the dilemmas such bans can create and may be instructive for policymakers concerned about the national security implications of foreign-owned platforms: The ban largely failed to constrain information operations, had negative economic implications for Ukraine, further polarized the country’s divided media space, and invited significant backlash.

Today, even governments that are not overtly authoritarian are weighing the risks of the global internet. Without clear and equitable standards for navigating these challenges, this trend risks the nationalization of the world’s digital information landscape. Rules-based, rights-conscious approaches to curtailing risks from social media applications, regardless of provenance, can forestall that outcome and address concerns about ulterior motives.

Below are three broad questions to begin such a standards-setting exercise:

 

How is content moderated and what are the implications for free speech?

Content moderation failures contribute to social ills, including ethnic violence and the spread of anti-vaccine conspiracism. Consequently, platforms face intensifying pressure to evaluate and remove content, fueling both legitimate censorship concerns and false claims of censorship by political opportunists seeking to pressure tech companies.

Most of the early U.S.-based platforms at least gave a nod to norms governing free expression: The evolution of Twitter’s terms of service, for instance, reflects a transition from free speech absolutism toward a more nuanced approach. TikTok lacks the same starting point: Leaked content moderation documents show that the platform censored videos mentioning Tiananmen Square or Tibetan independence. Criticism of police or government officials in China is similarly removed or deemphasized, though TikTok claims the leaked documents are outdated.

WeChat is a more explicit tool for state censorship. Unlike TikTok, which maintains separate apps for users in China and the rest of the world, WeChat straddles the Great Firewall. Users on both sides of China’s borders can communicate with one another and face similar censorship constraints. This arrangement has concerning implications for free speech, including censorship of politicians in democracies communicating with Mandarin-speaking constituents.

Though maligned and constrained, Facebook’s Oversight Board exemplifies the efforts some platforms have taken to mitigate these types of censorship risks. Transparency and resolution mechanisms are an important means of assessing the risk of platform-imposed constraints on free expression.

 

How are personal data collected, stored, used, and shared by social media platforms?

In July 2020, researchers discovered that TikTok was reading users’ clipboards on iOS, jeopardizing sensitive data such as passwords and credit-card information. While this is just one example of the types of data TikTok collects on users, the behavior is not uncommon: more than fifty other applications, including LinkedIn and Reddit, were found to access clipboard data as well.

A crucial difference, however, is that TikTok’s parent company is beholden to the PRC’s intelligence apparatus; Beijing keeps its tech sector tightly tethered and has been implicated in sweeping international breaches of data on foreign citizens. The risk that the PRC may access private data on TikTok users is serious. As previous breaches show, however, no data trove is perfectly secure: widespread collection of user data by other platforms may also leave that data vulnerable to authoritarian access, if less directly so.

While these risks have real national security implications, it is easy for governments to use security concerns as cover for economic competition or domestic censorship when banning or restricting individual apps. Consistent rules protecting user data across all platforms operating within a given democratic setting would safeguard against foreign government abuse, reinforce democratic norms, and provide a stronger rationale for action against platforms that cannot demonstrate they follow the rules.

Disclosure requirements to provide greater transparency around government requests for user data are another crucial tool: Discussing the case of WeChat with the Washington Post, one researcher noted, “If that sort of transparency were necessary and people understood the risks…then maybe we wouldn’t have to worry about whether to ban it.”

 

How capable are platforms of responding to disinformation from state actors?

Concerns about the spread of disinformation have pressured platforms to moderate misleading content more aggressively. TikTok is no exception: In response to this pressure, the platform banned the QAnon conspiracy community and signed on to the EU’s Code of Practice on Disinformation.

However, these questions about data protection raise a concern that is specific to TikTok. In the long term, data on its growing global user base could make the app an asset for future CCP information operations. As Beijing demonstrates a growing willingness to adopt divisive, negative messaging, a rich supply of data on target populations, paired with a trusted and compelling delivery mechanism in the form of TikTok’s recommendation algorithm, could prove too tempting not to employ.

This form of vulnerability is fundamentally different from the ones affecting Facebook or Twitter, which operate with far stronger legal protections against government interference. Are there ways to protect TikTok and other platforms in its position from this kind of government imposition, or to sound alarms should it occur?

 

Conclusion

These are high-stakes questions. No platform is perfect, but without clear operating standards, the risk that governments will abuse and manipulate access to these platforms is grave.

Observers don’t have to look hard for contemporary examples. In January 2021, Uganda blocked Facebook ahead of the country’s elections after the platform removed a network of inauthentic accounts boosting the president’s reelection campaign. The next month, in response to a legal notice from the Indian government, Twitter removed, and then mostly restored, accounts belonging to protesters and media outlets critical of the government. In response to the restoration, New Delhi threatened Twitter employees with imprisonment.

Even if applications like TikTok pose real risks to free expression and national security, it is easy to imagine how seemingly arbitrary bans and restrictions could be used to justify future threats to the internet’s global, democratic nature. The rule of law is a cornerstone of democratic governance, and a better, more thoughtful approach to mitigating the risk of authoritarian influence on social media would similarly be built on clear, consistent, and enforceable rules.

 

Dean Jackson is a program officer and Rachelle Faust is an assistant program officer at the National Endowment for Democracy’s International Forum for Democratic Studies. Follow Dean on Twitter @DWJ88.

The views expressed in this post represent the opinions and analysis of the authors and do not necessarily reflect those of the National Endowment for Democracy or its staff.

 

Image Credit: Lenka Horavova / Shutterstock.com