Media/Information/Technology

China’s Ambitions in Artificial Intelligence: A Challenge to the Future of Democracy?

By Elsa B. Kania

Although artificial intelligence (AI) has immense potential for good, its rapid advance is also provoking intense anxieties. Amid concerns about "killer robots," "summoning the demon," and potential existential threats to humanity, a graver threat may already be at hand: the danger to human rights, and even to the future of democracy. At a time when democracies are only beginning to grapple with the potential implications of AI, China's emergence as an "AI superpower" is spurring troubling trends. Beijing is developing and employing these technologies to bolster the power of the Party-state, while creating capabilities that may diffuse to strengthen authoritarianism and jeopardize democratic governance worldwide.

China’s embrace of big data and artificial intelligence has included an enthusiastic exploitation of their potential in censorship, surveillance, and societal engineering in ways that can threaten privacy and human rights. The Chinese Communist Party (CCP) sees AI as an ideal instrument of social control that can enhance its capability to ensure stability and thus, its own regime security. The rapid deployment of facial recognition is creating an unparalleled panopticon, in which the potential for surveillance is pervasive, even when the technologies remain imperfect in actuality. The introduction of DNA and voice biometrics enables a full-spectrum approach to monitoring, as research from Human Rights Watch has revealed. At the center of these efforts are Chinese tech companies—and even certain American and international companies—that see opportunities in working closely with the Chinese state, particularly the Ministry of Public Security. While these techniques may be pioneered in Xinjiang—where hundreds of thousands of Uighurs are in mass detention—their deployment elsewhere within China may follow.
In China, the distinction between control and governance is often blurred, deliberately so. China’s “smart cities,” marketed as more livable, appealing metropolises, may also be designed to facilitate and optimize control. China’s patchwork social credit system may be popular with some and serve valuable functions for consumers in a low-trust society, but it must also be recognized as inextricably linked to CCP approaches to “managing” society, as Dr. Samantha Hoffman’s research has highlighted. The capability to monitor and respond to public opinion may seem to be a hallmark of responsive government. However, it can also enable the CCP to target repression and enhance manipulation of the public sphere. The CCP’s facility in leveraging these techniques may also prove to be a “dual-use” capability, enabling offensive employment against Taiwan and the exercise of influence within China’s neighborhood and beyond.

For the Chinese Party-state, reliance upon these technologies may also create new challenges and vulnerabilities. The implementation of these systems at scale is daunting, particularly considering their relative immaturity and limitations. However, simply creating the psychology and perception of a panopticon can constrain behavior in ways that are subtle but pervasive. At the same time, the creativity of Chinese society will continue to produce tactics for evasion, such as the use of emojis (rice bunny, 米兔) to circumvent censorship of #MeToo. Certain technologies may also prove to be double-edged swords for the Chinese government, as in the use of blockchain to prevent censorship of reports of sexual assault and the vaccine scandal.

However, China’s unexpected successes in controlling the Internet indicate that the regime’s capacity for reshaping the technological ecosystem to its advantage should not be discounted. China’s AI plans highlight the importance of “secure and controllable” technologies, while calling for China to take the lead in the development of norms, standards, and regulations that could shape the future development of AI in ways that advance Party-state priorities. Through seeking greater influence in the global governance of AI, China may also achieve the leverage to promote its own preferences in international institutions, just as its advancement of “cyber sovereignty” has justified domestic censorship and undermined Internet freedom worldwide.
These authoritarian innovations made in China will not stay in China. Chinese tech companies are starting to market these technologies globally, selling facial recognition from Malaysia to Zimbabwe. These capabilities may have asymmetric implications, strengthening state coercive capabilities in ways that render effective popular resistance ever more challenging. At the same time, AI-enabled techniques for influencing and manipulating human perceptions and psychology, whether in propaganda, advertising, campaigning, or adversarial influence operations, may bolster the resilience of capable authoritarians, while raising core questions about free will and choice even in democracies. In the aggregate, these trends could result in backsliding in partly free states and strengthen authoritarian regimes, while creating a real risk of abuses of power even in democracies governed by the rule of law.

For China, these externalities may be welcomed as part and parcel of the process of extending the reach of its own domestic system to reshape the global order. The world's democracies have yet to recognize and reach consensus on the gravity of this challenge. To start, it is vital to condemn and take action to oppose the egregious violations of human rights and crimes against humanity in Xinjiang that have been enabled, and may be escalated, through the employment of big data and AI technologies. U.S. policymakers are starting to consider the imposition of sanctions under the Global Magnitsky Human Rights Accountability Act, targeted against Chinese leaders and perhaps also companies that are clearly complicit in these abuses.

Going forward, activists and civil society concerned about the global implications for AI ethics and human rights can also look to mobilize against the abuse of these technologies for censorship and surveillance in China. Such a campaign might include boycotts of companies and universities that actively support the development of these capabilities for elements of China's coercive apparatus with a history of abusing them, such as the Ministry of Public Security and the Ministry of State Security. Tactics of naming, shaming, and imposing pressure on Beijing may not be enough to reverse these trends, but the Chinese government has proven responsive to pressure from certain international campaigns in the past.

Looking ahead, democracies must also articulate a positive agenda for the future of AI, supporting and embracing a focus on "AI for good." The advancement of favorable applications of AI can include its potential to enable sustainable development, enhance opportunities for education, and increase transparency in governance. In the process, democratic policymakers should engage with and bolster the role of a diverse and inclusive coalition of civil society stakeholders. The pursuit of public-private partnership also has a key role to play in promoting AI safety and standards. At the level of global governance, democracies must advocate for a vision of AI that is consistent with liberal values and protections for human rights. The stakes are high, and a failure to act decisively now could have historic consequences for the future of democratic governance.
Elsa B. Kania is an adjunct fellow with the Center for a New American Security’s Technology and National Security Program. Follow her on Twitter @EBKania.

The views expressed in this post represent the opinions and analysis of the author and do not necessarily reflect those of the National Endowment for Democracy or its staff.