An Intelligent Human Rights Agenda for Artificial Intelligence

By Eileen Donahoe and Megan MacDuffee Metzger

Around the world, concern about the consequences of our growing reliance upon artificial intelligence (AI) is rising. Perhaps the darkest concerns relate to development of AI by authoritarian regimes, some of which are devoting massive resources toward applying AI in the service of authoritarian, rather than democratic or humanitarian, goals.

At the same time, in free societies we have barely begun to come to grips with what our growing reliance on machine-made decisions in so many areas of life will mean for human agency, democratic accountability, and the enjoyment of human rights. On the one hand, there are questions of how to understand whether applications of AI are ethical, who should make such judgments, and on what basis. On the other hand, even when AI serves worthy ends, it often has unintended negative effects. For example, biases embedded in data used to train AI can become reified and magnified under the guise of objectivity. This is especially troubling when AI is used in the public sector, for example in criminal sentencing and parole decisions.

Many governments have been formulating national AI strategies to keep from being left behind by the AI revolution, but few have grappled seriously with AI’s implications for their accountability or duty to protect citizens’ rights. And while some private companies (Google and Microsoft among them) have set forth their own ethical principles for developing and applying AI—and individual technologists, academics, and civil society actors are working to develop universal guidelines—none of these initiatives offer a comprehensive framework for addressing these risks, nor can any claim anything approaching widespread buy-in. A shared global framework is needed to ensure that AI is developed and applied in ways that respect human dignity, democratic accountability, and the bedrock principles of free societies. We argue that the Universal Declaration of Human Rights (UDHR), along with the series of international treaties that explicate the wide range of civil, political, economic, social, and cultural rights it envisions, already has wide global legitimacy and is well suited to serve this function for several reasons.

First, it would put the human person at the center of any assessment of AI and make AI’s impact on humans the focal point of governance. Second, this international body of human-rights law, through its broad spectrum of both substantive and procedural rights, speaks directly to the most pressing societal concerns about AI. For example, the right to equal protection and nondiscrimination speaks to concerns about avoiding bias in data and ensuring fairness in machine-based decisions relied upon by governments, and the right to privacy addresses the fundamental concern about loss of privacy in data-driven societies and the need to protect personally identifiable data.

Third, the human-rights framework establishes the roles and responsibilities of both governments and the private sector in protecting and respecting human rights and in remedying violations of them. Under the UN Guiding Principles on Business and Human Rights, the legal duty to protect human rights remains with states, while private firms have a responsibility to respect human rights (and to remedy violations of them) when their own products, services, and operations are involved.

Finally, although interpreted and implemented in vastly different ways around the world, the existing universal framework enjoys a level of geopolitical recognition and status under international law that any newly emergent ethical framework is unlikely to match. Countries that do not comply with these norms risk censure from the international community. This does not mean that all states fully embrace these principles as guiding norms, or apply them perfectly. But it is to say that human-rights standards enjoy a high level of legitimacy, and this is a crucial advantage.

The next step is to articulate more clearly how to implement human-rights principles in all sectors of an AI-driven world. Several practical ideas have emerged in recent years, and one that is now drawing attention is the concept of “human rights by design.” This asks companies to reflect on how a new technology will affect the human rights of its users as the technology is being developed, instead of after it is deployed. This includes teaching young technologists about existing human-rights norms so that those who seek to build ethical AI applications need never feel that they are operating in a vacuum.

Given assertive efforts by authoritarian regimes to reshape international norms, it is unlikely that any new global governance framework for AI would encompass the full spectrum of commitments to human dignity that the existing human-rights framework contains. While authoritarian governments might like to shift global support away from human-rights norms in the digital context, democratically aligned governments should vehemently resist such a shift. The human-rights framework that we already have is well suited to the global digital environment. As applications of AI proliferate, so must practical ways of bringing human-rights standards to bear. Our urgent task is to figure out how to protect and realize human rights in our new AI-driven world.

This post is drawn from a longer article, titled “Artificial Intelligence and Human Rights,” that appears in the April 2019 issue of the Journal of Democracy.

Eileen Donahoe, former U.S. ambassador to the UN Human Rights Council in Geneva, is executive director of the Global Digital Policy Incubator and adjunct professor at Stanford University’s Center on Democracy, Development, and the Rule of Law. Megan MacDuffee Metzger is research scholar and associate director for research at the Global Digital Policy Incubator. Follow them on Twitter @EileenDonahoe and @meganicka.

The views expressed in this post represent the opinions and analysis of the authors and do not necessarily reflect those of the National Endowment for Democracy or its staff.
