How would international human rights law deal with a potentially automated future?

Author: Eduardo Kapapelo
Centre for Human Rights, University of Pretoria

Introduction

In a scene from Jonathan Mostow’s Terminator 3: Rise of the Machines, the ‘Terminator’ played by Arnold Schwarzenegger says, ‘Skynet has become self-aware’. While the context of such words is a scripted science fiction world, they nevertheless seem to be echoes of a future we are writing – whether willingly or not.

While Mostow’s ‘killer robots’ or ‘terminators’ – essentially autonomous weapons systems sent through time to kill a person – seem far-fetched and squarely within the realm of science fiction, perhaps this is not life imitating art but art imitating life. The United States’ Future Combat Systems project, which aimed to field a ‘robot army’, hints that the future might not be as fictitious as we think.

In a report submitted to the Human Rights Council, Christof Heyns advised that lethal autonomous robotics are weapons that, ‘once activated, can select and engage targets without further human intervention’. The question in this regard is then: to what extent could these systems be programmed to comply with international human rights and humanitarian law?

These questions bring to the fore new questions relating to the legal and moral content of law. In addition, such weapons do not only turn warfare on its head; they also risk rendering moot the rules, procedures and controls which govern warfare.

The focus of this contribution is to interrogate not so much the battlefield where these weapons are envisioned to operate, but the ease with which these systems could be deployed among civilian populations – used for law enforcement and control, particularly within states that disregard human rights and use force to curb dissent. In addition, how would these systems function alongside already intrusive mass surveillance programmes, which themselves raise eyebrows within the context of human rights? More importantly – and perhaps diving back into the realm of science fiction – what happens if such systems become self-aware?

What would a ‘self-aware’ autonomous weapons system look like? If such weapons are those which, when activated, can select and engage targets at will, would that itself be a form of self-awareness? Could advances in machine learning and artificial intelligence develop this capacity into genuine self-awareness? While these questions seem improbable, they nevertheless touch upon the human need and ability to build and explore – for good or bad, the very core of human curiosity, a curiosity that pushes the boundaries of both science and technology.


War by other means

With the advent of new technologies and weapons systems we seem to be entering a new phase of human conflict, one in which mass surveillance, artificial intelligence and autonomous weapons systems are game-changers in the context of authoritarianism, state violence and repression. Whatever the ethics of their use on the traditional battlefield, they bring to civilian and everyday life new questions and new potential violations of the human person.

The use of force by governments against their citizens has been well documented around the world. From Angola to the United States of America to the Russian Federation, conventional warfare – or perhaps repression – is evolving, and so are the tools with which such violence is conducted. The question then becomes: what happens if and when autonomous weapons systems are used for law enforcement? What happens if, on the basis of their artificial intelligence, these machines act or react and ‘autonomously’ kill civilians without any human intervention?

Indeed, while transformations in technology are reshaping human society into one of robots and machines, and while such changes might seem exciting and advanced, propelling us into a future of ease and wonder, they are also dangerous. Human intelligence is marked by intrinsic bias in decision-making, and such characteristics can also be found in AI products built on human-created data. For example, AI algorithms and facial recognition systems have to a large extent failed to meet basic equality standards, showing discriminatory tendencies towards people of African descent.
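The mechanism behind such discriminatory outcomes can be illustrated with a minimal, hypothetical sketch: if a face-matching system is tuned on data dominated by one group, a single decision threshold that works well for that group can silently produce far higher error rates for an under-represented group. All names and numbers below are invented for illustration, not drawn from any real system.

```python
import random

random.seed(0)

# Hypothetical face-match scores between 0 and 1 for genuine matches.
# Group A dominated the training data, so its genuine matches score high;
# the under-represented group B's genuine matches score lower on average.
genuine_a = [random.gauss(0.80, 0.05) for _ in range(1000)]
genuine_b = [random.gauss(0.65, 0.05) for _ in range(1000)]

THRESHOLD = 0.70  # a single global threshold, tuned to work well for group A


def false_negative_rate(scores, threshold=THRESHOLD):
    """Fraction of genuine matches the system wrongly rejects."""
    return sum(s < threshold for s in scores) / len(scores)


fnr_a = false_negative_rate(genuine_a)  # low: the system was tuned on group A
fnr_b = false_negative_rate(genuine_b)  # high: the same rule fails group B

# Nothing in this code mentions group identity explicitly, yet the outcome
# is sharply unequal -- the bias lives in the data the system was built on.
```

The point of the sketch is that discrimination need not be programmed in deliberately: a skewed training distribution plus a one-size-fits-all decision rule is enough to produce it.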

The development of mass surveillance systems – again under the pretext, genuine or not, of protecting individuals – has created a situation of ‘severe tension and incompatibility between the right to privacy and the extensive data pooling on which the digital economy is based’. The use of social media platforms like Facebook and Instagram has over the past decades allowed governments to collect data on almost every aspect of individuals’ lives. This is especially so as governments continue to invest in more ‘sophisticated technology to monitor their citizens’ behavior on social media’.

In addition, mass surveillance methods such as targeted surveillance and bugging are being used and justified as the only way to combat the highly complex and intricate phenomenon of terrorism, while also playing a crucial role in crime prevention. Yet the marriage of mass surveillance, autonomous weapons systems and algorithms with the power to suggest whether a target should be destroyed by a Hellfire missile or a Reaper drone has direct implications for those in the area of incursion. When we add aspects of AI such as machine learning into these systems, what would the results be? Heyns has clearly and directly pondered such circumstances, stating that this reality raises the possibility that ‘computers will determine whether people will live or die’.

Speaking of autonomous weapons, mass surveillance and algorithms with the power to suggest courses of action might seem disjointed, and such technologies might, at least for now, be thought of as deployed only on battlefields far away. Yet their use by states internally to quell dissent is not so far-fetched, and may in fact constitute a clear and present danger – not only to the average individual, but also to the international regime of human rights protection itself.

While such mass surveillance was once the preserve of intelligence agencies like the CIA and MI6, intelligence gathering is now carried out by states big and small, ‘democratic’ and ‘authoritarian’. The combination of these technologies poses a question for the future of personal autonomy, liberty and – more importantly – security: to what extent are governments really in control, and how, in an attempt at total control and vigilance, might we lose it completely?

In addition, while autonomous weapons systems themselves are the point on which researchers and rights groups are focusing their attention, another important aspect to ponder is the algorithms inside these weapons and how they would function. Would these autonomous systems be programmed to follow a particular line of reasoning and act on that basis? Could these weapons or algorithms be trained on biased information so as to target particular groups – members of the LGBTI community, or so-called ‘undesirables’ that an authoritarian or non-democratic government wishes to dispose of – based on information that we all so willingly and freely provide on social media and on data gathered through mass surveillance systems?

In conclusion, and perhaps as part of a science-fiction future we might yet bring to fruition, there is the real question of these weapons systems becoming self-aware. Could an autonomous weapons system equipped to carry out mass surveillance of a population, with algorithms or artificial intelligence able to learn and adapt, become self-aware? How would the law respond to these questions? How would humanity respond, and what would it mean for the social and moral fabric of our lives?

About the Author:

Dr. Eduardo Kapapelo is a Programme Manager at the Centre for Human Rights and a researcher in the fields of political science, international law and human rights. He works towards a better understanding of how governments can design institutions and mechanisms for violence prevention. His expertise includes project design and management, policy analysis and implementation. He is currently researching new and emerging technologies such as artificial intelligence and autonomous weapons systems and their relationship to human rights. Specific fields of interest include policy analysis, human rights and structured vulnerabilities, and post-conflict justice and reconciliation.
