Ethics and autonomous weapon systems: the ICRC’s perspective

T-Hawk remotely piloted air system in Afghanistan. © U.S. Army/ICRC

Following a round-table meeting in August 2017, the ICRC published a report setting out its conclusions on the relationship between ethics and autonomous weapon systems.

The meeting focused on the ethical dimensions of the requirement for human control over autonomous weapon systems (AWS) and their use of force. Participants sought to assess whether the principle of humanity and the dictates of public conscience, enshrined in the Martens Clause of the Hague Conventions and recognized under customary international law, could permit AWS to make autonomous operational decisions involving life-and-death powers. Drawing on these two ethical principles, humanitarian organizations have stressed three main counter-arguments against AWS: the loss of human agency, the loss of human dignity and the erosion of moral responsibility.

Regarding the first aspect, human agency, the ICRC believes that there must be a minimum level of human control over the use of AWS. Such control could be achieved through several measures, for example human supervision and the ability to intervene, the imposition of operational constraints, and technical requirements for greater predictability and reliability. Most importantly, to preserve human agency there must be a direct connection between human intention and the consequences of the resulting attack.

Secondly, regarding human dignity, the ICRC argues that it matters not only whether people are killed, but also how they are killed or injured, drawing attention to the decision-making process as well as to the results of an operation. These two aspects are closely interconnected, as the presence of human agency in these contexts is necessary to maintain human dignity.

The necessity of human agency is equally linked to moral responsibility and accountability for carrying out attacks. Removing human intent from a specific attack weakens moral responsibility, because considerations of humanity can no longer inform the decision. While a machine may provide a causal explanation for an attack, it cannot provide a reason or an ethical justification for it. Moreover, without transparency in the human-machine interaction, it is not possible to assign legal responsibility to a military commander or operator for the consequences of the attack.

Predictability, reliability and risk are also key considerations in the legal assessment of AWS. Predictability and reliability connect human agency and intent to the eventual outcome and effects of an attack, and determine the level of risk of collateral damage. AWS may lack these qualities for multiple reasons, raising serious ethical concerns because the consequences and associated risks become less foreseeable. The issue of predictability is also bound up with uncertainty: constraints in time and space, and in the operational environment, tasks and targets, raise significant ethical questions, further complicated by new and constantly evolving applications of artificial intelligence and machine learning.

The ICRC encourages better consideration of an ethical basis for “meaningful”, “effective” or “appropriate” human control over AWS, to “ensure compliance with international legal obligations that govern the use of force in armed conflict and in peacetime”. This type of human control, which may only be achieved through constraints on the autonomy of AWS, is required to safeguard human agency and dignity, to ensure accountability, and to assign moral responsibility.


For more information, please visit:

https://www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control
