Autonomous killer robots and ethics

Peace & Security | 16 Aug 2010 | Kees Homan

The targeted killing of Al-Qaeda and Taliban insurgents with killer robots (unmanned combat aerial vehicles) has become a new US focus in Afghanistan. Opponents rightly say that the art of war may thus become the art of political assassination or summary execution. Fortunately, killer robots in the sense of lethal autonomous military robots do not yet exist. The killer robots that exist today are remote-controlled machines in which humans remain in the loop, at least wherever the use of force is involved. Current robots have no brains to speak of and still depend on human operators to carry out their functions: mainly reconnaissance, explosive ordnance disposal, logistics (chiefly warehouse robots) and base security, and more recently the killing of insurgents.

It is clear why some politicians and military leaders are enthusiastic about robotics. War is expensive and bloody, and it produces casualties. Robots do not get tired, suffer post-traumatic stress, or need to be fed or counselled. However, proponents also argue that if robots are to be employed optimally, they must be allowed to make their own decisions. The increasing deployment of armed, unmanned aerial vehicles is a new step on this dangerous path. The current revolutions in military robotics and Artificial Intelligence (AI) go hand in hand, and will in part be enabled by the nanotechnology revolution.

Weapons developers and high-ranking officers feel confident that the technology for truly autonomous weapons will be available in the medium term (after 2025). In an attempt to accommodate the opposition, the US army is funding a project by Ronald Arkin to equip robot soldiers with a conscience, giving them the ability to make ethical decisions. At the Georgia Institute of Technology, United States, Arkin is developing a set of rules of engagement for battlefield robots to ensure that they use lethal force in a way that follows the rules of ethics. Arkin's opponents say that machines cannot reliably discriminate between a bus carrying enemy soldiers and one carrying schoolchildren, let alone be ethical. They consider claims that an AI system can discriminate between a combatant and an innocent to be unsupportable and irresponsible.

Unmanned killer robots may be highly fascinating from a technical and practical standpoint. However, the moral and ethical aspects of such weaponry are far more important. Killer robots employ information as if it were synonymous with knowledge. But without a moral context, we will live in a world without meaning, in which taking the life of another person would be no more wrong than unplugging a computer for good.

Killer robots are also incompatible with the military ethos, which is still based on the ideal of chivalry. Chivalrous conduct in war does not mean killing the enemy at long range at zero risk; it rests on the willingness to fight fairly and to risk as much as the opponent does, namely one's own life. Only when lives are at stake will there be an effective deterrent to the use of force.

In conclusion, the whole idea of automated killing is perverse. An action so grave in its consequences should not be left to mindless machines. Machines will never absolve mankind of its responsibility to make ethical decisions in peace and war. As the price of robots falls and the technology becomes easier to acquire, a robotic arms race can be expected, one that will be difficult to stop. It is of the utmost importance that international legislation and a code of ethics for autonomous robots at war be developed before it is too late.