In times of remote-control war

Humanitarian organisations are directly affected by the new forms of conflict that have been multiplying across the planet since the beginning of the 21st century (from Iraq to North and South Sudan, from Syria to Afghanistan, from the Sahel to Myanmar). The violence of war is currently characterised by a marked “extremisation”, radicalisation and re-ideologisation of warring factions, by the blurring of the boundary between front and rear (the battlefield being everywhere and no longer specifically demarcated), and by the disappearance of any differentiation between civilians and combatants. The proliferation of atrocities, massacres and terror strategies deliberately targeting civilian populations has become routine. Terrorist acts that intentionally target festive gatherings, places dedicated to the arts, transport systems or places of worship also contribute to this rise of extremism. The return to total war is already having major consequences for the international system of humanitarian aid. Yet the probability that weapons born of the “robolution”, which increasingly remove the human factor, will become widespread in the short or medium term adds an extra dimension, even a yawning gulf, to the issue.

To be sure, the fact that weapons systems prove deadly for combatants, and even more so for civilians, is nothing new, as the 20th century conclusively showed. However, the disruption[1] that these robotic machines are likely to introduce stems first from the fact that they have enormous capacities which are constantly being honed by the techno-scientific revolution. Secondly, those which currently exist are still activated by human operators, often located several thousand kilometres from the battlefield, with only the viewpoint provided by electronic and satellite sensors. This results in a radically different perception of combat from anything that existed before. Finally, the more or less distant risk of the emergence of killer robots with autonomous decision-making capacities, over which humans would have only limited control, if any, is not insignificant. It is a cause of profound concern for many.

Humanitarian actors, however, have yet to take the measure of these rapid developments. Progress in artificial intelligence and the power and capacity of algorithms, often celebrated by the media and the younger generation, are growing exponentially, including in the military field. Yet information on the subject remains scarce: the majority of publications, generally in English, favour technical aspects, and the debate tends to be confined to specialised circles, whether military or legal. We must therefore welcome the publication of a major work in French, one of the first truly devoted to this issue and to an in-depth analysis of the trends and ruptures that advanced robotics, as a new tool of coercion, will produce in the conduct and unfolding of hostilities[2], especially through what are known as Lethal Autonomous Weapons Systems (LAWS).

This collective work fills a gap and should therefore become a milestone. Published in 2015, it went relatively unnoticed, even though it is acutely relevant. It deserves attention, especially since it was skilfully steered by a trio of researchers and professors (Ronan Doaré, Didier Danet and Gérard de Boisboissel), all three attached to the Centre de recherche des écoles de Saint-Cyr Coëtquidan, where future officers of the French army are trained. Published by the Presses universitaires de Rennes, it also had the support of the Fondation Saint-Cyr. Its 260 pages make for easy reading, despite the inevitable (though here limited) differences in writing and style among the twenty-odd contributors, and facilitate understanding of a complex subject. This can be attributed to the transversal approach taken, which brings together military and civilian experts on a multidisciplinary basis (p. 17). The diversity and richness of almost all of the contributions must also be highlighted.

One of its essential qualities lies in its structure, which follows coherent categories without being didactic. Almost all of the first half is devoted to putting military and police robots into perspective. This has the advantage of combining reflections on the “robotisation” of the battlefield, uses in matters of internal security and the use of armed drones. The general public discovered the latter through their systematic use by the Obama administration over the last few years against Al Qaeda and its affiliates. For Stéphane Taillat, who analyses this use, it must nevertheless “be relativised inasmuch as it is a tool that is part of a particular tactic at the service of a strategy” (p. 62). It does not change the conduct of war, but modifies how it is perceived. The fact remains that the use of these robots has been spreading, including among non-state armed groups such as the Islamic State (ISIS), which builds them by hand for use in Iraq and Syria. Other authors – such as Jean-Baptiste Jeangène-Vilmer, Catherine Tessier or Gérard de Boisboissel – endeavour to emphasise the distinction between drones and LAWS, the stakes and uses of each seeming to belong to two very different categories.

A second substantial section describes the state of existing law, both domestic (particularly in France) and international, regarding the control of military and police robots. Humanitarian actors will take particular note of Caroline Brandao’s article, which studies the challenges that military robotics raises for international humanitarian law. According to her, there is not, in reality, any “legal vacuum in the use of autonomous or semi-autonomous weapons” (p. 132). She adds, in conclusion, that new technologies do not alter existing law but must on the contrary conform to it, even if that means elaborating complementary norms in response to certain challenges. Whilst this approach is understandable coming from a legal expert associated with the Red Cross and Red Crescent Movement, one may question it and find it overly optimistic, precisely because it is institutional. It is worth noting that the International Committee of the Red Cross was one of the first to put the subject on its agenda[3]. For their part, Dominik Gerhold and Marion Vironda-Dubray, both members of the legal affairs department of the French Ministry of Defence, offer an in-depth analysis of military obligations and responsibilities in the face of the “robotisation” of the battlefield.

Finally, the third and last part expands on the question of the control of robots and debates reforms that the authors describe as “necessary”. Jean-Marie Fardeau, Director of Human Rights Watch France at the time of publication, argues for a preventive ban on fully autonomous weapons. He explains that HRW has spearheaded an international campaign to this end. For him, it is legitimate and justified for four reasons: the impossibility of respecting IHL, the absence of human emotions, the facilitation of war, and the question of responsibility for damage caused, especially to civilians (p. 198-200). Whilst this initiative, launched in 2013, met with a rather large public response, it must be said that it struggles to mobilise civil society in a lasting way and, above all, that it seems in no way to have slowed down the development of LAWS… Didier Danet, in a very thorough article, strongly criticises this proposal and considers that wanting to ban killer robots is a “road to hell paved with good intentions” (p. 203). According to this author, an ad hoc normative regime is all the less necessary given that its object – the killer robot – does not exist in reality, and that we must not confuse law and science fiction. The argument is rigorous, but seems somewhat peremptory because it takes for granted that existing systems will never evolve and that it is therefore unnecessary to plan ahead.

The book concludes with no fewer than three chapters dedicated to the ethical questions raised by robotisation. The comparative viewpoints of a philosopher and physicist (Dominique Lambert), a military chief engineer (Thierry Pichevin), and an American vice-admiral and professor of ethics and public policy at the United States Naval Academy (George Lucas) are particularly stimulating in this respect.

The brief conclusion, signed once again by Didier Danet, shows just how acutely relevant this work is: as it was nearing completion, several eminent actors of California’s Silicon Valley – with Japan, the motor of the “robolution” – publicly warned against the prospect of war machines endowed with artificial intelligence gaining independence and turning against their creators. At the end of 2014 and the beginning of 2015, the public declarations of Elon Musk (head of Tesla and SpaceX) and Bill Gates (the founder of Microsoft) made a mark on public opinion. Stephen Hawking, the famous British physicist and cosmologist, was even more radical, declaring to the BBC in December 2014 that “once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate” (p. 257)… Whilst Danet tempers these worrying statements, considering that we are “very far from developing a truly autonomous artificial intelligence and the risk of seeing a military robot emancipate itself from those operating it is nil” (p. 258)[4], he nevertheless points out the importance of discussing the issue from different angles. For him, the central question is this: how can we benefit from scientific progress without the development of artificial intelligence for armed coercion leading, in the long term, to a dilution of operational and political responsibilities (p. 259)?

Due to the professional positioning of the three coordinators, and of certain authors, the balance between military and civilian expertise sometimes seems to tilt in favour of the former. The debate is nevertheless unquestionably important and will no doubt grow in the years to come. It would be advisable for the humanitarian field to take a prominent position in this debate, in all its dimensions: not only the legal one, but also the potential and risks of robotics, artificial intelligence and even transhumanism. Several humanitarian agencies have already integrated the “robolution” into their daily practice, through big data and the use of drones. But in time, and probably soon, the production of equipment in the field with 3D printers, the use of fully automated vehicles or even humanoid robots[5] in aid distribution operations, and finally the use of augmented human capacities, or transhumanism, will have incalculable impacts which have not been sufficiently thought through or discussed. However, it would be absurd, even dangerous, to consider these technological aspects alone without, in parallel, relating them to the debates on autonomous weapons systems.

This is certainly not its objective, but if reading this book were to foster a mutual enrichment of reflection between the humanitarian sphere and that of security and defence, or even joint research initiatives in favour of the protection of civilian populations in conflict situations, it would add an extra level to an already rich palette.

Translated from the French by Juliet Powys

 


References
1 The term originated in Silicon Valley as a synonym for fundamental upheavals.
2 The book also covers highly relevant developments relating to internal security. Nonetheless, this aspect is not directly linked to humanitarian issues, at least not at the international level. It will therefore not be mentioned here.
3 See for example the International Review of the Red Cross (IRRC), No. 886, Summer 2012, “New technologies and Warfare”, Cambridge University Press, www.icrc.org/eng/resources/international-review/review-886-new-technologies-warfare/review-886-all.pdf
4 This is his position, consistently restated throughout the book, as we have seen. See also his column « Terminator est déjà encadré par les lois », Libération, 2 November 2015.
5 Japanese companies are seeking to develop “humanitarian robots”.
