What are the implications of new and emerging technologies, including AI, for disarmament and human control?
Part of the argument for regulating or prohibiting Lethal Autonomous Weapons Systems (LAWS) is that it is morally objectionable for machines to make life and death decisions in war. Do you agree or disagree, and why?
It comes down to the concept of collateral damage. We already know that, to a certain degree, machines have been making life and death decisions in war in areas where there are no civilians. But when it comes to the grayer areas of operating around civilians, assessing possible collateral damage, or judging what is an appropriate and proportional use of force, then I do believe it is morally objectionable if there is no chain of responsibility.
I would say it is not so much about the machines per se as it is about abdicating the responsibility of a human or humans in making those decisions. It is easy to imagine a world where, when fatalities or excess civilian casualties occur, everyone just puts up their hands and says: it is just code, and we cannot do anything about it; it was the fault of the machine, therefore nobody is at fault. But this goes against long-standing precedents and a foundation of our society, namely that people should be held accountable for their decisions.
Something that has come to mind in the last few years is that this discussion of LAWS is also part of a broader discussion about when people are okay with a robot exercising some sort of force or agency over them. This leads to the more general question of the type of force being used. Lethal force is the ultimate degree of force, but there is a whole spectrum of other possibilities: less-than-lethal force, gently pushing someone, collecting data about someone, or simply existing in someone’s personal space.
It also comes down to the dynamic between the person deploying the robot and the person having that robot deployed on them. These two fundamental factors, the type of force and that dynamic, affect how a given autonomous system will be accepted. A civilian confronted with a faceless military machine employing lethal force is the ultimate example of a hostile dynamic. Conversely, a neighbor using a drone to inspect their roof around their own neighbors is likely to be considered acceptable. There are intermediate scenarios, too. For instance, the military using a drone for disaster response or search and rescue is likely to be accepted, while police using a drone to surveil an underrepresented population in their area would likely be less accepted. We have already seen examples of all of these situations play out.
Much of the discussion about meaningful human control of LAWS has centered around retaining a human in the loop versus on the loop. How do you understand the distinction between humans in the loop, humans on the loop, and the concept of meaningful human control?
The way that I look at humans in the loop is that, in the end, the humans decide. They make the go or no-go decision, or they make the decision to employ a weapon or not; whereas a human on the loop is merely observing what is occurring, but at least they are observing it. Obviously, there are nuances, for instance regarding how deep in the loop the human is. Is the human in the loop in the sense that they have approved a weapons system to be used? Or are they observing or approving every single action of that weapons system? For example, in automated short-range point-defense systems, such as close-in weapon systems (CIWS) like Phalanx, or anti-missile systems like Iron Dome, humans are in the loop in the sense that they enable and disable these systems. Once a system is enabled, they are on the loop, observing what is going on, but they are not in the loop in the sense of controlling every single shell or missile that is being fired.
This leads us to the concept of meaningful human control. You could say that if there is a human in the loop, ready to interrupt the weapon at any given point, then the weapon is technically under ‘human control’. However, this is not meaningful human control. If there is a human in the loop or on the loop, they need to be involved in a way that accommodates the limitations of humans in terms of reaction times, attention spans, and a range of other factors.
To use an example that does not relate to weapons: some years back there was an incident with a safety driver for a self-driving car deployed on a public road. The safety driver was expected to just sit there and monitor for hours on end, intervening if necessary. The driver was not paying attention at the critical moment, and the car hit and killed someone crossing the road. That was not meaningful human control. It is impossible for a person to pay attention for that long under those conditions, and mitigating controls should have been put in place.
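To make the distinction concrete, the following is a minimal, purely hypothetical sketch in Python; the names, threshold, and logic are invented for illustration and do not model any real weapon system or doctrine. It shows the difference between a human in the loop, where silence means nothing happens, and a human on the loop, where silence means the system proceeds, as well as one way in which a decision window that ignores human reaction times fails to provide meaningful control.

```python
# Hypothetical sketch: in-the-loop vs. on-the-loop control, plus a crude
# "meaningful human control" check on the operator's decision window.

from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    IN_THE_LOOP = auto()   # the human must approve each individual action
    ON_THE_LOOP = auto()   # the system acts unless the human vetoes in time


@dataclass
class Engagement:
    target_id: str
    operator_window_s: float  # time the operator is given to decide or veto


# Assumed lower bound for a considered human decision; a real figure would
# come from human-factors research, not from this sketch.
MIN_MEANINGFUL_WINDOW_S = 10.0


def decide(mode: Mode, engagement: Engagement, operator_response: str | None) -> str:
    """Return 'engage' or 'hold' for one proposed engagement."""
    if engagement.operator_window_s < MIN_MEANINGFUL_WINDOW_S:
        # The human is nominally "in control" but has no realistic chance to
        # exercise judgment, so the control is not meaningful.
        return "hold: decision window too short for meaningful human control"

    if mode is Mode.IN_THE_LOOP:
        # Silence means no: nothing happens without explicit approval.
        return "engage" if operator_response == "approve" else "hold"

    # ON_THE_LOOP: silence means yes; the operator can only interrupt.
    return "hold" if operator_response == "veto" else "engage"


if __name__ == "__main__":
    e = Engagement(target_id="track-042", operator_window_s=15.0)
    print(decide(Mode.IN_THE_LOOP, e, None))    # hold: no approval given
    print(decide(Mode.ON_THE_LOOP, e, None))    # engage: no veto received
    print(decide(Mode.ON_THE_LOOP, e, "veto"))  # hold
```

The point of the sketch is only that the two modes differ in what happens when the human does nothing, and that a window too short for a considered decision makes either mode nominal rather than meaningful.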
How do you personally assess the current discussions about the principle of meaningful human control? Are they going in the right direction, in your opinion?
In my opinion, not really. Since I became involved in these questions in 2014, there has always been a consensus that these conversations need to happen. But the concept of meaningful human control requires a lot more collaboration. It requires discussions on what meaningful human control is for an anti-missile system such as Iron Dome or Phalanx. It requires discussions about meaningful human control for the growing number of anti-drone systems that are likely to be deployed. And it requires discussions about the advancing space of armed, ground-based mobile robots.
There are all sorts of different discussions necessary, because, as I noted, it is not just about whether you have approved a single weapon to be used or not. It is also about questions such as: In what space is it operating? How much authority does that weapon have? How much force are you allowing it to use? Within what area are you allowing it to deploy force? For how long are you allowing it to operate? All of these aspects are part of that meaningful human control discussion, as the rough sketch below tries to illustrate.
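As a purely illustrative sketch (the field names, force levels, and checks are hypothetical and do not reflect any actual doctrine or system), the dimensions just listed can be thought of as an explicit, human-approved authorization envelope rather than a single go/no-go switch:

```python
# Hypothetical sketch: the "where, how long, how much force" dimensions of an
# authorization captured as explicit, bounded data approved by a named human.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class AuthorizationEnvelope:
    approved_by: str                            # the accountable human, preserving a chain of responsibility
    area_geofence: list[tuple[float, float]]    # polygon of (lat, lon) the system may operate in
    max_force_level: str                        # e.g. "observe_only", "less_than_lethal", "lethal"
    valid_from: datetime
    valid_until: datetime

    def permits(self, force_level: str, at: datetime) -> bool:
        """True only if the requested force level and time fall inside the envelope.
        (A fuller sketch would also check the system's position against area_geofence.)"""
        ordering = ["observe_only", "less_than_lethal", "lethal"]
        within_time = self.valid_from <= at <= self.valid_until
        within_force = ordering.index(force_level) <= ordering.index(self.max_force_level)
        return within_time and within_force


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    env = AuthorizationEnvelope(
        approved_by="commander-on-duty",
        area_geofence=[(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)],
        max_force_level="less_than_lethal",
        valid_from=now,
        valid_until=now + timedelta(hours=2),
    )
    print(env.permits("observe_only", now))  # True
    print(env.permits("lethal", now))        # False: exceeds the approved force level
```

Framing approval this way keeps a named human accountable for each bounded grant of authority, which is the chain of responsibility discussed earlier.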
These conversations need to be had, and they require more than just political discussion. They require deep collaboration among people with real-world military experience, engineers and scientists, people who have a background in what we would now call AI risk, civil society, and policy makers.
There is a consensus that new and emerging technologies will increase both the volume of information available to military decision makers and the speed of warfare, and will therefore dramatically compress the time available for military officials to assess information and make real-time battlefield decisions. Will these trends, in your opinion, create inevitable and irresistible pressures to take humans out of the loop?
There will always be pressures to do so, but it is up to all of us to decide how we want to conduct war. I am not the kind of person to say that war can be outlawed or anything like that. A lot of human history makes it clear that armed conflict is not going away, but, at the same time, we have found ways to make it a little bit more civil over the years.
It is also up to us to decide what kind of new technology to deploy. Yes, military officials do want the ability to collect and assess information quickly from multiple sources. Yet I cannot imagine that military officials want systems that simply react, without their own military experience being brought to bear.
I do not have direct military experience myself; I am a roboticist. But having spoken with a great many people on this topic who do, I find universal agreement that it does not take much to teach someone how to employ a weapon; you can teach eighteen- and nineteen-year-olds how to do that. But that is no substitute for decades of military experience in judging how and where weapons should be used, particularly when civilians are involved.
It seems likely that AI and other technologies will enable the spread of autonomous technologies in both the civilian and military spheres so that they soon become pervasive throughout society. Given this, are traditional treaty-based strategies of arms control and disarmament really feasible approaches to addressing the military implications of AI? Will we not need instead a more all-encompassing understanding of disarmament that attempts to address the drivers of military expenditure, conflict, and militarism?
That almost sounds like saying we just cannot fight any more. I think the ongoing conflict in Ukraine is a very clear indication that the prospect of technological armed conflict is not something we can simply ignore. There is still a lot of tension in the world. It is very unrealistic and naïve to say that we can just disarm and that doing so will lead to reduced military expenditure, reduced conflict, and reduced militarism.
But it is true that advanced and autonomous technologies are spreading and are being developed faster and faster. Treaty-based strategies are only one aspect of reining in these developments. There are other aspects, such as state-level regulations, norms, and industry discussions.
For example, the emergence of large language models has triggered an incredibly fast regulatory response. This shows that, when governments believe a risk exists, they can move quickly. Globally, every major country has developed or is in the process of developing an AI act. Are these acts perfect? Absolutely not. But they are proof that regulation can move fast. I do hope that members of governments reflect a bit on the fact that far less sophisticated models, put in full control of weaponized robots, have the potential to cause great harm.
What are the effects of the war in Ukraine? Many former skeptics of LAWS now suddenly feel that such weapons might work in favor of the Ukrainians. We see considerable momentum to rush into this technological field rather than to think about how it should be regulated or curbed.
It was only about a month or two after the most recent Ukraine conflict broke out in 2022 that members of the private sector, government, and academia who had been involved in these discussions in the past started having various conversations about the conflict. Can we afford to sit in North America and say that countries should not be able to develop technologies to defend themselves? What is the reasonable thing to do? For instance, what should be done against a hypothetical swarm of drones designed to target dismounted infantry? The best defense against that is probably not another swarm of drones designed to target dismounted infantry, but rather a system that targets the drones themselves. When the cannon was developed, the answer was not to build other cannons that shoot cannonballs out of the air; instead, fortifications were built differently from then on.
What is becoming increasingly clear in the Ukrainian conflict is the degree of capability that modern, low-cost robotics can have. But what we are seeing is more about low-cost, high-volume manufacturing and innovation than it is about the true use of autonomous weapons systems. Those drones are not doing anything particularly fancy. Moreover, the Ukrainians are defending their own soil, and that is not the place to let a drone that is fully capable of using lethal force operate autonomously. If anything, there is a stronger impetus to make sure that meaningful human control is enforced.
You come from the private sector. In your opinion, what role can private-sector actors actually play in curbing the development of LAWS?
The most important thing that we can do in the private sector is to educate: to educate on the capabilities, on the risks, on what is considered doable, and on what is controllable, and to educate on the potential benefits and harms of the dual-use nature of the foundational technologies. A good example occurred in 2015, when a few experts, including myself, were at the United Nations in Geneva and brought up the very early instances of adversarial attacks on convolutional neural networks. Nowadays, with the potential to jailbreak large language models, the problems have become much more complex. It has become clear that, much as with more basic approaches to deep learning, there are tradeoffs between robustness, predictability, and flexibility that need to be navigated. These engineering topics are issues that the private sector is uniquely suited to address.
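As a hedged illustration of the kind of adversarial attack mentioned above, the following Python sketch (using PyTorch) implements one well-known early technique, the fast gradient sign method (FGSM), against a placeholder classifier. The model, input, and epsilon value are stand-ins chosen only so the example runs, not a depiction of any deployed system; the point is simply that a small, targeted perturbation can change a network's prediction, which is why the robustness and predictability tradeoffs matter for anything safety-critical.

```python
# Minimal FGSM sketch against a stand-in classifier (illustration only).

import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (pixel values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return torch.clamp(adversarial, 0.0, 1.0).detach()


if __name__ == "__main__":
    # Stand-in classifier and input, just so the sketch runs end to end.
    # With a trained model and a suitable epsilon, the two predictions often differ.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    image = torch.rand(1, 1, 28, 28)
    label = torch.tensor([3])

    perturbed = fgsm_perturb(model, image, label)
    print("original prediction:", model(image).argmax(dim=1).item())
    print("perturbed prediction:", model(perturbed).argmax(dim=1).item())
```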
[1] The views expressed in this interview do not reflect the official policy or position of Rockwell Automation.
This interview is part of the edited volume “Rethinking Disarmament in an Age of Militarism: Crisis, Opportunity, and Contending Solutions”, The Disarmament Collective (eds.), Lynne Rienner Publishers, forthcoming. We thank Lynne Rienner for the prior release of this interview.