
Controlling Autonomous Systems in the Security Sector


16 November 2022

With the use of artificial intelligence (AI), machines are becoming increasingly involved in decision-making processes. A key question is how to ensure human control so that increasingly autonomous systems can be reliably deployed by security forces. This question was addressed at a workshop held on 2 and 3 November 2022, organised by the Digital Society Initiative (DSI) of the University of Zurich in cooperation with the Swiss Drone and Robotics Centre of the DDPS (SDRC DDPS) at armasuisse Science and Technology. The presentations showcased the latest research findings on ethical and legal aspects of controlling autonomous systems in the security sector.

Noemi Schenk and Pascal Vörös, Swiss Drone and Robotics Centre of the DDPS, armasuisse Science and Technology

On the first day, results were presented from the research projects of the Swiss Drone and Robotics Centre of the DDPS (SDRC DDPS) and the National Research Programme 77 “Digital Transformation”. In addition, the international keynote speakers Dr. Linda Eggert (University of Oxford) and Dr. Giacomo Persi Paoli (United Nations Institute for Disarmament Research; UNIDIR) presented their findings. Around 30 participants from the armed forces, armasuisse, the Federal Administration and the research community took part in the discussion.

After a short introduction to the workshop by Prof. Dr. Abraham Bernstein (University of Zurich, DSI) and Pascal Vörös (SDRC DDPS), PD Dr. Markus Christen (University of Zurich, DSI) presented two survey-based studies. When AI is involved in security decisions, there is an expectation that better decisions will be made, which increases confidence in the capability of AI systems. At the same time, both the moral responsibility and the burden of responsibility remain with human beings, regardless of whether the human or the AI acts operationally. There is thus a danger that humans become “moral scapegoats” for mistakes made by AI systems.

Following this, Prof. Dr. Abraham Bernstein (University of Zurich, DSI) introduced the audience to the interaction between humans and AI. Humans are perceived as morally more reliable than AI systems, but are considered less capable. Furthermore, the first impression of an AI strongly influences how humans assess it, so an initial loss of confidence takes a long time to recover.

Dr. Serhiy Kandul (University of Zurich) then demonstrated a pitfall in the control of AI systems, using an experiment based on a computer game. In the experiment, the participants had to predict whether an AI or a human being would successfully master a task (the virtual landing of a lunar module) under the same conditions. Success under human control proved clearly more predictable than success under AI control; the participants, however, were not aware of this difference.

Prof. Dr. Thomas Burri (University of St. Gallen, HSG) presented eight recommendations for the ethical and legal assessment of robotic systems interacting with humans in the security sector. In summary, these recommendations show that because technology develops rapidly while legal norms adapt more slowly, a practical, pragmatic and dynamic form of applied ethics is needed.

There followed a glimpse into the future, in which Dr. Samuel Huber (forventis) demonstrated which factors are important for successful human-machine teams, such as a shared understanding of goals, reliability and communication. Using a simulation, he addressed how intuitive spoken communication between humans and machines might be established.

Daniel Trusilo (University of St. Gallen, HSG), who joined online from the US, presented practical examples of the ethical assessment of autonomous systems. The number of ethical principles applicable to AI is rising internationally, and so are the challenges in this area. It is thus not yet clear, for example, how autonomous systems can usefully be assessed against these principles, or how ethical interoperability can be achieved.

The keynote speakers Dr. Linda Eggert (University of Oxford) and Dr. Giacomo Persi Paoli (UNIDIR) reported on their current research on AI and its potential impact, and concluded by taking questions from the participants in a panel discussion. Eggert focused in her talk on the term “Meaningful Human Control”. She showed that three main concerns are raised against autonomous weapons systems: first, that these systems cannot comply with legal requirements, in particular international humanitarian law; second, that they violate human dignity; and third, that responsibility gaps arise. The general tenor is that these concerns could be addressed by the concept of meaningful human control. In her remarks, however, Eggert critically questioned whether the concept can actually bear this weight.

Dr. Persi Paoli, for his part, addressed the progress and obstacles of the discussions within the United Nations and the role of the relevant group of governmental experts on autonomous weapons systems. UNIDIR supports international efforts to safeguard the responsible development and use of AI. In 2019, the publication of 11 guiding principles provided an important impetus. A multilateral approach to controlling AI serves the cause of international peace and security.

On the second day, an internal workshop was held with a smaller group of participants from the previous day. In a first interactive phase, the participants exchanged ideas and interests; in a second phase, possible topics for further research were identified from this input.