Which decisions, activities, and processes across the life-cycle of a weapons system would collectively
contribute towards and enable appropriate human-machine interaction/human control/human judgment?
What would be the interaction between these various decisions, activities, and processes and how would they
vary based on the operational context and the characteristics and capabilities of the weapons system?
To start with, States should assess, through legal reviews, whether the deployment and use of new weapons systems carries a risk of non-compliance with IHL. During the research and development process, systems should be conceived in a way that enables relevant human control, providing operators with sufficient understanding to achieve adequate situational awareness and to grasp why the machine is suggesting, or is about to take, a specific course of action.

During deployment and use, human control should be implemented through a doctrine of use that establishes the available operational modes, identifies control privileges reserved exclusively for human operators (such as human approval for any substantial modification of the mission, and the ability to cancel the mission or deactivate the system), and sets limits on the system's use in specific situations.
Contribution of Spain (September 2021)