It’s been encouraging on this topic to see the attention given during discussions of [AWS] to the conduct of legal reviews. Effective legal reviews are a clear procedural obligation for states party to [AP I], and for all states they are critical to ensuring that their armed forces comply with the rules of international humanitarian law in the use of any weapons system. The International Committee of the Red Cross endorses the view, recognised already by states in interventions this morning and in new proposals submitted to this session, that weapons reviews of [AWS] are a crucial step in the process of ensuring compliance with international law, just as they are for any weapon system that is being studied, developed, acquired or adopted.
[AWS] do raise specific challenges for legal review. At the review stage, there is already the challenge of the explainability of artificial intelligence algorithms. The complexity of these algorithms may prevent a reviewer from being able to predict, and so to determine, whether the weapon’s use would be unlawful in some or all circumstances. This challenge has been well highlighted this morning in interventions by the delegations of the Netherlands and the Philippines. Then there is the question posed directly by you, Chair, of systems incorporating machine learning, leading to changes in the functioning of the [AWS] in a way that would affect the conditions in which it would trigger the application of force. With these systems, we are concerned that a legal review conducted before use may well become invalidated following activation, as the software modifies itself, leading to a change in the parameters on which that system was reviewed. We consider that this raises the question for states of how they would define the threshold at which a change resulting from machine learning should be considered a modification that triggers the need for a new review. We welcome recent efforts of states in this regard, for example in national positions, such as the US directive on autonomy in weapon systems, to articulate relevant thresholds. For the ICRC, we would link this threshold to whether the modification affects the function, technical capability or effect to the extent that it would have an impact on compliance with international law. Even with clarity on this legal threshold, we consider there may still be practical challenges of feasibility in conducting ongoing reviews, and you have already identified challenges such as financial cost and time. The Austrian working paper suggests a need to constantly review and reassess any possible changes or modifications in the system’s functioning.
In practice, such a constant review may not be possible, and it is perhaps more likely to mean that an [AWS] whose targeting functions change during use would be unlawful and prohibited as an unpredictable [AWS].
These are very worthwhile issues for further discussion and articulation. At the same time, we caution that legal reviews are not a panacea for ensuring IHL compliance. They are one step in a chain of national implementation of [IHL] rules, and their conduct alone cannot relieve operators and decision makers in conflict of their IHL obligations under key rules such as distinction, proportionality and precautions, which we have discussed at length earlier this week. As such, while we welcome the continued and increased conduct of legal reviews for autonomous and indeed all weapons, and encourage the sharing of best practices in this regard, this cannot be a substitute for action towards a legally binding international instrument on [AWS]. Ultimately, we see legal reviews as a useful aspect of how such specific prohibitions and restrictions on autonomous systems, once set at the international level, would be operationalized and adhered to in practice.
Statement by the ICRC under agenda item 5, topic 5 (9 March 2023) (transcript)