There is much common ground among all delegations and in the respective proposals regarding the importance of weapons review at the national level. This, after all, is already an existing legal obligation. There is also growing recognition that exchange of information and good practices is essential in that regard.
The proposal for Principles and Good Practices submitted by the U.S. and its co-sponsors provides useful suggestions for such good practices, particularly in its Article 24, which proposes tests related to compliance with IHL principles and CCW rules; conducting reviews with an appropriate understanding of a weapon's capabilities and limitations, its planned uses, and its anticipated effects in those circumstances; and advice on potential practical measures that would assist in ensuring IHL compliance.
In addition, the said proposal provides useful suggestions on good practices related to human-machine interaction, drawing upon the wealth of understanding developed throughout the past nine years of GGE discussions. These include, among others, conducting operations under a responsible command; conducting assessments, investigations, or other reviews of incidents that may involve violations; and taking measures to mitigate the risk of unintended engagements.
These good practices are also salient in other proposals, including the Roadmap and Draft Protocol VI that we submitted with a group of states, which provide potential legal wording that would make these good practices a universal obligation. In addition, the Roadmap and Draft Protocol VI propose the integration of interdisciplinary perspectives in the research and development of autonomous systems. Such an interdisciplinary approach is essential given that autonomy adds a layer of complexity to weapons review, as demonstrated in the questions posed by the U.K. in the annex to its proposal for an IHL Compendium.
It is clear that on the issue of weapons review there is broad agreement, and you have ample substantive material to prepare a draft substantive report that the GGE can work on at its next session.
You are right in noting in your introductory remarks that among the differences is the question of whether such reviews should be left to national authorities to undertake and implement, or whether they should be based on a multilaterally agreed commitment or obligation. We are on the side of those who believe a legal commitment is a necessary baseline, from which states can develop good practices.
In Draft Protocol VI, for instance, we propose regular consultations of High Contracting Parties, which could provide an institutionalized mechanism for exchange of good practices, transparency and confidence-building, as well as further development of norms related to, among others, obligations with regard to weapons review.
But what my delegation can conclude is that the measures contained in all the proposals will be effective only to the extent that the weapon system being reviewed is understandable and explainable. Compliance with the legal requirement to “respect” and “ensure respect for” IHL principles necessitates, as the proposal for Principles and Good Practices stressed, “rigorous testing and evaluation of systems such as to ensure that they function as anticipated in a realistic operational environment.”
As the Dutch delegation stressed just now, this is difficult to ensure in the context of weapon systems that incorporate AI and machine learning. As demonstrated in interventions made yesterday by some delegations, including ours, AI, algorithms, and machine learning could create a “black box” that processes data and produces a result in a manner so complex as to render it unexplainable even to its programmers.
To be sure, technology could evolve in such a manner as to close this gap in explainability. In the meantime, it behooves us to ensure that weapon systems incorporating autonomy are not developed in such a way that their functions cannot be fully explained. This is an understanding that we as a GGE have already reached in our previous sessions.
These considerations further validate our position that the human element is foundational to any two-pillared architecture of prohibitions and regulations. It is nearly impossible to review a weapon whose effects and processes are not understandable. We therefore do not see any reason why we cannot already agree that any weapon system that is beyond explainability and understandability is ipso facto illegal and should never be developed.
Statement by the Philippines under agenda item 5, topic 5 (9 March 2023, 1st intervention)