AI-Enabled Decision Support Systems as Means and Methods of Warfare

For the purposes of this analysis, AI-enabled decision-support systems (AI DSS) are defined as tools that leverage AI techniques to “analyse data, provide actionable recommendations, and assist decision-makers situated at different levels in the chain of command to solve semi-structured and unstructured decision tasks” (Klonowska 2022). In this definition, “AI” refers to machine-learning techniques such as deep learning, computer vision, and large language models. The term “decision-makers” is used broadly and is not limited to commanders. “Structured” and “unstructured” tasks span the range of decision-makers’ responsibilities, from well-ordered tasks to those with no clear-cut solutions (e.g., strategic planning).

Although AI DSS can be used in support of a variety of military tasks – from recruitment to system maintenance – their applications in support of the conduct of hostilities raise particular legal concerns. This article proposes that some AI DSS – specifically those that contribute to the conduct of hostilities (e.g., target identification, collateral damage estimation, selection of weapons) – fall within the meaning of “means or methods of warfare” subject to legal review under Article 36 of Additional Protocol I (AP I), and it summarizes the challenges posed by the review of such systems.

Scope of review

Whether AI DSS fall within the scope of the Article 36 review is unclear. This lack of clarity stems from (1) the imprecise textual articulation of what falls within the meaning of “means and methods of warfare”, and (2) States’ still limited engagement with this specific question. However, an interpretation of the ordinary meaning of Article 36 in light of its object and purpose (Article 31, VCLT) supports the argument that some AI DSS that contribute to the conduct of hostilities may fall within the scope of review. The object and purpose of Article 36 is to ensure that States comply with the limitations on their choice of “means and methods of warfare” (Article 35, AP I) and that they halt or prevent the development or acquisition of equipment that in some or all circumstances may violate international law (Lawand 2006). Where AI DSS may interfere with or undermine States’ efforts to comply with their legal obligations – as is the case for those AI DSS that contribute to the conduct of hostilities – a legal review is warranted.

First, it should be clarified that AI DSS are, by definition, tools that are neither embedded in nor otherwise control hardware components, and as such lack the capacity to select and engage targets without human intervention. AI DSS are also, by definition, non-weaponized. Thus, AI DSS do not fall within the meaning of “weapons” under Article 36. In rare situations, there may be uncertainty as to whether an AI-enabled system should be classified as an AI DSS or as an autonomous weapon. For example, an AI system that does not enable a weaponized vehicle, vessel, or aircraft but instead supports the operator of such a system may create classification uncertainty. If an AI system can, under certain conditions (e.g., the operator faints), switch to an automated mode, it should be treated as an integral component of a weapons system and reviewed as such.

Second, besides weapons, States party to AP I also have an obligation to conduct a legal review of “means or methods of warfare”. Means of warfare – although the term lacks a treaty definition – have been described either as synonymous with weapons or as tools that, while not themselves constituting weapons, nevertheless directly or indirectly contribute to offensive capabilities (McClelland 2003, p 405; Boulanin and Verbruggen 2017). For an indirect contribution, there needs to be a sequential “chain of effects” linking the tool to the intended or inflicted damage, harm, or injury (Boothby 2013, p 389). Cyber weapons are one example of tools that qualify as means of warfare whenever they are “used, designed, or intended to be used to cause injury to, or death of, persons or damage to, or destruction of, objects” (Schmitt et al. 2017, pp 452-453).

AI DSS are tools that can both directly and indirectly contribute to the offensive capabilities of a State. AI DSS may be employed to conduct target identification or verification, estimate collateral damage, suggest courses of action or weapons, or provide warnings of incoming threats, among other tasks. The purpose of AI DSS is to process data, identify patterns, and produce actionable information that decision-makers leverage to conduct operations. The chain of effects from AI DSS output to the conduct of hostilities can be more or less direct, depending on the way these systems are integrated into operations, the availability of alternative sources of information, human-machine interaction, cognitive load, and many other factors. The more users rely on AI DSS outputs in the conduct of hostilities, the more direct the impact of AI DSS on offensive capabilities. This reliance, however, is not static: the impact of AI DSS changes dynamically, oscillating between direct and indirect even during the same operation. Therefore, all AI DSS that process data to provide actionable outputs capable of affecting the conduct of hostilities should be considered to fall within the meaning of means of warfare and reviewed as such.

At the same time, AI DSS may also be considered to fall within the meaning of methods of warfare. Methods of warfare are interpreted as strategies, approaches, tactics, techniques, or procedures that determine how weapons are used or how hostilities are conducted within the context of a specific armed conflict, operation, or attack (e.g., ICRC 2006, pp 9-10). Some States consider their doctrine, manuals, or rules of engagement to constitute methods of warfare (Boulanin and Verbruggen 2017). The Tallinn Manual 2.0 likewise clarifies that the “cyber tactics, techniques, and procedures by which hostilities are conducted” should be treated as methods of warfare (Schmitt et al. 2017, p 453).

In this respect, AI systems can also be treated as a source of strategies, approaches, or tactics that determine how weapons are used or hostilities conducted. For example, an AI system that is integrated into a weapon contributes to the way the weapon is operated and how it inflicts damage, harm, or injury, thus constituting not only an integral part of the weapons system but also a method of warfare. Whenever AI DSS are used in the conduct of hostilities, their outputs may constitute strategies, approaches, and tactics that shape how hostilities are conducted. A large language model leveraged to generate courses of action is an example of a tool that can constitute a method of warfare.

As shown, there are significant overlaps in the analysis of AI DSS classification under Article 36. Although scholars are increasingly raising awareness of the need to review AI DSS that contribute to the conduct of hostilities, they hold different views on whether these systems should be classified as means or methods of warfare (e.g., Dorsey and Bo 2025). While weapons remain a distinct and well-understood category, “means or methods of warfare” are less clearly defined categories, with some States using the terms jointly in their doctrines (Boulanin and Verbruggen 2017).

Taking the purpose of Article 36 into consideration, there is no specific legal reason to separate the reviews of “means or methods” into distinct categories, seeing as both stem from the same provision, follow the same standard, and fulfill the same purpose. In fact, the ICRC suggests that “a weapon or means of warfare cannot be assessed in isolation from the method of warfare by which it is to be used” (ICRC 2006, p 10). The three elements are interconnected: it is not only “the design or intended purpose, but also the manner in which it is expected to be used on the battlefield” that must be considered cumulatively in the legal review (ibid.).

Further, it should be considered whether a clear distinction between weapons, means, and methods of warfare may be warranted on other grounds, outside the ambit of Article 36. Some have suggested that, under customary international law, States non-Party to AP I are obliged to review weapons only, hence excluding means and methods of warfare from their customary legal review obligations. However, Jevglevskaja’s (2018) study demonstrates that there is insufficient State practice to sustain this claim. Hence, unless satisfactorily shown otherwise, customary international law has no bearing on States’ obligations to review weapons, means, or methods of warfare (ibid.). Other provisions of AP I also suggest that means and methods of warfare should be treated jointly in the analysis of their legality. For instance, Article 57(2)(a)(ii) requires Parties to “take all feasible precautions in the choice of means and methods of attack”. Similarly, Article 35 (which precedes the Article 36 legal review) limits the right of Parties “to choose methods or means of warfare”.

Given the preceding discussion, there are at least three reasonable approaches to categorising AI DSS under the Article 36 legal review: they may be classified as (1) means of warfare, (2) methods of warfare, or (3) both means and methods of warfare. Owing to the characteristics of AI DSS – such as the ease of repurposing and the capacity for multifaceted applications – significant conceptual overlap among these categories is to be expected. Moreover, Article 36 does not establish a legal basis for strictly distinguishing between means and methods of warfare. Accordingly, this commentary contends that AI DSS fall within both means and methods of warfare, and that States should undertake rigorous review measures and exercise appropriate discretion to prevent the development, acquisition, and ultimately use of all such AI DSS as can, in some or all circumstances, violate or lead to violations of international law.

Lastly, it should be noted that States may need to implement preliminary assessments of AI DSS and their impact on the conduct of hostilities to determine whether a system falls within the meaning of means or methods of warfare and requires a full legal review. States are advised to be over-inclusive, seeing as the purpose of the legal review is to prevent and mitigate deployment risks that may contribute to the unlawful conduct of hostilities.
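The two-stage process described above – an over-inclusive preliminary screening followed, where warranted, by a full legal review – can be illustrated as simple decision logic. The attribute names and criteria below are illustrative assumptions for the purpose of exposition, not an established screening standard:

```python
from dataclasses import dataclass


@dataclass
class AIDSSProfile:
    """Illustrative attributes a preliminary assessment might record (assumed names)."""
    produces_actionable_outputs: bool     # e.g., target nominations, collateral damage estimates
    affects_conduct_of_hostilities: bool  # outputs feed targeting or weapon-selection decisions
    purely_administrative: bool           # e.g., recruitment, maintenance scheduling


def requires_full_review(profile: AIDSSProfile) -> bool:
    """Over-inclusive screening: when in doubt, route the system to a full Article 36 review."""
    if profile.purely_administrative and not profile.affects_conduct_of_hostilities:
        return False
    # Any actionable output that can affect the conduct of hostilities triggers a full review.
    return profile.produces_actionable_outputs or profile.affects_conduct_of_hostilities
```

On this sketch, a collateral-damage-estimation tool (`AIDSSProfile(True, True, False)`) is screened into full review, while a maintenance-scheduling tool (`AIDSSProfile(False, False, True)`) is not.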

Standard of review

The second step in the legal review is to determine whether the system is subject to any prohibitions or restrictions.

There may be international treaties or regional/national legislation imposing certain prohibitions on the design, development, or use of AI DSS:

  • Internationally: AI DSS are currently not subject to any specific regulations at the international level. Nonetheless, developments within the framework of the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS) may have an impact not only on LAWS but also, more broadly, on the integration of AI into military operations.
  • Regionally/domestically: There may be domestic or regional instruments that regulate and/or restrict uses of AI. Although those instruments currently focus mostly on civilian applications, new legislation may emerge that also addresses military applications. In conducting a legal review, each State should consider the applicable legal instruments that may restrict and/or prohibit the use of certain AI DSS applications.

Given the current regulatory landscape, AI DSS are not per se unlawful. Rather, the review of an AI DSS would need to consider whether the effects it may cause are prohibited and/or whether its use would undermine other obligations of States under targeting law.

In addition to prohibitions, States need to reflect during the legal review on the restrictions that may apply to the design, development, and use of AI DSS. There are three prohibited effects under IHL and customary international law: (i) superfluous injury or unnecessary suffering, (ii) long-term and widespread harm to the natural environment, and (iii) indiscriminate targeting (Vestner and Rossi 2021, p 526). When an AI DSS falls within the scope of the Article 36 review, it should be assessed whether it may cause or lead to any of these three effects. Of the three, the question whether AI DSS may indirectly contribute to indiscriminate targeting is likely to be the most prominent. Answering it will require a review of, among other things, the system’s design specifications, the target profiles it is programmed to suggest, its reported performance metrics, and the adequacy of its design to the expected uses.

Once these preliminary considerations are reviewed, the reviewing authority assesses whether AI DSS might undermine compliance with targeting law. Scholars argue that a legal review does not relate only to weapons law and existing prohibitions, but should also consider the manner and circumstances of use such that compliance with targeting law is possible (Boothby 2018, p 40). Sanders and Copeland (2020) likewise argue that the legal review should include an assessment of IHL obligations, which may bear on “the degree of human input to targeting decisions, the acceptable standard of machine compliance, and the measures for ensuring ongoing compliance with IHL.”

This is where the analysis of AI DSS in relation to obligations under IHL poses the greatest challenges, specifically because the use of AI DSS in compliance with IHL depends on numerous factors. For example, the reviewing authority may need to consider whether the way the system is intended to be used leaves enough room for the required degree of human judgement; whether operators of such systems require training to ensure proper understanding and to mitigate risks of misuse; and whether the system has a sufficient level of cybersecurity to prevent or mitigate the risk of adversarial attacks, among many others. The expected minimum performance levels of the system will also introduce uncertainty into the legal review process, seeing as AI DSS may rapidly evolve during use, especially in fast-changing conflict environments, leading to fluctuations in precision and accuracy. Review authorities will have to assess whether the testing environments in which the performance metrics of an AI DSS were developed match the intended area of use – and therefore provide a reliable indicator of performance – and will have to keep updating this assessment as circumstances change.
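One concrete way a reviewing authority might check whether test-environment metrics remain a reliable indicator of deployed performance is to monitor how far deployment inputs drift from the test distribution. The following is a minimal sketch under assumed simplifications (a single numeric input feature and a z-score-style threshold); real systems would use richer drift statistics:

```python
import statistics


def distribution_shift_score(test_values: list[float], deployed_values: list[float]) -> float:
    """Crude drift indicator: difference between the mean of an input feature
    (e.g., sensor noise level) in test vs. deployment data, in units of the
    test-set standard deviation."""
    mu_test, mu_deployed = statistics.mean(test_values), statistics.mean(deployed_values)
    sigma_test = statistics.stdev(test_values)
    return abs(mu_deployed - mu_test) / sigma_test if sigma_test else float("inf")


def metrics_still_reliable(test_values: list[float],
                           deployed_values: list[float],
                           threshold: float = 1.0) -> bool:
    """If deployment inputs drift beyond the threshold, the review-time performance
    figures should no longer be relied upon and a re-assessment is indicated."""
    return distribution_shift_score(test_values, deployed_values) < threshold
```

With the assumed threshold of 1.0, a deployment environment whose feature mean sits more than one test-set standard deviation away from the test mean flags the review-time metrics as unreliable.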

Conduct of iterative reviews

Given the evolving nature of AI DSS, which can learn from data gathered in the deployment environment (online learning) or from newly uploaded data batches (offline learning), a single legal review cannot ensure that systems used in the conduct of hostilities remain compliant with IHL.

Therefore, an iterative process is necessary in which AI DSS and their implementation into targeting practices are monitored and evaluated. These iterative reviews may be triggered by significant updates to the AI software, by changes in the intended use, or by changes in the area of use (e.g., large-scale destruction that may interfere with an AI DSS’ capability to detect objects), thus accounting for changes not only in the model but also in the operational environment. The initial legal review of an AI DSS should determine the appropriate frequency of subsequent reviews and the circumstances that may trigger additional ad hoc reviews (e.g., poor verification results). The frequency should be informed by the learning mode (online/offline), the severity of the risks posed by suboptimal performance, and the types of adversarial actors and technological capabilities that may interfere with the proper functioning of the AI system.
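The triggering conditions and frequency factors above can be summarised as a small policy sketch. The trigger names, baseline interval, and interval adjustments are illustrative assumptions, not doctrine:

```python
from enum import Enum


class Trigger(Enum):
    """Illustrative ad hoc review triggers drawn from the discussion above."""
    SOFTWARE_UPDATE = "significant update to the AI software"
    INTENDED_USE_CHANGED = "change in the intended use"
    AREA_OF_USE_CHANGED = "change in the area of use (e.g., large-scale destruction)"
    POOR_VERIFICATION = "poor verification results"


def review_due(triggers: set[Trigger],
               days_since_last_review: int,
               online_learning: bool,
               high_risk: bool) -> bool:
    """Iterative-review policy sketch: any trigger forces an immediate review;
    otherwise a periodic review is due, with the interval shortened for
    online-learning systems and for systems with severe consequences of error."""
    if triggers:
        return True
    interval = 365          # assumed baseline: annual re-review
    if online_learning:
        interval //= 4      # continuously learning systems reviewed roughly quarterly
    if high_risk:
        interval //= 2      # high-severity risks halve the interval again
    return days_since_last_review >= interval
```

For instance, a model update forces a review regardless of elapsed time, while an unchanged offline-learning, low-risk system is only due again after the baseline interval.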

Outcome and effect

A legal review of an AI DSS may have two possible outcomes: rejection or (conditional) approval.

First, an AI DSS may be considered to pose such high risks to compliance with IHL as to warrant its rejection. This may happen when the performance of the AI DSS is so insufficient that the risk to civilians is deemed excessive, or when the expected uses of the system do not match its intended design. As previously mentioned in relation to the ICRC guide, the legality of a weapon, means, or method of warfare does not depend only on its “design or intended purpose” but also on its “expected” context of use. Determining whether the design matches the expected context of use is more difficult for AI-enabled systems, but the decision should be informed by adequate documentation of the concept, design, research, and development of the system. An AI DSS may also be rejected during legal review due to its opacity: AI DSS that lack a sufficient degree of explainability, interpretability, or traceability may pose grave risks to the ability of decision-makers to comply with relevant international legal provisions.

Second, a review of an AI DSS may also result in approval. In such cases, however, the reviewing authority should specify the conditions required to ensure compliance with IHL when using the AI DSS. For example, temporal or geographical restrictions may be imposed. Another restriction may require that only operators certified on a given AI DSS be allowed to use it, owing to the complexity and/or risks associated with its use. The reviewing authority may also enumerate the precautionary measures decision-makers should take when using the AI DSS (e.g., information verification) wherever overreliance on AI output risks non-compliance with applicable international law.

Summary

Given the potential of AI DSS to directly or indirectly influence offensive capabilities and thus also targeting decisions and conduct of hostilities, AI DSS should be subject to thorough and iterative legal reviews – considering both their design and expected operational use – to ensure they do not contribute to unlawful conduct or undermine existing legal obligations.

References

Boothby, William H. 2013. “Methods and Means of Cyber Warfare.” International Law Studies 89: 387–405. https://digital-commons.usnwc.edu/ils/vol89/iss1/8/.

Boothby, William. 2018. “Dehumanization: Is There a Legal Problem Under Article 36?” In Dehumanization of Warfare: Legal Implications of New Weapon Technologies, edited by Wolff Heintschel von Heinegg, Robert Frau, and Tassilo Singer. Springer International Publishing. https://doi.org/10.1007/978-3-319-67266-3_3.

Boulanin, Vincent, and Maaike Verbruggen. 2017. Article 36 Reviews: Dealing with the Challenges Posed by Emerging Technologies. SIPRI. https://www.sipri.org/publications/2017/policy-reports/article-36-reviews-dealing-challenges-posed-emerging-technologies.

Dorsey, Jessica, and Marta Bo. 2025. “AI-Enabled Decision-Support Systems in the Joint Targeting Cycle: Legal Challenges, Risks, and the Human(e) Dimension.” SSRN Scholarly Paper No. 5327115. Social Science Research Network. https://doi.org/10.2139/ssrn.5327115.

International Committee of the Red Cross. 2006. “A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977.” International Review of the Red Cross 88 (864): 931–56. https://doi.org/10.1017/S1816383107000938.

Jevglevskaja, Natalia. 2018. “Weapons Review Obligation under Customary International Law.” International Law Studies 94: 186–221. https://digital-commons.usnwc.edu/ils/vol94/iss1/8/.

Klonowska, Klaudia. 2022. “Article 36: Review of AI Decision-Support Systems and Other Emerging Technologies of Warfare.” In Yearbook of International Humanitarian Law. T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-491-4_6.

Lawand, Kathleen. 2006. “Reviewing the Legality of New Weapons, Means and Methods of Warfare.” International Review of the Red Cross 88 (864): 925–30.

McClelland, Justin. 2003. “The Review of Weapons in Accordance with Article 36 of Additional Protocol I.” International Review of the Red Cross 85 (850): 397–415.

Schmitt, Michael N., ed. 2017. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge University Press.

Vestner, Tobias, and Altea Rossi. 2021. “Legal Reviews of War Algorithms.” International Law Studies 97: 509–38.
Children%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLawand%2C%20Kathleen.%202006.%20%26%23x201C%3BReviewing%20the%20Legality%20of%20New%20Weapons%2C%20Means%20and%20Methods%20of%20Warfare.%26%23x201D%3B%20%26lt%3Bi%26gt%3BInternational%20Review%20of%20the%20Red%20Cross%26lt%3B%5C%2Fi%26gt%3B%2088%20%28864%29%3A%20925%26%23x2013%3B30.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1017%5C%2FS1816383107000884%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1017%5C%2FS1816383107000884%26lt%3B%5C%2Fa%26gt%3B.%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Reviewing%20the%20Legality%20of%20New%20Weapons%2C%20Means%20and%20Methods%20of%20Warfare%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kathleen%22%2C%22lastName%22%3A%22Lawand%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222006%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1017%5C%2FS1816383107000884%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22%22%2C%22collections%22%3A%5B%2263CAG8VX%22%2C%223HUVHPXM%22%5D%2C%22dateModified%22%3A%222025-10-02T06%3A25%3A10Z%22%7D%7D%2C%7B%22key%22%3A%22IXFZIQ7M%22%2C%22library%22%3A%7B%22id%22%3A6209884%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22McClelland%22%2C%22parsedDate%22%3A%222003%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMcClelland%2C%20Justin.%202003.%20%26%23x201C%3BThe%20Review%20of%20Weapons%20in%20Accordance%20w
ith%20Article%2036%20of%20Additional%20Protocol%20I.%26%23x201D%3B%20%26lt%3Bi%26gt%3BInternational%20Review%20of%20the%20Red%20Cross%26lt%3B%5C%2Fi%26gt%3B%2085%20%28850%29%3A%20397%26%23x2013%3B420.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1017%5C%2FS1560775500115226%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1017%5C%2FS1560775500115226%26lt%3B%5C%2Fa%26gt%3B.%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22The%20Review%20of%20Weapons%20in%20Accordance%20with%20Article%2036%20of%20Additional%20Protocol%20I%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Justin%22%2C%22lastName%22%3A%22McClelland%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222003%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1017%5C%2FS1560775500115226%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22%22%2C%22collections%22%3A%5B%2263CAG8VX%22%2C%223HUVHPXM%22%5D%2C%22dateModified%22%3A%222025-10-02T06%3A25%3A14Z%22%7D%7D%2C%7B%22key%22%3A%22RTQUBRZ4%22%2C%22library%22%3A%7B%22id%22%3A6209884%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Sanders%20and%20Copeland%22%2C%22parsedDate%22%3A%222020-11-27%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSanders%2C%20Lauren%2C%20and%20Damian%20Copeland.%202020.%20%26%23x201C%3BDeveloping%20an%20Approach%20to%20the%20Legal%20Review%20of%20Autonomous%20Weapon%20Systems.%26%23x201D%3B%20%26lt%3Bi%26gt%3BILA%20Reporter%26lt%3B%5C%2Fi%26gt%3B%2C%20November%2027.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Filareporter.org.au%5C%2F2020%5C%2F11%5C%2Fdeveloping-an-approach
-to-the-legal-review-of-autonomous-weapon-systems-lauren-sanders-and-damian-copeland%5C%2F%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Filareporter.org.au%5C%2F2020%5C%2F11%5C%2Fdeveloping-an-approach-to-the-legal-review-of-autonomous-weapon-systems-lauren-sanders-and-damian-copeland%5C%2F%26lt%3B%5C%2Fa%26gt%3B.%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22blogPost%22%2C%22title%22%3A%22Developing%20an%20Approach%20to%20the%20Legal%20Review%20of%20Autonomous%20Weapon%20Systems%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lauren%22%2C%22lastName%22%3A%22Sanders%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Damian%22%2C%22lastName%22%3A%22Copeland%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22blogTitle%22%3A%22ILA%20Reporter%22%2C%22date%22%3A%222020-11-27%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Filareporter.org.au%5C%2F2020%5C%2F11%5C%2Fdeveloping-an-approach-to-the-legal-review-of-autonomous-weapon-systems-lauren-sanders-and-damian-copeland%5C%2F%22%2C%22language%22%3A%22en-AU%22%2C%22collections%22%3A%5B%22ZDM4MR7K%22%2C%223HUVHPXM%22%5D%2C%22dateModified%22%3A%222025-09-28T03%3A57%3A37Z%22%7D%7D%2C%7B%22key%22%3A%22TW7GRJM9%22%2C%22library%22%3A%7B%22id%22%3A6209884%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Schmitt%22%2C%22parsedDate%22%3A%222017%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSchmitt%2C%20Michael%20N%2C%20ed.%202017.%20%26%23x201C%3BRule%20103.%20Definitions%20of%20Means%20and%20Methods%20of%20Warfare.%26%23x201D%3B%20In%20%26lt%3Bi%26gt%3BTallinn%20Manual%202.0%20on%20the%20International%20Law%20Applicable%20to%20Cyber%20Operations%26lt%3B%5C%2Fi%26gt%3B.%20Cambridge%20University%20Press.%20%26lt%3Ba%20class%
3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1017%5C%2F9781316822524.023%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1017%5C%2F9781316822524.023%26lt%3B%5C%2Fa%26gt%3B.%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22bookSection%22%2C%22title%22%3A%22Rule%20103.%20Definitions%20of%20means%20and%20methods%20of%20warfare%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Michael%20N%22%2C%22lastName%22%3A%22Schmitt%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22bookTitle%22%3A%22Tallinn%20Manual%202.0%20on%20the%20International%20Law%20Applicable%20to%20Cyber%20Operations%22%2C%22date%22%3A%222017%22%2C%22language%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1017%5C%2F9781316822524.023%22%2C%22collections%22%3A%5B%227E3WF4LA%22%2C%223HUVHPXM%22%5D%2C%22dateModified%22%3A%222025-10-02T05%3A42%3A39Z%22%7D%7D%2C%7B%22key%22%3A%22SPEI4K6R%22%2C%22library%22%3A%7B%22id%22%3A6209884%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Vestner%20and%20Rossi%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BVestner%2C%20Tobias%2C%20and%20Altea%20Rossi.%202021.%20%26%23x201C%3BLegal%20Reviews%20of%20War%20Algorithms.%26%23x201D%3B%20%26lt%3Bi%26gt%3BInternational%20Law%20Studies%26lt%3B%5C%2Fi%26gt%3B%2097%20%281%29%3A%20509%26%23x2013%3B55.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdigital-commons.usnwc.edu%5C%2Fils%5C%2Fvol97%5C%2Fiss1%5C%2F26%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdigital-commons.usnwc.edu%5C%2Fils%5C%2Fvol97%5C%2Fiss1%5C%2F26%26lt%3B%5C%2Fa%26gt%3B.%26lt%3B%5C%2Fdiv%26gt
%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Legal%20Reviews%20of%20War%20Algorithms%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tobias%22%2C%22lastName%22%3A%22Vestner%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Altea%22%2C%22lastName%22%3A%22Rossi%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222021%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%222375-2831%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdigital-commons.usnwc.edu%5C%2Fils%5C%2Fvol97%5C%2Fiss1%5C%2F26%22%2C%22collections%22%3A%5B%22ZDM4MR7K%22%2C%223HUVHPXM%22%5D%2C%22dateModified%22%3A%222025-09-28T07%3A32%3A58Z%22%7D%7D%5D%7D
Boothby, William. 2018. “Dehumanization: Is There a Legal Problem Under Article 36?” In Dehumanization of Warfare: Legal Implications of New Weapon Technologies, edited by Wolff Heintschel von Heinegg, Robert Frau, and Tassilo Singer. Springer International Publishing. https://doi.org/10.1007/978-3-319-67266-3_3.
Boothby, William H. 2013. “Methods and Means of Cyber Warfare.” International Law Studies 89: 387–405. https://digital-commons.usnwc.edu/ils/vol89/iss1/8/.
Boulanin, Vincent, and Maaike Verbruggen. 2017. Article 36 Reviews: Dealing with the Challenges Posed by Emerging Technologies. SIPRI. https://www.sipri.org/publications/2017/policy-reports/article-36-reviews-dealing-challenges-posed-emerging-technologies.
Dorsey, Jessica, and Marta Bo. 2025. “AI-Enabled Decision-Support Systems in the Joint Targeting Cycle: Legal Challenges, Risks, and the Human(e) Dimension.” SSRN Scholarly Paper No. 5327115. Social Science Research Network, June 27. https://doi.org/10.2139/ssrn.5327115.
International Committee of the Red Cross. 2006. “A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977.” International Review of the Red Cross 88 (864): 931–56. https://doi.org/10.1017/S1816383107000938.
Jevglevskaja, Natalia. 2018. “Weapons Review Obligation under Customary International Law.” International Law Studies 94: 186–221. https://digital-commons.usnwc.edu/ils/vol94/iss1/8/.
Klonowska, Klaudia. 2022. “Article 36: Review of AI Decision-Support Systems and Other Emerging Technologies of Warfare.” In Yearbook of International Humanitarian Law. T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-491-4_6.
Lawand, Kathleen. 2006. “Reviewing the Legality of New Weapons, Means and Methods of Warfare.” International Review of the Red Cross 88 (864): 925–30. https://doi.org/10.1017/S1816383107000884.
McClelland, Justin. 2003. “The Review of Weapons in Accordance with Article 36 of Additional Protocol I.” International Review of the Red Cross 85 (850): 397–420. https://doi.org/10.1017/S1560775500115226.
Sanders, Lauren, and Damian Copeland. 2020. “Developing an Approach to the Legal Review of Autonomous Weapon Systems.” ILA Reporter, November 27. https://ilareporter.org.au/2020/11/developing-an-approach-to-the-legal-review-of-autonomous-weapon-systems-lauren-sanders-and-damian-copeland/.
Schmitt, Michael N., ed. 2017. “Rule 103. Definitions of Means and Methods of Warfare.” In Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge University Press. https://doi.org/10.1017/9781316822524.023.
Vestner, Tobias, and Altea Rossi. 2021. “Legal Reviews of War Algorithms.” International Law Studies 97 (1): 509–55. https://digital-commons.usnwc.edu/ils/vol97/iss1/26.

Further resources

Meier, Michael W. 2022. “Responsible AI Symposium—Responsible AI and Legal Review of Weapons.” Articles of War, December 27. https://lieber.westpoint.edu/responsible-ai-legal-review-weapons/.
Mimran, Tal, Magda Pacholska, Gal Dahan, and Lena Trabucco. 2024. “Israel-Hamas 2024 Symposium—Beyond the Headlines: Combat Deployment of Military AI-Based Systems by the IDF.” Articles of War, February 2. https://lieber.westpoint.edu/beyond-headlines-combat-deployment-military-ai-based-systems-idf/.
Panwar, R.S., Li Qiang, and John N.T. Shanahan, eds. 2014. Military Artificial Intelligence Test and Evaluation Model Practices. INHR. https://inhr.org/military-ai-t%26e-practice.