Autonomous weapon systems and IHL compliance: A constrained legal optimisation problem

Mar 11, 2024

Introduction

Many of the concerns that have been raised about the increasing military uptake of autonomous weapon systems (AWSs) relate to compliance with international humanitarian law (IHL). A wide range of arguments have been made for and against reliance on autonomous capabilities based on how they are believed to help or hinder the operating state’s efforts to meet its IHL obligations.

Most often, these arguments have been made in isolation or as part of broader advocacy efforts for or against adoption of AWSs that stop short of covering the full range of compliance matters relating to AWSs. It is then left to the bearers of IHL obligations to piece together the disparate arguments into a coherent structure in order to answer the overarching question of whether, and in what circumstances, they can legitimately utilise a particular autonomous capability.

A detailed solution to that problem would be far beyond the scope of a single research article, and the specifics would naturally vary with the legal obligations borne by each state. Instead, this paper offers one additional piece of that larger puzzle. It proposes a simple conceptual framework which can be used to organise and assess legal arguments and reconcile them with each other and with the broader effects that autonomous capabilities will have on military operations.

Specifically, it proposes that the task of integrating autonomous capabilities into an armed force may be viewed as a ‘constrained optimisation’ problem. Constrained optimisation is a class of problem in which the goal is to attain the ‘best’ possible outcome while conforming to one or more constraints, which limit the set of acceptable solutions from which a choice can be made.

The discussion in this paper is in general, high-level terms. It describes the structure of a possible approach to integrating AWSs into armed forces rather than the details of a specific solution. Specific solutions must be tailored to the obligations borne by the state undertaking the assessment and are beyond the scope of a work of this nature.

The legal framework utilised in this discussion is Additional Protocol I (API) to the 1949 Geneva Conventions (1977). At the time of writing, API conventionally binds 174 states parties, with three other states being signatories. Additionally, some relevant provisions of API are considered, with varying degrees of acceptance, to represent customary law which is binding on all states.

The paper is in six substantive parts. Part I provides an overview of the ways in which autonomous systems pose novel and, sometimes, challenging compliance problems for armed forces. Part II briefly outlines the characteristics of constrained optimisation problems and explains why they are a useful model for ensuring legal compliance in the uptake of AWSs. Part III explains the nature of the legal outcome that is sought: essentially, an ideal ‘balance’ (Schmitt 2010) between military necessity and humanity. Part IV surveys the principal types of AWS-related constraints which may limit states’ ability to seek that ideal balance. Part V covers the role of weapon reviews. Part VI concludes the paper.

Legal compliance and AWS: challenges and opportunities

Integration of increasingly autonomous systems into defence forces has raised some unique concerns about ensuring compliance with applicable legal frameworks, as well as some promising opportunities. The principal considerations are covered in more detail in later parts of this paper. This part sets the scene by outlining why autonomous machines in general give rise to questions about compliance which differ somewhat from those arising from other military technologies.

Unlike many other advances in military technology, machine autonomy does not manifest as a single well-defined class of devices or even as a single technology. In this context, ‘autonomous’ refers to a capability which may be realised through the application of a range of technologies, most notably artificial intelligence and robotics. It is, at its core, the capacity of a hardware or software system to manage its own operation to some significant extent: that is, to operate without needing to receive ongoing guidance or instructions from a human operator (or, at least, with a reduced need for such human interaction) (McFarland 2020, pp. 28–56).

From the perspective of a human operator, machine autonomy is simply a means of control which may be applied to a hardware or software device. It is the control applied in advance of an operation rather than in real time during the operation. The primary effect of making a system autonomous is on the relationship between the operator and the system being operated.

More specifically, machine autonomy means reallocating the performance of specified tasks away from a human operator and making them instead a function of an autonomous machine. That reallocation is the source of novel questions about how best to ensure, and potentially enhance, compliance with IHL. The burden of complying with IHL belongs to human beings, either individually or via institutions. The challenge is to ensure that responsible humans remain able to discharge their obligations when a regulated function, and even some part of the ‘decision’ to perform that function, is performed by a machine (Dignum 2019). The challenge is magnified by the associated problem of managing the expanded influence of weapon system development processes (McFarland and McCormack 2014) on battlefield operations and the increased burden on personnel responsible for reviewing AWSs for compatibility with their state’s IHL obligations (Sanders and Copeland 2020).

There are other potential complicating factors. Autonomous capabilities will often be utilised early in an attack process, such as in an intelligence, surveillance and reconnaissance (ISR) or decision support role which feeds into a human decision and then perhaps back to a task being assigned to another autonomous system. With that fluid transfer of functionality between human and machine, it may be difficult to distinguish the contributions each makes to the battlefield outcome and to ensure that the overall result is consistent with the operator’s IHL obligations.

Underlying that is the tremendous complexity of modern autonomous systems (Helle et al. 2016) and, even more so, of those which are likely to be developed in years to come. It is already well known that exhaustive testing of complex software systems is becoming infeasible (Whittaker 2000) and that the decision processes adopted by artificially intelligent systems often cannot be clearly explained by their operators (Mayer-Schönberger and Cukier 2013, pp. 178-179). The possibility of increased reliance on online learning, whereby an autonomous machine can optimise its own behaviour in response to information acquired while in operation (Linkens and Nyongesa 2002), will further complicate efforts to understand and manage the behaviour of advanced autonomous systems. At the same time, the capabilities of such systems will offer the potential to preserve and improve compliance with IHL in increasingly challenging combat conditions: superior situational awareness through autonomous ISR, increased accuracy, more precise applications of force and computational aids to improve decision-making in high-tempo, high-stress environments may all be available to combatants through use of autonomous technologies.

Constrained optimisation

The task of integrating autonomous capabilities into a defence force therefore amounts to maximising realisation of the opportunities which the technology presents while remaining within all applicable legal limits. That is a form of problem which is broadly dealt with by a process called ‘constrained optimisation’, which, as noted above, seeks to attain the best possible (‘optimal’) outcome while conforming to one or more constraints which limit the set of acceptable solutions from which a choice can be made. For a general introduction to the topic (albeit with a mathematical focus), see Diwekar (2020).
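For readers unfamiliar with the formalism, the canonical statement of a constrained optimisation problem can be sketched as follows. This is a generic mathematical form offered purely for orientation, not a claim that the legal problem discussed below has a numerical formulation; Diwekar (2020) gives the full treatment.

```latex
% Generic form of a constrained optimisation problem (illustrative only).
% x denotes the decision variables, f the objective to be maximised,
% g_i the inequality constraints and h_j the equality constraints.
\begin{aligned}
\max_{x \in X} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \le 0, \quad i = 1, \dots, m, \\
                        & h_j(x) = 0, \quad j = 1, \dots, k.
\end{aligned}
```

Read against the analogy developed in this paper, x loosely stands for the manner in which a state integrates and employs autonomous capabilities, f for the balance struck between military necessity and humanity, and the constraint functions for the normative, technological and strategic/operational limits surveyed in later sections.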

Optimising an outcome in the face of constraints is a process which people and organisations take part in constantly. On the one hand, it encompasses simple tasks like shopping for the cheapest car that has the features we want. On the other hand, it extends to complex undertakings in the design of advanced technological, logistical, economic or organisational systems in which the goal is to realise the best possible performance for the lowest resource cost. In the present context, states have an interest in seeking the optimal balance between their capacity to use military force to protect their interests and their obligation to limit the harmful effects of warfare. At the same time, they must abide by a range of normative, technological, strategic/operational and other constraints on the actions they can take in pursuit of that optimal balance. The ‘solution’ to this optimisation problem would then represent the manner in which AWSs are integrated into a state’s military forces, whether that amounts to extensive adoption of autonomous capabilities, more limited adoption due to some uses being ruled out by the constraints discussed below or even no use of autonomous capabilities at all (where the constraints are such that the optimisation problem has no valid solution).
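The structure of that reasoning, ‘rule out every option that violates a constraint, then choose the best of what remains’, can be illustrated with a minimal sketch based on the car-shopping example above. The sketch is purely illustrative: the candidate cars, prices and feature names are invented for the purpose, and nothing here suggests that AWS integration decisions reduce to such a calculation.

```python
# Minimal sketch of constrained optimisation as "filter, then optimise".
# All candidate data below are hypothetical, used only to show the structure.

candidates = [
    {"name": "A", "price": 24_000, "features": {"airbags", "tow_bar"}},
    {"name": "B", "price": 19_500, "features": {"airbags"}},
    {"name": "C", "price": 21_000, "features": {"airbags", "tow_bar", "awd"}},
]

required_features = {"airbags", "tow_bar"}  # constraint: must have these features
budget = 30_000                             # constraint: must not exceed this price

# Step 1: discard every option that violates a constraint (leaving the feasible set).
feasible = [
    car for car in candidates
    if required_features <= car["features"] and car["price"] <= budget
]

# Step 2: among the feasible options, pick the one with the best objective value
# (here, the lowest price). If nothing is feasible, the problem has no valid
# solution -- the analogue of a capability whose use cannot be made compliant.
best = min(feasible, key=lambda car: car["price"]) if feasible else None
print(best)  # -> {'name': 'C', 'price': 21000, 'features': {...}}
```

The two-step shape is the same one that runs through the rest of this paper: the constraints discussed in the following sections mark out the space of permissible ways to employ an autonomous capability, and the necessity-humanity balance is then sought only within that space.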

Although constrained optimisation is typically viewed as a mathematical process, this paper does not propose that there is an unambiguous numerical solution to this essentially legal problem. The discussion herein is based on the observation that the two types of problem are of similar form and that reference to how constrained optimisation problems are solved might provide guidance to parties engaged in balancing and reconciling the many competing considerations that go into ensuring that the use of AWSs complies with relevant law.

The starting point is to identify the quantity for which the ‘best’ outcome is to be found. In the IHL context, that is the reconciliation of two principles. The principle of military necessity ‘permits measures which are actually necessary to accomplish a legitimate military purpose and are not otherwise prohibited by IHL’ (International Committee of the Red Cross n.d.). The principle of humanity ‘operates to protect the population (whether combatants or noncombatants) and its property’ (Schmitt 2010, p. 799) from harm and suffering resulting from armed conflict. IHL as it stands today is essentially directed at achieving a balance between those two, often competing, principles. As Schmitt puts it, ‘IHL represents a carefully thought-out balance between the principles of military necessity and humanity. Every one of its rules constitutes a dialectical compromise between these two opposing forces’ (Schmitt 2010, p. 798).

However, not every hypothetically possible measure may be taken in pursuit of an ideal balance between military necessity and humanity. The realities of the global security environment, the exigencies of combat, the characteristics of autonomous technologies and the rules of IHL combine to restrict the measures that armed forces are legitimately able to pursue. Those constraints must be understood and adhered to, lest the chances of military victory be unduly compromised or legal rules limiting the harm from armed conflict be violated.

Viable solutions for ensuring that use of autonomous capabilities is both militarily effective and legally compliant may be found within the space that is marked out when all constraints are satisfied. The next two sections, which make up the bulk of this paper, discuss in more detail the various competing considerations.

Military necessity and humanity

As indicated above, the overarching purpose of IHL is to achieve a certain balance between two competing interests of states and the international community in relation to conflict. On the one hand, states have a legitimate interest in protecting their national interests through military means when peaceful resolution or prevention of disputes is unsuccessful. The principle of military necessity is the means by which IHL protects that interest (Schmitt 2010, p. 799). On the other hand, states are also obligated to ‘limit the suffering and destruction incident to warfare’ (Schmitt 2010, p. 796). The principle of humanity serves that purpose. The interplay between the two principles has been visible since the earliest days of modern IHL. The 1868 St Petersburg Declaration, for example, in prohibiting the use of certain explosive projectiles, ‘by common agreement fixed the technical limits at which the necessities of war ought to yield to the requirements of humanity’ (Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight 1868: Preamble). A complex process of balancing those two principles over the ensuing decades has resulted in the corpus of IHL which guides the conduct of armed conflict today. Those rules impose a wide range of specific requirements and prohibitions on parties involved in armed conflict with the aim of ensuring that the balance is maintained.

Even in cases not addressed by a specific IHL rule, the well-known Martens Clause expresses the desire for a balance between military necessity and humanity. The wording of the Martens Clause has changed somewhat since its first appearance in 1899 (International Convention with respect to the Laws and Customs of War on Land 1899: Preamble), but its essence remains intact today. In API art 1(2), it states:

In cases not covered by this protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.

Although the two principles are most often presented as being in opposition, with the principle of humanity acting as a limitation on the measures that may be taken in the name of military necessity, there is not always tension between them. Schmitt has previously noted the synergy between precision strikes and humanitarian benefits (Schmitt 2005, p. 453). Beer (2016) has written more generally about the potential for the principle of military necessity itself to reduce the hazards of war independently of explicit reliance on humanitarian principles. Relevantly for current purposes, Beer (2016, p. 826) also notes the humanitarian benefits of advancements in military technology:

Today’s weapons are much more accurate, with pinpoint targeting allowing for surgical strikes with relatively minimal collateral damage. For example, joint direct attack munitions and global positioning system navigation, which have been added to existing free-falling bombs, has allowed them to achieve an accuracy of 20 feet from 15 miles away, with great potential for decreasing collateral damage.

The integration of autonomous capabilities into military operations will offer further opportunities to enhance both military effectiveness and humanitarian protection, but only to the extent that all constraints imposed by IHL and other factors can be met. The specific constraints that may have to be satisfied are many; the major types are surveyed in the next section.

Legally relevant constraints on military use of autonomous capabilities

The various requirements and restrictions that must be satisfied in the process of integrating autonomous capabilities into military operations can be usefully categorised as normative, technological or strategic/operational. No claim is made that the set of constraints mentioned in this section is exhaustive or even that the choice of categories is necessarily optimal. It does, however, represent arguably the most prominent issues of concern that have been raised in the literature so far. The issues listed here go beyond simply ‘ways in which law constrains the use of autonomous capabilities’, although those matters are addressed. The aim is to cover all matters which are important in ensuring that use of autonomous capabilities is not just minimally legally compliant but maximally effective as well as being consistent with the spirit, and not just the letter, of IHL.

(Note that this investigation is concerned only with constraints that affect the legal aspects of the task of integrating AWSs into armed forces. A broader investigation may also address other matters such as the attitudes of military personnel to working with AWSs (Galliott 2018), public opinion about AWSs and so forth.)

Not all of the constraint types below are entirely independent of each other. Some arguably overlap and may be seen as different aspects of the same underlying issue, particularly among those constraints drawn from different categories. That does not diminish the usefulness of the analysis. Separating out the various aspects of a difficult problem for individual examination can yield useful insights which support a more robust solution.

Normative constraints

In this paper, constraints of a strictly legal nature are grouped under this heading. A broader investigation might also include ethical concerns, policy constraints, political considerations and other sources of normativity here.

Complying with rules of IHL

The principal legal requirement is, of course, to ensure conformance with all applicable rules of IHL. Vast amounts have been written about the expected ability or inability of AWSs and the states which operate them to meet the standards required under modern IHL in various circumstances, and those arguments will not be repeated here. An overview has been provided by McFarland (2020). Only two general observations will be made.

First, to the extent that compliance with IHL is a constraint on uptake of autonomous capabilities, the constraint is to be satisfied by the overall efforts of the operating state, through the combined human-machine systems it utilises in armed conflict, rather than by either component alone.

Second, satisfaction of IHL requirements is as much a matter of how autonomous systems are used as of their inherent capabilities. ‘Use’ of an AWS here refers to the result of two aspects of control which is exerted over weapon systems. The broader aspect is the set of control measures that states and their armed forces take to ensure that lethal force is applied in accordance with the law. These include weapon system reviews such as those required by Article 36 of API; training of personnel; formulation of strategic goals, operational objectives and rules of engagement; analysis and vetting of potential targets and so on. The narrower aspect is the set of ‘direct’ measures associated with operating a weapon: assessing a potential target, aiming and pulling a trigger (or perhaps, in the case of AWS, authorising that a machine-selected target be engaged) and so on. It is the overall effect of these control measures, coupled with the capabilities of the weapon system itself, which must result in an operation complying with rules of IHL (for an example, see Australia’s System of Control and applications for Autonomous Weapon Systems (2019)).

Participants in the debate about regulating development and use of AWSs have, naturally, given most of their attention to the more fundamental requirements of IHL. Most prominent are the principles of proportionality and distinction (including its many aspects such as recognising and accepting surrender (Sparrow 2015), recognising that an adversary is otherwise hors de combat (Umbrello and Wood 2021) and so on). In addition, states operating AWSs must also consider a number of more situational requirements. In specific operations, it may be necessary for a fighting force, whether human, machine or human-machine team, to be able to issue warnings to a civilian population, take prisoners and complete other tasks required by law.

Minimum required levels of human intervention

States must also consider the degree to which a human operator can directly intervene in an AWS operation to monitor the activities of the weapon system and, if needed, apply manual control measures. Written IHL does not explicitly state that any specific type or degree of direct human intervention in an operation is needed, but at least one limitation can be identified: personnel must be able to intervene to the extent necessary to ensure they can meet whatever IHL obligations they normally bear in relation to an operation of that kind (Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems 2019: Annex IV(c)).

IHL contains no explicit references to autonomy or autonomous systems and only minimal references to any particular means of controlling weapons. Legal limits on autonomous capabilities in weapon systems must therefore be inferred from the principles, rules and goals of general IHL, but the unique nature of autonomous control makes that challenging.

The key is to recall that the burden of complying with the rules of IHL rests with people, not machines. In some cases, an obligation is formally assigned to individuals. ‘[T]hose who plan or decide upon an attack’ (API: art 57(2)(a)) must undertake a range of precautionary measures aimed at minimising civilian harm when preparing to conduct attacks, and those who conduct the attack must also take ‘constant care’ (API: art 57(1)) to minimise civilian harm. Those responsible persons must retain the practical ability to meet their legal obligations when operating AWSs. They must be afforded the ability to ensure that the weapon systems for which they are responsible behave consistently with the constraints imposed by IHL. That ability rests on two foundations: access to sufficient information about the operation, the behaviour of the weapon and its interaction with targets and the environment, and an ability to affect the behaviour of the weapon as required by changing circumstances. That is the capacity for human intervention that is required for compliance with IHL.

A more difficult challenge is how to quantify that requisite capacity for intervention in AWS operations. Some provisions in IHL treaties suggest that a very high standard is required (the clearest example being the aforementioned requirement in API art 57(1) to take ‘constant care’ to spare civilians and civilian objects). In practice, though, some inability to manually intervene in operation of a weapon is tolerated: not all missiles can be actively guided in flight, land and sea mines may be left emplaced for considerable time without direct supervision and so on. The same applies to weapon systems with autonomous capabilities, but identification of the minimum acceptable capacity for intervention must be approached differently. Autonomous control may greatly reduce the need for, or the possibility of, direct human intervention in weapon system activities, without increasing the chance of a proscribed outcome to an operation. This is due to an AWS’s ability to respond ‘intelligently’ to events on the battlefield.

Technological constraints

The second group of constraints on legal use of autonomous capabilities is imposed by the nature of autonomous technology itself. Currently, and for the foreseeable future, this means, primarily, digital computing, with specific concerns arising in the area of artificial intelligence.

Managing operational risk

Concerns about risk in its many forms have, naturally, pervaded the debate about regulating military use of autonomous systems since its earliest days. Forms include the risk that human operators might lose control of their weapon systems, the risk that AWSs might misidentify civilians or civilian objects as legitimate targets, the risk that AWSs might interact with adversaries’ systems leading to escalation of a conflict and so forth. Any new military technology can introduce new sources of risk which military lawyers must consider. Those introduced by autonomous technologies are unique, though, in how they relate to the ways in which weapons are controlled rather than to any specific behaviour on the battlefield or effect on targets.

Operational risks unique to autonomous systems arise, generally, in two ways. The first is as a consequence of the removal of personnel partly or fully from positions of directly operating or overseeing the operation of weapon systems. The second is as a consequence of the nature of the specific technologies that are employed and how those technologies are used.

When a manually operated weapon is replaced with an AWS, the autonomous weapon’s control system ‘steps into the shoes’ of the human operator in respect of whichever functions of the system have been made autonomous. To the extent that the operator moves from being ‘in the loop’ to ‘on the loop’ or ‘out of the loop’ (Schmitt and Thurnher 2013, p. 235 n 12), they may be less able to intervene in a weapon system’s operation the way they might with a manually operated system and so less able to directly enforce compliance with IHL. When a weapon is operated manually and a fault occurs or a mistake is made, human involvement can act to limit the extent of the consequences. That ability is partly or fully missing when a weapon system operates autonomously, as Scharre (2016, p. 5) notes:

If [a manually operated] weapon system begins to fail, the human controller can modify the weapon system’s operation or halt the engagement before further damage is done.

With an [AWS], however, the damage potential before a human controller is able to intervene could be far greater. In the most extreme case, an [AWS] could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. If the failure mode is replicated in other [AWS] of the same type, a military could face the disturbing prospect of large numbers of [AWS] failing simultaneously, with potentially catastrophic consequences.

‘Human operators, being separate from the weapon systems they use and availed of different senses and sources of information, act as fail-safe devices when a weapon system malfunctions’ (McFarland 2020, p. 66). If a machine under the control of a computer malfunctions, it may not recognise an error condition within itself, so designers and operators must consider how to retain a fail-safe capability where it is deemed necessary.

Failure to adequately manage risks such as these could potentially lead to a conclusion that a rule of IHL has been violated: that a weapon system is inherently indiscriminate (API arts 51(4)(b), (c)), that adequate care was not taken in ascertaining the military or civilian status of a potential target (API art 51(4)(a)), that a disproportionate attack was launched (API art 51(5)(b)) or some other charge which may arise from events on the battlefield.

It is still an open question how states operating AWSs can best ensure that their use remains in compliance with relevant law. The control system of an AWS plays the role of ‘operator’ to some extent (physically, not legally) in respect of whatever functions have been automated. The person who bears legal obligations in relation to those functions must act through the control system to ensure that the weapon system behaves consistently with those obligations and with the state’s overall system of control. That requires the responsible person to monitor for and respond to developments which would unacceptably increase the risk that the weapon’s control system might cause it to act in a way which would fail to meet the obligations borne by the responsible person.

Managing human-autonomous system interaction

Integrating autonomous systems into military operations creates a human-machine system within which specified tasks are allocated to either human or machine elements, but responsibility for completing those tasks as required by law remains with responsible humans. Given the widely differing capabilities of humans and autonomous systems, success in maximising legal compliance (by achieving an optimal military necessity – humanity outcome) depends on optimally allocating tasks to each entity and on carefully managing the interactions between them. Human-automation interaction is a complex field with much ongoing research (e.g., Guzman and Lewis 2019; Liu et al. 2022), but one particularly important factor which determines the success of that interaction is trust (Bach et al. 2022). Trust may be described as ‘an attitude which includes the belief that the collaborator will perform as expected and can, within the limits of the designer’s intentions, be relied on to achieve the design goals’ (Moray and Inagaki 1999). Lewis et al. (2018, p. 136) note:

Studies have shown that trust towards automation affects reliance (i.e. people tend to rely on automation they trust and not use automation they do not trust)...For optimal performance of a human-automation system, human trust in automation should be well calibrated. Both disuse and misuse of the automation has resulted from improper calibration of trust, which has also led to accidents.

Trust must be calibrated to ensure that human operators who are responsible for tasks done in collaboration with autonomous systems are willing to trust the system to perform those tasks, and only those tasks, at which the system is reasonably expected to be competent.

Trust in automated and autonomous systems is complex and multi-faceted, dependent on aspects of both the systems in question and the people who interact with them (Devitt 2018). On the machine side, it has been shown, naturally, to be strongly influenced by system capabilities and behaviours: reliability, predictability, transparency and the incidence of system faults can all affect the degree of trust which an operator is willing to place in the system.

Trust is also influenced by the perceived risk to the operator of entrusting a task to an autonomous system. ‘People are more averse to using the automation if negative consequences are more probable, and, once trust has been lowered, it takes people longer to re-engage the automation in high-risk versus low-risk situations’ (Lewis et al. 2018, p. 141).

This has significant implications for efforts to ensure compliance with legal obligations in operations involving autonomous systems. It is not sufficient that an autonomous system be designed with the capability of performing some regulated operation such as assessing the military or civilian status of a potential target. The system’s relationship with its human operators must be calibrated such that the human will in fact be willing to use it to perform that operation when required but not become overly trusting or complacent to the point where the system is relied upon inappropriately.

To deviate far from that ideal is to risk a potentially serious violation of IHL. If, for example, a civilian is killed during an operation as a result of an operator’s decision to inappropriately rely on the target selection capabilities of an AWS, the incident could possibly be seen as an indiscriminate attack in violation of API art 51(4):

Indiscriminate attacks are prohibited. Indiscriminate attacks are:

(a) Those which are not directed at a specific military objective;

(b) Those which employ a method or means of combat which cannot be directed at a specific military objective; or

(c) Those which employ a method or means of combat the effects of which cannot be limited as required by this protocol; and consequently, in each such case, are of a nature to strike military objectives and civilians or civilian objects without distinction.

On the other hand, failure to make effective use of an autonomous system in the circumstances for which it is intended may be seen as a failure to take sufficient precautions in verifying the status of potential targets or selecting a means of attack, in violation of art 57(2)(a):

Those who plan or decide upon an attack shall:

(i) Do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects and are not subject to special protection but are military objectives within the meaning of paragraph 2 of Article 52 and that it is not prohibited by the provisions of this protocol to attack them;

(ii) Take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimising, incidental loss of civilian life, injury to civilians and damage to civilian objects.

Strategic/operational constraints

The third type of constraint arises from the nature of modern armed conflict: the practical requirement to fight efficiently given the exigencies of the combat environment and the need to effectively pursue military outcomes.

The legal concept of military necessity permits armed forces to take steps which are necessary in order to secure military objectives and are not otherwise prohibited by IHL; the precise steps which qualify as such in given circumstances will often be determined largely by the actions of adversaries, the nature of the battlefield and other matters over which an armed force has little or no control. These constraints arise because armed forces have a legitimate interest in winning the wars that they fight, and fighting effectively requires focusing their efforts on what is militarily necessary in the circumstances. Hayashi (2020, p. 19) notes:

…military necessity in its material sense embodies a two-fold truism. First, it is in the self-interest of each belligerent to do what is militarily necessary and to avoid what is unnecessary. Second, it is against each belligerent’s self-interest to forgo necessities of war or to encumber itself with non-necessities of war. Pursuing necessities and avoiding non-necessities form a morally neutral component of the belligerent’s vocational competence – that is, ‘to get the job done’.

Where such exigencies indicate a military necessity to employ an autonomous capability in a certain way, that acts as a practical constraint on the set of actions which can viably be taken in complying with applicable rules of IHL.

Role of weapon reviews

Legal reviews of weapon systems are one of the primary mechanisms by which inherently unlawful weapons are prevented from entering service. All states parties to API are required to review new weapon systems as set out in Article 36:

In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this protocol or by any other rule of international law applicable to the High Contracting Party.

The purpose of the review is to ascertain that the weapon system is capable of being used in compliance with all of the state’s international legal obligations in light of its normal anticipated use. For example, Australia’s review process examines whether a weapon under review is subject to any specific or general prohibitions under Australia’s international legal obligations and ‘whether the weapon is contrary to the public interest, the principles of humanity and the dictates of public conscience’ (The Australian Article 36 Review Process 2018).

In this context, where the task of ensuring compliance with IHL is treated as a constrained optimisation problem, the role of a weapon review may be seen as ensuring that all normative constraints have been satisfied. The technological and strategic/operational constraints identified above would normally be addressed in the weapon development and procurement processes but would ordinarily lie beyond the scope of a legal review.

Conclusion

This paper has attempted to provide a broad framework for addressing the challenges that will arise in integrating autonomous capabilities into a modern armed force in a manner which complies with legal requirements while also being sensitive to the realities of the technology in question and the operational environments in which it will be used. Utilisation of a framework such as this will enable a state which is considering the adoption of AWSs to understand the scope for adoption defined by the state’s legal, technological and strategic constraints and to rationally weigh the value of AWS use against its cost.

The framework is deliberately general in nature. In part, this is because, as noted earlier, application by a specific state to a specific autonomous capability is a detailed exercise which is dependent on the technologies and state-specific legal obligations at issue. It is also because all of the significant variables to be considered are either changing or at least are subject to change.

The rapid pace of advancement in the component technologies of autonomous systems is well known and much of it goes beyond the scope of a primarily legal research paper (for an overview, see Congressional Research Service 2020). Suffice to say that the range of feasible applications for autonomous systems and the degrees of autonomy which are realistically achievable in those applications are both expanding and will continue to do so for the foreseeable future.

The applicable law is, as always, less inclined towards rapid change than the technologies that it purports to regulate, but it is not static. Discussions about whether and how to regulate the development and use of AWSs, beyond the regulation provided by current IHL and other bodies of law, have been ongoing for at least nine years under the auspices of the United Nations in Geneva (United Nations Office for Disarmament Affairs n.d.). The progress of those discussions has been very slow, and there is little indication that any substantive regulatory measures will result. However, even in the absence of any new written law, instances of state practice are accruing and opinio juris is, presumably, being formed, such that customary rules relating to the use of autonomous capabilities may one day crystallise. According to Schmitt (2012-2013, pp. 179-180):

‘…interpretive endeavors seldom survive intact because international law, crafted as it is by states through treaty and practice, necessarily reflects the contemporary values of the international community. As these values evolve, so too will international law’s prevailing interpretations’.

Finally, the nature of conflict is undergoing significant change. IHL was originally intended to regulate armed conflict between state armed forces in which the goal was to secure military victory over the enemy, but according to McFarland (2021),

‘the international security and conflict environment has changed radically…Formally declared wars and large-scale interstate conflicts have largely given way to smaller, low-level conflicts, ‘grey-zone’ activity and non-international and internationalised conflicts; new modes of fighting including cyber-attacks and targeted killings have emerged; new technologies such as remotely piloted aircraft have expanded the battlefield in unprecedented ways; and all this is taking place in a world which is interconnected to a degree never before seen, with communications and surveillance technologies feeding vast volumes of information to any State or non-State actor with the ability to tap into it’.

That change is impacting autonomous system integration efforts in two ways. It alters the types of operations to be undertaken and hence the types of autonomous capabilities that need to be developed and for which legal compliance must be assured. It also has a more direct impact on the compliance mechanisms of IHL. As Lamp has pointed out, some forms of modern conflict ‘[differ] in crucial respects from the conception of war that underlies IHL’s paradigm of compliance, ie the set of assumptions and expectations that inform the way in which the provisions for the implementation and enforcement of IHL are designed’ (Lamp 2011, p. 226). The rise of non-state actors, intra-state conflict, conflict in and involving failed states and conflict in which the primary goal might not be to secure military victory over an enemy force (Lamp 2011, p. 235) all challenge the premise underlying IHL compliance mechanisms. They may also be driving further development of the law which will need to be considered by parties seeking to utilise autonomous systems in those new forms of conflict.

A conceptual framework such as that described in this paper is valuable in navigating these changes. A structured approach helps to ensure thorough consideration of the many factors that contribute to a high degree of compliance with IHL in the complex task of integrating autonomous capabilities into modern armed forces.
