Introduction
The introduction of Artificial Intelligence (“AI”) has revolutionised modern warfare. As AI becomes more accessible, a country’s power in the domains of war and politics will no longer be determined solely by the military it equips and the economic strength it possesses. In the legal vacuum that currently surrounds the regulation and control of AI in warfare, developed nations enjoy unfettered power in the international domain, often at the expense of economically weaker states, especially those engaged in war.
This piece critically examines the use of Lethal Autonomous Weapon Systems (“LAWS”) in warfare. It contends that a binding legal instrument prohibiting such weapons is needed, highlighting the inadequacy of the current International Humanitarian Law (“IHL”) framework. Part I outlines the dangers that autonomous weapons pose in warfare, showing how power asymmetries mark not only the development and use of such weapons but also the resistance to a legally binding treaty prohibiting LAWS. It then contends that the current IHL framework is insufficient to deal with the development and use of such weapons, and that the lack of codification is a major problem that leaves powerful nations with unfettered power. Finally, Part II identifies elements that should be considered in developing a legal framework to ensure strict regulation and better enforceability of IHL in warfare. The piece does not, however, aim to provide a comprehensive legal framework for nations to follow, nor does it claim that the elements it identifies are exhaustive.
AI in Warfare
While LAWS have not yet been notably employed in warfare, they are not a distant reality: major military powers like the US, China, and Russia are actively researching, testing, and developing weapons with autonomous capabilities. As instruments of war, they are capable of mass destruction without any human control. While such weapons may seem useful to a country engaged in war, it is pertinent to note that they are being developed and tested by economically powerful, developed nations. The resulting technological asymmetries would severely disadvantage economically weaker, smaller nations engaged in war. This problem is exacerbated by the lack of a binding legal instrument imposing sanctions on the development and use of autonomous weapons. These weapons would, additionally, make it easier for countries to become involved in conflicts: for the nations developing them, they would eliminate the human cost of soldiers dying in war, providing significant political leverage to those nations’ leaders and, in turn, encouraging the further development and use of LAWS.
Developed nations like Russia assert that autonomous weapons are not a “reality in the near future”. However, this characterisation is inconsistent with existing advancements in Ukraine and Israel. Ukraine, for instance, has committed to ground, sea, and aerial drones through the launch of its Unmanned Systems Forces. While the use of such drones initially provided Ukraine with a strategic advantage, the scale of Russian investment in the war has transformed their use from a tactical choice into an operational necessity. Similarly, Israel has been using AI-driven systems like Lavender, which has reportedly operated with a 10% error rate. Such systems have led to several Palestinians being wrongly labelled as “militants” and marked for potential airstrikes, a risk common to any instrument of war that is entirely algorithm-based, with no human judgment in place.
While autonomous weapons development in Ukraine and Israel directly challenges the claims made by developed nations such as Russia, what is perhaps more revealing is that Russia itself has been actively “researching, developing, and investing in autonomous weapons systems and has made military investments in artificial intelligence and robotics a top national defense priority.” This contradiction highlights a broader trend: powerful nations like Russia, the US, China, and the UK consistently oppose proposals to negotiate a binding treaty, arguing that the existing legal framework contains restrictions sufficient to fully cover autonomous weapons systems. However, as the next section explores, these frameworks are grossly insufficient, allowing such nations to project power without any major repercussions and severely disadvantaging smaller nations and innocent civilians.
Current Legal Status
Currently, there exists no universally agreed definition of LAWS, creating a significant legal vacuum in IHL and allowing for the unfettered exercise of power by powerful nations. Even without accounting for autonomous weapons, IHL already lacks accountability for unintended civilian harm. For instance, under IHL, an attack that foreseeably results in incidental civilian harm may still be deemed lawful, provided it satisfies the principle of proportionality, regardless of the extent of harm inflicted upon civilians in a war-torn area. With the introduction and development of AI weapons, this accountability gap would widen, because such weapons would magnify the impact of these actions and increase the precision with which warfare is carried out, allowing States to bypass legal repercussions despite causing damage to innocent civilian lives.

The IHL framework prohibits attacks that cause incidental harm to civilians or civilian objects when such harm is excessive in relation to the anticipated military advantage, as established in Article 51(5)(b) of Additional Protocol I. While the text of the provision does not explicitly reference the principles of foreseeability and control, these concepts are inherently embedded within the broader framework of IHL. If the user of an autonomous instrument of war cannot reasonably foresee the actions that would trigger a use of force, and has no control over the effects of the weapon, then the action would contravene this prohibition. This interpretation does not stem directly from the text of Article 51(5)(b) but is instead grounded in the fundamental principles of IHL, which require military actions to adhere to the principles of proportionality, distinction, and precaution. Each of these principles implicitly necessitates both foreseeability of consequences and meaningful human control over these weapon systems.
The current legal framework faces significant challenges in effectively regulating the development and deployment of LAWS and other autonomous instruments of warfare. The 1977 Additional Protocols to the Geneva Conventions, for instance, while not drafted with autonomous weapons in mind, contain certain provisions that can be applied to such instruments of war. Article 48, for example, mandates a clear distinction between military combatants and civilian populations to ensure that no innocent and vulnerable person is harmed in armed conflict. On this view, the primary challenge is not that the framework is inherently insufficient, but rather the complexity of its application to LAWS, which requires careful interpretation and legal development. AI-driven military systems at present may struggle with the contextual judgment needed to reliably distinguish between combatants and civilians in complex situations, a task that often requires nuanced human judgment. An argument can be made, however, that this represents a technological challenge rather than an inherent failure of the IHL framework to incorporate autonomous weapons. Furthermore, at the heart of the provision lies the objective of preserving human dignity and minimising suffering. While autonomous systems function on the basis of algorithms that can incorporate certain ethical constraints, a critical question remains: can these systems make proportionality and foreseeability assessments, and exercise precaution, in a manner that ensures compliance with the broader IHL framework?
Several other provisions of the Additional Protocols may be insufficient to accommodate AI weapons. Article 86, for instance, deals with holding commanders accountable for war crimes, but this chain of accountability is blurred when it comes to autonomous weapons. Who is to be held responsible if an autonomous instrument of war commits a war crime? The answer is unclear, allowing nations to bypass legal repercussions. Furthermore, Article 51(5)(b) requires military action to be proportionate to military goals, a contextual assessment that AI weapons would be unable to make reliably. This could result in extremely high civilian casualties and widespread destruction without just cause.
Given the insufficiency of the current IHL framework, efforts have been made towards the regulation of such autonomous systems of warfare. The Convention on Certain Conventional Weapons (“CCW”), for instance, has attempted to adopt a two-tier approach. The first tier focuses on prohibition, deeming certain applications of autonomous weapon systems unacceptable, especially where they target humans or have unpredictable effects due to a lack of human control. The second is regulation, under which the CCW is considering spatial and temporal limits to maintain “meaningful human control” in the design and operation of autonomous weapons. However, despite these discussions, no binding treaty exists yet, owing to active resistance from certain nations.
Non-state actors, too, have a stake in the dangers that LAWS pose and are taking active steps to work on legal solutions. Human Rights Watch and other NGOs, for instance, launched the “Campaign to Stop Killer Robots”, which recognises that “killer robots” pose a grave danger to humanity and that multilateral action is therefore urgently required.
This absence of a binding regulatory framework allows powerful nations to exploit technological asymmetries, furthering geopolitical instability. While existing IHL principles provide some guidance, they remain insufficient to address the unique threats of AI-driven warfare. Part II will propose a normative framework for regulating LAWS, focusing on accountability, compliance mechanisms, and safeguards to ensure meaningful human control. In doing so, it aims to offer a structured legal approach to mitigate the dangers posed by autonomous weapons in modern warfare.
Click here to read Part II.
Anushka Mahapatra is an undergraduate law student at NLSIU, Bangalore.
Picture Credit: 2018 Russell Christian/Human Rights Watch