AI in Maritime Operations: Legal under the Law of War?
Artificial Intelligence (AI) has become an increasingly complex phenomenon in international law because it is so difficult to regulate.1Margaret A. Boden, 'Artificial Intelligence: A Very Short Introduction,' School of Cognitive and Computing Sciences (1996) Its incorporation into maritime operations has made them more cost-effective, efficient, and environmentally sustainable. Though adoption has not been uniform across jurisdictions, AI technologies are becoming globally pervasive. The central contention, however, is that international sources of the law of war stress human control, thereby indirectly prohibiting reliance on AI in maritime operations.
This article will first discuss the more common types of AI technologies in national maritime operations. It will then assess the use of these technologies against the international law of war to gauge their legality, focusing on International Humanitarian Law, specifically the Geneva Conventions and their Additional Protocols, as well as the San Remo Manual. Lastly, it will analyse challenges to the existing legal framework: the UNCLOS flag state system.
Common AI Technologies in Maritime Operations and Their Legality
The most common form of AI-powered technology in maritime operations is the 'unmanned maritime system', which navigates and operates on the water's surface with limited or no human intervention. There is little consensus on a single name for these systems, with some countries referring to them as 'maritime drones' or 'smart ships,' or most commonly, 'autonomous vessels.'2Alan M. Weigel & Thomas H. Belknap Jr., 'Autonomous Vessels: Legal, Regulatory, and Insurance Issues' (2020) 3 The Journal of Robotics, Artificial Intelligence & Law (Fastcase) 163
Unmanned maritime systems are increasingly relied upon by nations for commercial and security purposes. In 2013, the US Navy had a fleet of seventeen autonomous ships.3Tuneer Mukherjee, 'Securing the Maritime Commons: The Role of Artificial Intelligence in Naval Operations,' (2018) ORF Occasional Paper In 2021, Congress approved a budget of $125 million for research on 'long-duration autonomous ship operations.' As a result, the US Navy initiated a 20-year plan with a privately-owned corporation to create a fleet of 120 autonomous vessels, along with autonomous undersea vehicles for shipment.4David Molina Coello, 'Is UNCLOS Ready for the Era of Seafaring Autonomous Vessels?' (2023) 10 J Territorial & Mar Stud 21 Similarly, the European Union (EU) created the Maritime Unmanned Navigation through Intelligence in Networks (MUNIN) project to research the use of unmanned surface ships for commercial purposes. Such vessels are projected to decrease the risk of collisions tenfold, in addition to being more profitable than the current system,5Ibid. mainly due to reduced labour and decision costs.
The operation of these autonomous vessels runs against the idea of human control repeatedly enunciated in the Geneva Conventions and their Additional Protocols. Though the principle is not stated word-for-word, its foundations and implications are embedded in various provisions of the Conventions. For example, Article 36 of Additional Protocol I obligates states to review new weapons, means, or methods of warfare to determine their compliance with International Humanitarian Law (IHL). The key principles of IHL include distinction, proportionality, and precautions in attack. These principles are not necessarily violated by the operation of autonomous vessels themselves but by the lethal autonomous weapons (LAWs) on board. Though conventional LAWs – such as air defence systems – are also contentious, they are activated by trained personnel only after all potential costs are factored in. Automated vessels, however, have effectively replaced human decision-making and activate onboard LAWs themselves: the vessel generates indicators that the LAWs 'detect in the environment against a target profile' and strike accordingly.6International Committee of the Red Cross (ICRC), 'What You Need to Know About Autonomous Weapons,' (2022) The following sections will analyse their compliance with different sources of IHL, including its principles and alternative sources such as the Martens Clause and the San Remo Manual.
The IHL Principles: Distinction, Proportionality and Precaution
During armed conflict, States parties are required to distinguish between combatants and military objectives on the one hand, and civilians and civilian objects on the other.7Nils Melzer, 'The Principle of Distinction Between Civilians and Combatants,' (2014) The Oxford Handbook of International Law in Armed Conflict https://doi.org/10.1093/law/9780199559695.003.0012 The principle of distinction requires weapons to target the former but not the latter. However, automated vessels profile targets and activate LAWs relying on objective indicators such as distance or visuals. Without human intervention to subjectively differentiate between military and civilian objects, automated vessels cannot accurately distinguish between, for instance, a warship and a civilian boat.
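To make this concrete, consider a minimal, purely illustrative sketch (in Python) of rule-based target profiling. All of the indicators, thresholds, and names below are hypothetical and are not drawn from any deployed system; the sketch simply shows how a profile built only on objective indicators can match a civilian vessel just as readily as a warship.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A sensor contact described only by objective indicators."""
    length_m: float               # estimated vessel length from radar return
    speed_kn: float               # speed over ground in knots
    emits_navigation_radar: bool  # whether the contact radiates a radar signal

def matches_military_profile(c: Contact) -> bool:
    """Hypothetical target profile built solely from sensor-observable data.

    Nothing here captures the subjective, contextual judgement that the
    principle of distinction presupposes (markings, conduct, intent).
    """
    return c.length_m > 50 and c.speed_kn > 20 and c.emits_navigation_radar

# A fast patrol warship and a large motor yacht can present
# indistinguishable indicator sets, so both 'match' the profile.
warship = Contact(length_m=90, speed_kn=28, emits_navigation_radar=True)
yacht = Contact(length_m=70, speed_kn=25, emits_navigation_radar=True)

print(matches_military_profile(warship))  # True
print(matches_military_profile(yacht))    # True: a civilian boat misclassified
```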
Similarly, automated vessels also violate the principle of proportionality, which requires a proper balance between the objective, means, and consequences of an action.8Georg Nolte, 'Thin or Thick? The Principle of Proportionality and International Humanitarian Law,' (2010) Law and Ethics of Human Rights, Vol. 4, Issue 2 https://doi.org/10.2202/1938-2545.1050 The foundational idea is to balance the military advantage sought against the collateral damage to civilian life and objects. However, reduced human oversight in the use of force creates a higher risk of disproportionate attacks, as automated vessels have a limited ability to assess specific circumstances and make nuanced decisions about the proportionality of an attack.
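The difficulty can be illustrated with a deliberately naive sketch of a numerical proportionality check; the scoring scale and threshold are hypothetical. Collapsing the balancing exercise into a single fixed ratio is precisely what discards the context-sensitive, case-by-case assessment the principle demands.

```python
def attack_is_proportionate(military_advantage: float,
                            expected_civilian_harm: float,
                            threshold: float = 1.0) -> bool:
    """Naive proportionality test: advantage must outweigh expected harm.

    A machine applying one fixed ratio cannot weigh anything the two
    numbers leave out, which is where human judgement operates.
    """
    if expected_civilian_harm == 0:
        return True
    return (military_advantage / expected_civilian_harm) >= threshold

# Two very different attacks can produce identical scores, yet the
# function cannot tell them apart.
print(attack_is_proportionate(5.0, 4.0))  # True, regardless of context
```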
Thirdly, the IHL principle of precaution requires nations to take due precautions, such as deploying large-scale surveillance measures, which may be built on decades of intelligence or on a split-second judgement.9Frederik Rosen, 'Extremely Stealthy and Incredibly Close: Drones, Control and Legal Responsibility,' (2013) Journal of Conflict & Security Law However, exercising precaution is often impractical, as warning protected persons can compromise tactics. Regardless, states may be held responsible for a war crime if they could have warned civilians to evacuate but failed to do so. Though automated vessels violate this principle, other AI-powered technologies uphold it. For instance, AI-driven maritime surveillance supports 'maritime security, illegal bunkering, tracking of marine oil transportation, and search and rescue.'10Ziaul Haque Munim, Mariia Dushenko, Veronica Jaramillo Jimenez, Mohammad Hassan Shakil and Marius Imset, 'Big Data and Artificial Intelligence in the Maritime Industry: A Bibliometric Review and Future Research Directions,' (2020) Maritime Policy and Management, Vol. 47, No. 5, 577-597 The incorporation of AI into surveillance is becoming increasingly common, with the Singapore Port and IBM collaborating on SAFER, a project 'utilizing machine learning to predict vessel arrival times, potential traffic hotspots, unusual behaviour, and illegal bunkering.'11Yeo, G., S. H. Lim, L. Wynter, and H. Hassan, 'MPA-IBM Project SAFER: Sense-Making Analytics for Maritime Event Recognition,' (2019) Interfaces 49 (4): 269-280
Similarly, optimizing cargo handling and operations through AI technologies not only increases surveillance but also improves efficiency. For instance, a port community system (PCS) is 'an inter-organizational system that electronically integrates heterogeneous compositions of public and private actors, technologies, systems, processes, and standards within a port community.'12Ziaul Haque Munim, Mariia Dushenko, Veronica Jaramillo Jimenez, Mohammad Hassan Shakil and Marius Imset, 'Big Data and Artificial Intelligence in the Maritime Industry: A Bibliometric Review and Future Research Directions,' (2020) Maritime Policy and Management, Vol. 47, No. 5, 577-597 Therefore, though automated vessels with LAWs violate the IHL principles of distinction and proportionality, other AI-powered technologies in maritime operations – such as maritime surveillance software – promote the principle of precaution.
Additional Sources of War Law: The Martens Clause and the San Remo Manual
In addition to the IHL principles, automated vessels also violate the Martens Clause, which is restated in the 1949 Geneva Conventions and in Article 1(2) of Additional Protocol I. It reaffirms the importance of the principles of humanity and the dictates of public conscience in situations not explicitly covered by the Conventions, including the incorporation of AI into maritime operations. Automated vessels and LAWs can violate the Martens Clause by delegating critical decision-making to machines, disregarding the value of human judgement in armed conflict. This offends the 'dictates of public conscience' by potentially leading to the disproportionate use of lethal force, a failure to distinguish between combatants and civilians, and a lack of necessary precautions.
Similarly, autonomous vessels contravene certain provisions of the San Remo Manual on International Law Applicable to Armed Conflicts at Sea. For instance, Principle 11 emphasizes distinction and military necessity, requiring that AI technologies used in maritime operations clearly differentiate between civilian and military targets. Precautions in attack are stressed under Principle 15 to minimize harm to civilians and civilian objects, highlighting the importance of programming and operating AI systems with the necessary precautions. Lastly, Principles 18-21 provide rules on targeting, including verification of a target's military nature, proportionality, and assessment of military advantage. States must adhere to these when employing AI systems in maritime operations, especially those lethal in nature. Therefore, operating autonomous vessels with LAWs contravenes the Martens Clause, the San Remo Manual, and the general IHL principles of distinction and proportionality.
Challenges to Existing Legal Frameworks
Under the United Nations Convention on the Law of the Sea, 1982 (UNCLOS), the flag system mainly regulates territorial waters and the high seas. Article 2 of UNCLOS confers jurisdiction on coastal States over acts committed in their waters. This territoriality principle extends to the high seas, as Article 92 of UNCLOS provides that 'each ship should fly the flag of a State with which it has a link.'13David Molina Coello, 'Is UNCLOS Ready for the Era of Seafaring Autonomous Vessels?' (2023) 10 J Territorial & Mar Stud 21 Consequently, each ship follows the laws of its flag State, even when navigating the high seas, a responsibility generally understood to rest with each ship's master and crew. Nevertheless, the same flag system has been extended to automated vessels due to the lack of regulation of AI integration in maritime operations. This section will therefore discuss the inadequacy of the UNCLOS flag system for partially and fully autonomous vessels.
Partially autonomous vessels lack onboard crews and rely on sensors for navigation; tech companies operate and control them remotely, often from different jurisdictions. Consequently, UNCLOS is unable to regulate them across this jurisdictional divide. While the flag state system suggests that such vessels should adhere to their flag state's laws, decision-making primarily rests with privately-owned remote-control centres that have no affiliation with the ship's flag state. As a result, the flag state system fails to bridge the gap between the remote-control centres actually at fault and the flag states that are wrongly sanctioned.
The accountability challenge intensifies with fully autonomous vessels, where machine-learning systems control the ships. These systems are created by private corporations and sold to clients worldwide. As a result, both privately and state-owned fully autonomous ships operate on the same machine-learning systems, raising liability concerns. For example, if a system failure leads to a multi-party collision in international waters, the responsible company should ideally be held liable. Under the flag state system of UNCLOS, however, liability would be assigned to each flag state of the involved ships, even though the collision resulted from the failure of a shared system. These issues are particularly challenging for LAWs, whose activation is controlled by machine-learning systems; this violates the principles of distinction and proportionality, which require subjective standards to gauge. Therefore, the flag state system of UNCLOS misplaces liability for partially and fully automated vessels.
Potential redress may be found in UNCLOS Article 94, which holds flag states accountable for overseeing the administrative and technical affairs of their ships. If utilized, this provision could make flag states responsible for adequately evaluating decision-makers. Although it does not directly address decision-maker liability, it encourages states to outsource maritime affairs diligently, especially on issues pertaining to LAWs. Presently, UNCLOS sanctions flag states alone, making it an ineffective regulatory system for privately operated partially and fully autonomous vessels.
Recommendations
It is complicated to bring LAWs into compliance with IHL, as their essence fundamentally goes against the idea of human control. However, certain mechanisms can make AI integration into maritime operations more legally compliant. Though there is a preexisting obligation to legally review weaponry under Article 36 of Additional Protocol I to the Geneva Conventions,14Neil Davidson, 'A Legal Perspective: Autonomous Weapon Systems Under International Humanitarian Law,' (2017) UNODA Occasional Papers, No. 30 more stringent criteria must be drawn up for LAWs to comply with IHL. For instance, a requirement for LAWs to rank higher on indicators of predictability and reliability would be fruitful. High scores on predictability and reliability would indicate that the LAWs will perform well consistently, mitigating the possibility of poor distinction between military and civilian objects and of disproportionate attacks.
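As a rough sketch of what such a criterion could look like in practice, the following assumes a hypothetical battery of simulated engagements and a hypothetical reliability threshold; neither reflects any existing Article 36 review standard.

```python
from statistics import mean

def passes_review(scenario_outcomes: list[bool],
                  required_reliability: float = 0.999) -> bool:
    """Gate deployment on consistent performance across test scenarios.

    scenario_outcomes holds one boolean per simulated engagement: True when
    the system correctly distinguished military from civilian objects.
    """
    reliability = mean(1.0 if ok else 0.0 for ok in scenario_outcomes)
    return reliability >= required_reliability

# 998 correct engagements out of 1,000 still fail a 99.9% bar: the point
# of a stringent criterion is that even rare failures are disqualifying.
outcomes = [True] * 998 + [False] * 2
print(passes_review(outcomes))  # False (reliability = 0.998)
```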
Furthermore, installing LAWs solely on partially autonomous vessels and banning their operation on fully autonomous ships would bring the weaponry into compliance with IHL. Though partially autonomous ships operate without an onboard crew, they are manually controlled by remote-control centres. There is therefore a high degree of human intervention at the activation stage, where the viability of the weapon's strikes is assessed against potential threats. This step is usually taken by trained personnel with appropriate situational awareness of the operational environment.15Ibid. If LAWs were installed on fully autonomous ships, a machine-learning system would undertake the activation stage. The activation of LAWs on partially autonomous ships would therefore fulfil the IHL principles, as it includes subjective assessments of threats, potential risks to non-combatants, and alternative precautionary measures.
Conclusion
The incorporation of AI into maritime operations is becoming increasingly pervasive because it is efficient and cost-effective, with the US at the forefront of this transformation. However, AI-operated ships with LAWs lead to violations of the IHL principles of distinction and proportionality. Nevertheless, the transformation has upheld the principle of precaution through automated surveillance systems that support maritime security. Furthermore, autonomous vessels with LAWs violate the Martens Clause, which stresses the necessity of human judgement in decision-making, and contravene various provisions of the San Remo Manual, especially Principle 15, which indirectly highlights the importance of programming AI systems with the necessary precautions. Despite these violations, the current UNCLOS flag state system inadequately assigns responsibility to states, even when private entities have exclusive control over a ship's operation. Therefore, modifications to UNCLOS provisions are necessary to account for the growing delocalization accompanying the rise of automated vessels and to hold the operators of remote-control centres and the developers of machine-learning systems liable.
Zaria Adnan
Zaria Adnan is currently enrolled in the BA-LLB programme at LUMS. She has previously interned at RSIL and Khosa & Rizvi Advocates as a research and legal intern, respectively. Her research interests centre around company and environmental law, with a focus on sociolegal intersections.