CENTRE FOR JOINT WARFARE STUDIES

AI in Warfare: The REAIM Summit and India’s Approach

Introduction

Artificial Intelligence (AI) is poised to revolutionise the battlefield, just as gunpowder, tanks, aircraft and nuclear weapons transformed the nature of warfare in previous eras. Once considered a hypothetical or futuristic capability, AI has made its way into real-world conflicts, with the wars in Ukraine and Gaza serving as prime examples; both have witnessed the ‘baptism’ of AI technology in real combat scenarios.[1]

A year of devastating war in Gaza has killed around 42,000 people,[2] and the Russia-Ukraine conflict, now spanning two and a half years, has left around one million Ukrainians and Russians killed or wounded.[3] Israel has used a constellation of AI systems, including ‘Gospel’ and ‘Lavender’, against the people of Gaza with little or no human intervention. Both Russia and Ukraine have employed AI tools in their conflict to seek strategic advantage over the other. Autonomous drones were also active in Azerbaijan’s attack on Armenia over the long-contested Nagorno-Karabakh region.[4]

The world is witnessing increased competition among nations to harness the power of AI for military advantage. US military spending on AI nearly tripled from 2022 to 2023.[5] The US released an AI Adoption Strategy in 2023 and has launched several AI initiatives and investments. China, another rising contender in the development of AI and autonomous weapons systems, has outlined its ‘Next Generation Artificial Intelligence Development Plan’ and declared its aspiration to become the world leader in AI by 2030.[6] Similarly, Vladimir Putin has declared that “Artificial intelligence is the future not only of Russia but of all of mankind… Whoever becomes the leader in this sphere will become the ruler of the world”.[7] Smaller countries are also committed to developing AI, realising the immense advantage the technology confers on those who possess it.

While the military applications of AI are taking various shapes, discussions on ethical AI, accountability and transparency in its military use are already under way globally. The second summit on Responsible Artificial Intelligence in the Military Domain (REAIM), held in Seoul, South Korea, in September 2024, is a recent step in this regard. The summit acts as a platform for the global community to consider how to balance harnessing the potential of AI with ensuring its responsible use in conflict settings. India has not endorsed the summit outcomes and is adopting a ‘wait and watch’ approach. This article examines the military use of AI, the concerns it raises, the key discussions at the REAIM summit and the path forward for India’s AI strategy.

Applications of AI in Warfare

The scope of AI applications today is so wide that, despite their diversity, practitioners classify them under seven distinct patterns. These patterns, among them goal-driven systems, autonomous systems, conversational/human interaction, predictive analytics and decision support, and hyper-personalisation, have revolutionised military operations in recent years.[8] They offer advanced capabilities for functions such as object detection, decision-making assistance and conversational interaction. AI-powered military systems have several advantages over conventional systems: they can identify potential targets quickly by processing huge amounts of data, navigate and operate independently in hostile environments, and coordinate multiple units towards mission success.[9]

Figure 1: The applications of AI in the military[10]

Autonomous Weapons and Targeting

Autonomous systems, comprising drones and unmanned vehicles, now play a striking role in combat, minimising the need for humans on the battlefield. With minimal direct control from their operators, these systems carry out surveillance, reconnaissance and combat operations.

In the 11-day war in Gaza in 2021, Israel claims to have fought its first AI war, involving machine learning and advanced computing. The Israel Defence Forces (IDF) have used the Lavender system to identify Hamas and Palestinian Islamic Jihad targets.[11] A report published by the Israeli-Palestinian platform +972 Magazine and Local Call carries the testimonies of six Israeli intelligence officers involved in utilising Lavender, which helps to convey the extent of Israel’s military use of AI. The officers confirmed that Lavender played a significant role in Israel’s attack on Gaza and that, in the early stages of the war, it had identified approximately 37,000 Palestinians as suspected militants.[12] They also revealed that human intervention in the machine’s decisions was negligible: reviewers spent only about 20 seconds before approving a strike, primarily to confirm that the target was male, even though they knew the system had an error rate of roughly 10 per cent and that the flagged individuals might have only a tenuous link, or none at all, to militants.[13] ‘The Gospel’ is another AI system in use, tasked with identifying infrastructure and buildings as targets. A further system, ironically named ‘Where’s Daddy?’, has been used to track targeted individuals to their homes and execute bombings when they enter their family residences, an approach that produces mass casualties.[14] In addition to target identification, the IDF is using AI-enabled UAVs that operate underground to map the tunnels built by Hamas.[15]
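
To convey the scale implied by these testimonies, the back-of-the-envelope arithmetic below combines the figures cited above (roughly 37,000 flagged individuals, a reported error rate of about 10 per cent, and around 20 seconds of human review per approval). It is purely illustrative and is not a reconstruction of the system itself.

```python
# Illustrative arithmetic using only the figures reported in the testimonies above.
flagged_individuals = 37_000
reported_error_rate = 0.10          # share of flags reported to be erroneous
review_seconds_per_target = 20      # reported human review time per approval

expected_misidentifications = flagged_individuals * reported_error_rate
total_review_hours = flagged_individuals * review_seconds_per_target / 3600

print(f"Expected misidentified targets: ~{expected_misidentifications:,.0f}")  # ~3,700
print(f"Cumulative human review time: ~{total_review_hours:,.0f} hours")       # ~206 hours
```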

Intelligence Gathering and Surveillance

Intelligence gathering and surveillance have been the two most prominent applications of AI in the Russia-Ukraine war. Ukraine used AI to integrate target and object recognition with satellite imagery, geolocating and analysing open-source data to pinpoint the locations and movements of Russian soldiers and weapon systems.[16] Both armies used AI extensively to process huge volumes of battlefield data. AI-powered cameras and sensors were integrated into the UAVs used widely for reconnaissance missions, providing real-time information. These AI systems could recognise weapons, troop movements and other objects of interest, enabling decisions based on the collected data and helping drone operators neutralise targets effectively.[17] Loitering munitions, or “kamikaze drones,” were also increasingly used by both Russia and Ukraine to augment their artillery capabilities, enabling precision strikes.
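
The object-recognition element of such a pipeline can be sketched in a few lines. The example below is a minimal sketch rather than a depiction of any fielded system: it runs a generic pretrained detector from torchvision (version 0.13 or later is assumed) over a single aerial frame and pairs confident detections with a placeholder georeferencing step. The model choice, confidence threshold and metadata fields are illustrative assumptions.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Generic pretrained detector as a stand-in for a purpose-built military model.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_objects(frame: torch.Tensor, score_threshold: float = 0.8):
    """frame: float tensor [3, H, W] with values in [0, 1], e.g. one aerial image."""
    with torch.no_grad():
        output = model([frame])[0]              # boxes, labels, scores for this frame
    keep = output["scores"] >= score_threshold  # keep only confident detections
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

def pixel_to_latlon(box, georef):
    """Placeholder georeferencing: map a box centre to lat/lon from frame metadata."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    lat = georef["lat0"] + cy.item() * georef["deg_per_px"]
    lon = georef["lon0"] + cx.item() * georef["deg_per_px"]
    return lat, lon

# Example usage with a random frame and made-up georeference metadata.
frame = torch.rand(3, 512, 512)
boxes, labels, scores = detect_objects(frame)
georef = {"lat0": 48.0, "lon0": 37.0, "deg_per_px": 1e-5}
detections = [pixel_to_latlon(b, georef) for b in boxes]
```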

AI in Military Logistics

Military logistics is a key enabler of military operations, as the Russia-Ukraine war has demonstrated. In logistics, AI can help schedule maintenance, foresee shortages in military warehouses and anticipate challenges in resupply operations. Additionally, logistics vehicles with AI navigation can be used to deliver supplies and evacuate casualties.[18] Effective logistics support is indispensable in wartime for force movements and for the protection of personnel and weapon systems, which are key to achieving tactical, operational and strategic objectives. The Russia-Ukraine war exposed significant weaknesses in Russia’s logistics capabilities at the operational level, negatively impacting its strategic goals. It underlines that wars are not won by good strategy alone but only when it is supported by good logistics.[19]
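
As a simple illustration of the shortage-anticipation idea mentioned above, the sketch below smooths a noisy daily-consumption series and estimates how many days of stock remain at a depot. The figures, smoothing factor and function names are illustrative assumptions, not a description of any military logistics system.

```python
def smoothed_consumption(daily_usage, alpha=0.3):
    """Exponentially smoothed estimate of daily consumption (alpha is illustrative)."""
    estimate = daily_usage[0]
    for usage in daily_usage[1:]:
        estimate = alpha * usage + (1 - alpha) * estimate
    return estimate

def days_until_shortage(stock_on_hand, daily_usage, alpha=0.3):
    """Rough estimate of how many days of supply remain at the current burn rate."""
    rate = smoothed_consumption(daily_usage, alpha)
    return float("inf") if rate <= 0 else stock_on_hand / rate

# Example: made-up daily artillery-round consumption at a forward depot.
usage_history = [120, 135, 160, 150, 170, 180, 175]
print(f"Estimated days of supply left: {days_until_shortage(2400, usage_history):.1f}")
```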

Concerns on the Use of AI

While AI holds immense promise on paper and in algorithms, its application in warfare remains fraught, as even a minor error in a war scenario could have catastrophic consequences. The major concerns regarding the military use of AI are as follows.

  • Risk of Error and Unintended Consequences: AI systems often struggle to differentiate between threats and innocent individuals, creating room for potentially devastating errors. Such misidentification of targets can result in unintended casualties and potential violations of international law.
  • Vulnerability to Cyber Threats: AI systems can be susceptible to hacking or manipulation by adversaries. If compromised, they could be turned against their users, leading to unpredictable and dangerous consequences on the battlefield.
  • Technical Limitations[20]: AI systems require extensive training on huge amounts of relevant, reliable, high-quality data, and they can inherit biases from that data. Moreover, no two conflicts are alike; the geography, weapons and tactics employed vary in every war. There is therefore neither a theoretically valid model of a combatant on which to train an AI system nor a computationally sound way for AI to make such decisions.
  • Ethical Concerns: A significant challenge is determining responsibility when AI makes a mistake. Who is accountable: the AI system itself, its developers, or the military personnel who deployed it? This ethical dilemma remains unresolved.
  • Question of Human Agency[21]: The central issue is ensuring that humans retain the ability to make critical targeting decisions rather than trading that control away for the time saved by automation. In warfare, particularly in nuclear scenarios, even the slightest error can lead to unimaginable destruction. Human oversight is therefore crucial: AI recommendations must be carefully reviewed by humans who have the final say, as illustrated in the sketch after this list.
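
A minimal sketch of what such a human-approval gate could look like in software is given below. It is purely notional: the data fields, console prompt and in-memory audit log are assumptions made for illustration, and the snippet does not describe any fielded or proposed system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    target_id: str
    confidence: float   # model confidence, not ground truth
    rationale: str      # evidence summary shown to the reviewing officer

def human_approval_gate(rec: Recommendation, audit_log: list) -> bool:
    """Require an affirmative, logged human decision; default to no action."""
    print(f"Target {rec.target_id} (confidence {rec.confidence:.0%}): {rec.rationale}")
    decision = input("Authorise engagement? Type 'YES' to approve: ").strip()
    approved = decision == "YES"
    audit_log.append({
        "target": rec.target_id,
        "approved": approved,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    return approved
```
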
The REAIM Summit and India’s Stand

As the world’s militaries adopt AI at an expanding pace, political efforts to regulate the use of this revolutionary technology in warfare are also intensifying. Ukraine and Gaza have become ‘AI labs’,[22] experimental grounds that underline the urgency of establishing international norms and regulatory frameworks to govern the use of AI in warfare.

Under the slogan ‘Responsible AI for Safer Tomorrow’, around 80 countries participated in the REAIM summit and 61 endorsed its non-binding outcome, including major nations such as the US, the UK, Japan and Australia; China attended but did not endorse it. Hosted by the Ministry of Foreign Affairs (MOFA) and the Ministry of National Defense (MND) of the Republic of Korea, the summit was co-hosted by Kenya, the Netherlands, Singapore and the UK, with representatives from various governments, international organisations and civil society.

The first REAIM summit took place in The Hague in 2023. Following that summit, the US launched its ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy’.[23] The UN has also called for a legally binding treaty by 2026 to ban the use of Lethal Autonomous Weapon Systems (LAWS). The outcome of the 2024 summit was endorsed as the ‘Blueprint for Action’, a document outlining the guidelines to be followed for the use of AI in the military. The document stresses the necessity of human involvement in the development, deployment and use of AI in the military domain and underlines the indispensability of international cooperation.

At the summit, South Korean Defence Minister Kim Yong-Hyun said, “As AI is applied to the military domain, the military’s operational capabilities are dramatically improved. However, it is like a double-edged sword, as it can cause damage from abuse”.[24]

International law, human rights conventions and the UN Charter must be accorded the highest consideration, ensuring global peace, security and stability, and the use of AI must not escalate conflicts. Although the summit outcome is not binding, it expects endorsing nations to formulate security measures and establish robust controls to prevent the misuse of AI.[25]

India has taken a cautious ‘wait and watch’ approach, having endorsed neither the ‘Blueprint for Action’ nor, earlier, the ‘Call to Action’ at the Hague summit. India is still at an early stage in the military use of AI. The Government of India established an AI task force in 2018 and created the Defence AI Council and the Defence AI Project Agency in 2019. The government has also published a list of 75 priority areas for AI in defence, including cyber security, autonomous systems, drones, and data processing and analysis. India is further exploring options for integrating AI into its space programme and into border security operations.[26]

There is an urgent need for a global framework to regulate military AI, and the existing regulatory efforts are still in their early stages. India should not delay its involvement in the process. By engaging now, India can secure the advantage of being one of the rule-setters, shaping the rules in line with its national interests rather than seeking changes once the regulations are already in place, as it experienced with nuclear agreements.[27]

Recommendations for India

Institutional

  • The Triple-Helix Approach: The triple-helix approach, which entails collaboration among academia, industry and government, can help build a strong AI institutional framework. While academia carries out fundamental research and AI innovation, industry focuses on product development and commercialisation, and the government provides regulatory oversight. This collaboration creates a vibrant innovation ecosystem and ensures that advancements in AI adhere to legal and ethical norms.
  • Building an Institutional Structure: An institutional mechanism should be created to certify AI tools, preventing the deployment of malfunctioning or unsafe systems and ensuring that AI is used responsibly in critical sectors such as defence. Dedicated testing and evaluation centres can verify the performance of AI systems. The institution should also build a comprehensive data repository, since AI’s effectiveness depends on the quality and quantity of the data used for training.

Policy Frameworks

  • Developing a National AI Strategy for Defence: India must design a comprehensive AI strategy specifically aimed at enhancing defence capabilities, embracing AI for cyber security, intelligence sharing and allied functions. Several instances of intelligence failure, from Kargil to Pulwama, have highlighted the need to upgrade how information is gathered, processed and shared.
  • Ethics and Governance Frameworks: As the potential of AI in warfare becomes increasingly evident and global efforts to regulate it gain momentum, India must align with these initiatives. India needs to develop robust ethical and governance guidelines that prioritise the responsible use of AI and ensure human oversight in decision-making. These guidelines should also safeguard against the misuse of AI in combat and avoid unintended consequences, while remaining consistent with international humanitarian law.

International Cooperation

India should be an active participant in global AI research collaborations and multilateral forums, working with other nations to shape global norms on AI. Through both bilateral cooperation and multilateral engagement, India should focus on fostering joint research and development and adopting best practices for the responsible use of AI.

Conclusion

The debate over whether AI is a blessing or a threat continues to divide scholars and global political observers. It has become clear that AI is no longer a theoretical concept in military strategy, but an active tool used on the battlefield. Regulating AI in warfare brings significant responsibilities for nations that harness its capabilities. Developers of AI systems must ensure the safety of their models and prevent their misuse for harmful purposes. Just as doctors follow strict ethical codes, there is an urgent need to establish similar ethical guidelines for AI developers and those who deploy these systems. By taking a proactive stance and becoming an active participant on the global stage, India can contribute to shaping AI governance in defence. This involvement will help protect India’s national interests while also positioning the country as a key leader in the responsible use of AI in military operations.

DISCLAIMER

The paper is the author’s individual scholastic articulation and does not necessarily reflect the views of CENJOWS. The author certifies that the article is original in content and unpublished, that it has not been submitted for publication or web upload elsewhere, and that the facts and figures quoted are duly referenced, as needed, and are believed to be correct.

Endnotes
  1. Fraser, Callum. “AI’s Baptism by Fire in Ukraine and Gaza Offer Wider Lessons.” IISS, April 22, 2024. https://www.iiss.org/online-analysis/military-balance/2024/04/analysis-ais-baptism-by-fire-in-ukraine-and-gaza-offer-wider-lessons/.
  2. Dorn, Sara. “Why the Israel-Hamas War Death Toll Is Uncertain—1 Year after Start of War.” Forbes, October 6, 2024. https://www.forbes.com/sites/saradorn/2024/10/06/why-the-israel-hamas-war-death-toll-is-uncertain-1-year-after-start-of-war/.
  3. Pancevski, Bojan. “One Million Are Now Dead or Injured in the Russia-Ukraine War.” The Wall Street Journal, September 17, 2024. https://www.wsj.com/world/one-million-are-now-dead-or-injured-in-the-russia-ukraine-war-b09d04e5.
  4. Wolfgang, Ben. “Drones, Use of AI Offer Taste of 21st Century Conflict in Armenia-Azerbaijan Clash.” The Washington Times, November 15, 2020. https://www.washingtontimes.com/news/2020/nov/15/azerbaijan-uses-drones-ai-beat-back-armenia/.
  5. Henshall, Will. “U.S. Military Spending on AI Surges.” TIME, March 27, 2024. https://time.com/6961317/ai-artificial-intelligence-us-military-spending/.
  6. The Central People’s Government of the People’s Republic of China. “Notice of the State Council on the Issuance of the New Generation of Artificial Intelligence Development Plan.” www.gov.cn, July 8, 2017. https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.
  7. Gigova, Radina. “Who Putin Thinks Will Rule the World.” CNN, September 2, 2017. https://edition.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html.
  8. Rashid, Adib Bin, Ashfakul Karim Kausik, Al Hassan, and Mehedy Hassan Bappy. “Artificial Intelligence in the Military: An Overview of the Capabilities, Applications, and Challenges.” International Journal of Intelligent Systems 2023 (November 6, 2023): 1–31. https://doi.org/10.1155/2023/8676366.
  9. Ibid.
  10. Ibid.
  11. McKernan, Bethan, and Harry Davies. “‘The Machine Did It Coldly’: Israel Used AI to Identify 37,000 Hamas Targets.” The Guardian, April 3, 2024, sec. World news. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes.
  12. Ibid.
  13. Abraham, Yuval. “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, April 3, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/.
  14. Khelil, Khaldoun. “AI and Israel’s Dystopian Promise of War without Responsibility.” CIP, April 9, 2024. https://internationalpolicy.org/publications/ai-and-israels-dystopian-promise-of-war-without-responsibility/.
  15. Gaestel, Allyn. “From Gaza to Ukraine, AI Is Transforming War.” Inkstick, March 6, 2024. https://inkstickmedia.com/from-gaza-to-ukraine-ai-is-transforming-war/.
  16. Bendett, Samuel. “Roles and Implications of AI in the Russian-Ukrainian Conflict.” Centre for a New American Security, July 20, 2023. https://www.cnas.org/publications/commentary/roles-and-implications-of-ai-in-the-russian-ukrainian-conflict.
  17. Gaestel, Allyn. “From Gaza to Ukraine, AI Is Transforming War.” Inkstick, March 6, 2024. https://inkstickmedia.com/from-gaza-to-ukraine-ai-is-transforming-war/.
  18. Goncharuk, Vitaliy. “Russia’s War in Ukraine: Artificial Intelligence in Defence of Ukraine.” ICDS, September 27, 2024. https://icds.ee/en/russias-war-in-ukraine-artificial-intelligence-in-defence-of-ukraine/#_edn9.
  19. Ti, Ronald, and Christopher Kinsey. “Lessons from the Russo-Ukrainian Conflict: The Primacy of Logistics over Strategy.” Defence Studies 23, no. 3 (July 3, 2023): 381–98. https://doi.org/10.1080/14702436.2023.2238613.
  20. Flores, Renato G. “Ethics of Emerging Technologies on the Battlefield.” Observer Research Foundation Online, July 12, 2023. https://www.orfonline.org/expert-speak/ethics-of-emerging-technologies-on-the-battlefield.
  21. Schubert, Hartwig von. “Addressing Ethical Questions of Modern AI Warfare.” www.ips-journal.eu, 2023. https://www.ips-journal.eu/topics/foreign-and-security-policy/addressing-ethical-questions-of-modern-ai-warfare-6587/.
  22. Bergengruen, Vera. “How Tech Giants Turned Ukraine into an AI War Lab.” TIME, February 8, 2024. https://time.com/6691662/ai-ukraine-war-palantir/.
  23. US Department of State. “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” United States Department of State, February 16, 2023. https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/.
  24. TOI World Desk. “China Refuses to Sign Agreement to Ban AI from Controlling Nuclear Weapons.” The Times of India, September 10, 2024. https://timesofindia.indiatimes.com/world/china/china-refuses-to-sign-agreement-to-ban-ai-from-controlling-nuclear-weapons/articleshow/113235217.cms.
  25. Chaturvedi, Pranjal. “REAIM Summit 2024: Forging Global Consensus on Military AI.” Firstpost, September 15, 2024. https://www.firstpost.com/opinion/reaim-summit-2024-forging-global-consensus-on-military-ai-13815530.html.
  26. Press Information Bureau. “Over the Last Three Years, ISRO Has Been Steadily Leveraging Artificial Intelligence and Machine Learning in Space Domain, Says Union Minister Dr Jitendra Singh.” Pib.gov.in, 2023. https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1986390.
  27. Mohan, C. Raja. “Expert Explains: What Is the Responsible Use of Artificial Intelligence in War; Where India, US and China Stand.” The Indian Express, September 8, 2024. https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/.