
CENTRE FOR JOINT WARFARE STUDIES

An Autonomous Think Tank Promoting Integration and Jointness as a Synergistic Enabler of National Power, Providing Policy Alternatives Through Research and Debate

Lethal Autonomous Weapon Systems and International Concerns

For as long as humans have waged war, they have sought to develop and improve weapons technology while exposing its operators to the least possible risk. Over the last decade, a series of technological advances has revolutionised weapons technology, which has progressed from gunpowder and landmines to the incorporation of Artificial Intelligence (AI) in warfare. The introduction of AI into modern warfare and counterterrorism operations has challenged the traditional conception of security. One pertinent change now being researched and developed by states is the removal of humans from modern warfare and the increase in the autonomous characteristics of weapon systems.

In 2017, some 116 experts addressed an open letter to the United Nations (UN) warning that the adoption of Lethal Autonomous Weapon Systems (LAWS) would “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”[1] That warning has since become a reality.

This advance in technology, especially the development of LAWS, also known as “killer robots”, has altered modern warfare in its totality. Unlike conventional weapon systems, these weapons can navigate on their own and select targets based on pre-set conditions and algorithms. States and other actors have taken note of this progress, and the world stands at a precarious juncture before advanced versions of these technologies become ubiquitous.[2]

The paper is divided into six sections. The first provides an overview of the concept of LAWS. The second discusses the issues and risk factors that surround the use of killer robots. The third briefly examines the development of LAWS and the stances taken by different states. The fourth addresses the ethical issues raised by the adoption of LAWS in modern warfare. The fifth deals with the adaptability and feasibility of international law in governing the use of LAWS. The final segment offers suggestions and concluding comments.

I. Understanding LAWS

Weapons used in warfare and other operations have traditionally had humans in or on the loop to authorise or initiate launch. Many of the drones and robotic weapons deployed around the world are operated from a remote-control base from which operational commands are issued. The introduction of a new class of autonomous weapons changes this mechanism, raising the question of who decides to pull the trigger.

A distinction should be drawn between the terms “autonomous” and “automated”. “Automated” is used when there is a “man-on-the-loop” operating the technology. “Autonomous”, by contrast, is defined by the Merriam-Webster Dictionary as “undertaken or carried on without outside control”;[3] the principle advocated here is “man-out-of-the-loop”. In general terms, any system or machine that can act without human intervention can be said to possess some degree of autonomy.

LAWS are “robot weapons that once launched select and engage targets without human intervention”.[4] In other words, LAWS are designed with the capability to make decisions involving the use of lethal force independently. This has been made possible by the emergence of technologies for image processing and target identification, with a wide range of offensive and defensive capabilities. With better sensors and technologies, the Observe, Orient, Decide, Act (OODA) cycle, once the guiding principle of human decision-making, has been compressed and is now passed on to AI.[5] The self-initiated engagement of a target depends chiefly on the algorithm that has been set to match the “target profile”. This implies the absence of an operator who can anticipate and limit the effects of the measures taken during an operation.
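To make the mechanism concrete, the toy sketch below shows an OODA-style loop in which the Decide step is reduced to matching detections against a pre-set target profile. It is an illustration of the concept only, not any real system’s logic: all names (Detection, TARGET_PROFILE, matches_profile, run_cycle) are hypothetical, and a single optional human_approves callback is all that separates a “man-in/on-the-loop” design from a “man-out-of-the-loop” one.

```python
# Illustrative toy only: a schematic OODA-style decision loop, not any real
# weapon system. Every name here is hypothetical, invented for this sketch.
from dataclasses import dataclass

@dataclass
class Detection:
    signature: str      # e.g. the label emitted by an image classifier
    confidence: float   # classifier confidence between 0.0 and 1.0

# A pre-set "target profile": the machine engages whatever matches it.
TARGET_PROFILE = {"label": "armoured_vehicle", "min_confidence": 0.90}

def matches_profile(d: Detection) -> bool:
    """Orient/Decide collapsed into a pure algorithmic check."""
    return (d.signature == TARGET_PROFILE["label"]
            and d.confidence >= TARGET_PROFILE["min_confidence"])

def run_cycle(detections, human_approves=None):
    """One pass of the loop. With human_approves=None the 'Act' step fires
    automatically (man-out-of-the-loop); passing a callback restores a
    human veto (man-in/on-the-loop)."""
    for d in detections:                          # Observe
        if matches_profile(d):                    # Orient + Decide
            if human_approves is None or human_approves(d):
                print(f"ENGAGE {d.signature} ({d.confidence:.2f})")  # Act
            else:
                print(f"HELD: human vetoed {d.signature}")

# Example: a mis-classified civilian vehicle that clears the threshold would,
# with no human gate, be engaged purely on the strength of the algorithm.
run_cycle([Detection("armoured_vehicle", 0.93)])
```

The point of the sketch is that removing the callback changes nothing else in the system: autonomy is a single design decision, which is why the debate centres on whether that human gate should be mandatory.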

Plainly put, LAWS can initiate lethal force against targets through rapid assessment based on pre-set algorithms, with a claimed objectivity that removes the fallible human actor’s decision-making contingencies. LAWS can operate on land, in the air, on and under water, and even in space; and since the machines operate independently, there is a higher possibility of innocent lives being lost. LAWS will consequently alter the fundamental dynamics of warfare and other military operations, with humans yielding to machines the power to use lethal force.[6]

II. Potential security risks of LAWS

The possibility of reduced casualties and greater targeting accuracy is often advanced in support of developing this technology, and many states are leaning towards its adoption as part of their strategic military modernisation plans. But this raises questions of proliferation, miscalculation and faulty attacks.

  • Mass Destruction – Existing autonomous weapon systems are scalable. The harm a swarm of LAWS can inflict depends on the number of individuals in the target area, not on the number of operators: a single individual can initiate the programming of an entire swarm, which can cause wide-scale casualties irrespective of the size of the area. By comparison, fielding conventional weapons at this scale, however many guns and artillery pieces, achieves little unless adequate manpower is deployed on the ground.[7] The absence of casualties on the deploying side, together with the looming threat of technological advancement and proliferation, brings forward the potential characterisation of LAWS as “weapons of mass destruction”.
  • Algorithmic decision-making & Distinction – LAWS engage according to a specific “target profile” with the help of pre-set algorithms and intelligence. However precise the system, faulty intelligence can at times result in the death of innocent individuals.[8] Machines work solely on the basis of their set algorithms and the information they receive, so the identification of legitimate targets emerges as a key problem in the development of LAWS. Differentiating a journalist carrying a camera from an enemy carrying a weapon (or recognising an enemy preparing to surrender, in which case the use of lethal force is unjustified) is more challenging than expected.[9] This is destabilising at both the national and the international level, as it creates possibilities for conflict escalation.

  • Proliferation – This technology will not always remain in the hands of state militaries. It may be adopted by Violent Non-State Actors (VNSAs), criminals and rogue states, in whose hands the casualty impact of these weapons could be out of all proportion.[10] The added potential of swarm technologies materialising would be fatal for mankind. It is worth identifying the best measures to control the proliferation of this technology to VNSAs now rather than after the fact.

III. Development of LAWS across the world

States have come to see the adoption of LAWS as imperative, as an offensive and, for some, a defensive military capability. Driven largely by the security dilemma, states are in a constant push to modernise their existing technology.[11] The development of LAWS is already underway in states such as the United States, China, Iran, Israel, Russia, South Korea and France, among others.[12] These countries are also selling the weapons technology to other countries.

A growing number of states, including New Zealand, Norway, Germany, the Netherlands, Pakistan and the 55 countries of the African Union, have come forward in favour of a ban on this technology.[13] States such as Austria, Chile and Brazil have pushed for negotiations to ensure meaningful human control (MHC) over the functioning of these weapon systems, which would reduce the possibility of collateral damage.[14]

Yet great powers such as the US and Russia are blocking progress towards regulation while investing heavily in the development of land-, air- and sea-based LAWS. China, although it initially supported the need for a legally binding protocol, in 2016 expressed concern over LAWS’ lack of capacity for distinction, accountability and proportionality.[15] Most European countries are taking a toned-down approach focused on governance measures. In 2020, the European Parliament urged the European Union (EU) to establish a “global framework” on the military use of AI and pushed for an international ban on LAWS.[16] The Parliament also noted that LAWS should be employed only as a last resort and can be lawful only if there is human control over the machine.

Countries at the forefront of LAWS development, such as the US, also claim that existing international law is able to govern the technology. India’s position on LAWS is likewise worth noting: from top research bodies such as the Defence Research and Development Organisation (DRDO) to the Indian Armed Forces, the focus is on incorporating AI-related weapons technology into their capabilities.

Arms-race behaviour is an important concern in great-power security politics: states build up capabilities so as not to fall behind. But a comparable race in autonomy poses catastrophic risks, as increased machine intelligence and decision-making capabilities would be fielded well before their possible abilities and limitations have been analysed.[17]

a) Case study – Libya

According to a recent report submitted to the United Nations Security Council (UNSC), LAWS may have targeted human beings for the first time in March 2020, during the military conflict in Libya.[18] This is said to mark a major change in the development of the arms race. Since then, there have been numerous reports of swarm technology and autonomous weapon systems being used around the world.[19]

UN report S/2021/229 provides a detailed review of the attack conducted by Libya’s Government of National Accord, using Turkish-made STM Kargu-2 drones, against Libyan National Army forces.[20] The STM Kargu-2 (Kargu translates as “hawk” in Turkish) is a 7 kg flying quadcopter capable of both fully autonomous and manual targeting, and it can remain operational even when GPS and radio links face technical difficulties. It is also equipped with facial-recognition software and real-time target detection, can autonomously fire and forget once target coordinates are entered, and can conduct swarm attacks.[21]

An excerpt from the UN report notes: “Logistics convoys and retreating HAF [Haftar Affiliated Forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … Once in retreat, they [HAF] were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems.”[22] Although the UN report describes the Kargu-2’s autonomous fire-and-forget capability, Turkish authorities have denied this scenario. The claim that the Kargu-2 is capable of autonomous attack nonetheless remains plausible on its face.[23]

IV. Ethical Concerns

The use of LAWS has become a central question in ethical debates. The repercussions of past wars led to the adoption of various treaties, conventions and institutions intended to restrain cruel and excessive acts of violence.

The use of lethal autonomous weapons can reduce casualties among combatants while bringing greater risk to civilians. A significant question arises: how can machines decide matters of life and death? The act of taking life through decision-making that relies entirely on machines does not uphold the ethical and moral standards of society. Humans, however imperfect, are expected to provide justification for killing, and, with the establishment of several international institutions, most societies punish individuals who fail to comply with the rules. Individuals’ lives have intrinsic value, and the targeting of individuals by machines violates fundamental dignity, since machines are not moral actors.[24] It is therefore compelling to have humans in the loop to ensure that the actions taken uphold these standards.

Furthermore, the reduced political cost of going to war, combined with the absence of friendly casualties, lowers the threshold for going to war.[25] This could lead to regional or global instability and an elevated risk of conflict escalation. The accompanying lack of accountability could embolden state or military leaders to authorise targeted attacks without concern for their consequences.

Christof Heyns, the human rights legal scholar, pointed out that “decisions over life and death in armed conflict may require compassion and intuition”.[26] As the technology develops, there is an increased risk of dehumanising the attacks carried out. The International Committee of the Red Cross (ICRC) is pushing for a parallel requirement of a “basic”, “appropriate” or “meaningful” minimum level of human control that would maintain human control over these weapons and uphold moral responsibility in the use of force.[27]

V. Legal aspects of LAWS

Given the likely scenario in which states will deploy increasingly autonomous lethal weapons, it is important for policymakers to understand and weigh the potential risks of these weapons. Beyond the possibility of miscalculation and unintended escalation of conflict, a wide range of legal, ethical and moral concerns arises from the removal of humans from these life-and-death decisions.

The 1949 Geneva Conventions require that any use of force in armed conflict comply with the principles of distinction, proportionality and military necessity.[28]

Of particular concern is the applicability of the principles of proportionality and distinction to LAWS technologies. The principle of distinction requires parties to a conflict to distinguish between military personnel and civilians, and to spare the latter from damage and violence as far as possible.[29] Critics of LAWS often question whether a machine equipped with algorithms will ever be able to make such distinctions between targets. The principle of proportionality, for its part, ensures that only the force necessary to complete the objective is used, minimising collateral damage. The operation, action and behaviour of LAWS should be guided by the principles of distinction, proportionality and precaution prescribed in international humanitarian law (IHL); where they are not, their use should be considered unlawful.

LAWS are not comprehensively regulated under IHL. Although existing IHL is flexible enough to cover LAWS, significant grey areas remain in this class of weaponry, compounded by the blurring of the distinction between human and machine decision-making.[30] In the maritime domain, likewise, there is no settled view on how the United Nations Convention on the Law of the Sea (UNCLOS) and the law of armed conflict (LOAC) adapt to LAWS technology.[31]

Furthermore, despite the wide range of proposed regulatory measures and the many existing international treaties and laws, there has been no international consensus on the issue. The lack of a codified instrument poses real risks and heightens the security dilemma where states proceed to deploy LAWS as untested and unsafe weapons technology.[32] Moreover, international law contains no definition of LAWS; the very concept of an autonomous weapon system, lethal or otherwise, does not exist in it. Since the use of these weapons is no distant prospect, they were already employed in 2020, the international community must push strongly to formulate a robust definition of LAWS. The existing framework of international law on war crimes focuses on human actions (the decisions taken by humans); where the decisions are taken by the machine, there are no command-and-control implications. This leaves a major void in the applicability of much of international law.

a) Question of accountability & Meaningful Human Control

Here arises the question of introducing LAWS with MHC, and of how this can be adapted to existing legal frameworks to address the accountability gap.[33] LAWS can substitute for human soldiers, but they cannot be questioned for war crimes committed, nor can clauses on accountability be applied to them. If, on a particular mission, the robot soldiers and drones deployed commit war crimes, killing numerous non-combatants with casualties rising far beyond proportion, the question of accountability arises.

At the 2014 meeting on LAWS under the Convention on Certain Conventional Weapons (CCW), Germany pointed out that “Human control is the foundation of the entire international humanitarian law. It is based on the right to life, on the one hand, and on the right to dignity, on the other. Even in times of war, human beings cannot be made simple objects of machine action”.[34] The introduction of LAWS with no human on the loop, however, makes them different from remotely operated and conventional weapons. Malfunctions within the systems could amount to serious war crimes. With a fully autonomous lethal weapon, the operating individual knows only the general area, space and time of the target; the machine has the greater autonomy in finding targets prescribed by its algorithms and carrying the operation forward. This increasing autonomy in weapon systems raises the question of how much human involvement is required in lethal attacks.[35]

Prescribing complete autonomy to machines would hinder the ascription of “responsibility” in cases of war crimes. Even if individuals are brought in for questioning, several further layers unfold: should the programmer, the manufacturer, the state military personnel who ordered the attack, the creator of the algorithm, or the field operator who observes the attack be held responsible for acts committed in violation of IHL?

As long as regulation of LAWS remains unsettled, there will be war crimes without war criminals. In other words, LAWS are abstract entities, machines with pre-set algorithms carrying out their objectives, and the Nuremberg principles were laid down for men, not machines. It was the Nuremberg Tribunal that first held individuals accountable for violations of IHL, and that principle has since stood as an effective means of enforcing the laws of armed conflict.[36]

Compliance with IHL would require a machine to possess capabilities such as the formulation of rational and appropriate judgments, as well as the situational awareness on which decisions of proportionality and distinction can be based. Hence the strong push to bring forward “meaningful human control”, which would also clear up some of the legal and ethical concerns raised with respect to LAWS. Discussions at the Group of Governmental Experts (GGE) on lethal autonomous weapons have consistently advocated the adoption of MHC, although there is no internationally agreed definition of it.[37]

The GGE’s 11 guiding principles focus on human-machine interaction and offer broad recommendations on human responsibility.[38] The human tasked with launching the weapon must ensure that his or her actions comply with IHL. All possible measures must be adopted to reduce potential collateral damage, and the operations of a LAWS must be traceable to a human agent.[39] This obliges the individuals in control of the machine’s operations to act with ethics and morality in mind; one way of picturing the traceability requirement is sketched below.
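As a rough engineering reading of this traceability recommendation, and only as a hypothetical sketch (none of these names or structures come from the GGE text), each engagement decision could be required to fall within a spatial and temporal envelope signed off by a named human, and every decision logged against that person for later accountability:

```python
# Hedged sketch, not a real interface: "meaningful human control" read as an
# engineering constraint. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class HumanAuthorisation:
    operator_id: str      # the accountable human agent
    area: tuple           # (lat_min, lat_max, lon_min, lon_max) geofence
    valid_from: datetime
    valid_until: datetime

audit_log: list = []      # durable trace linking every action to a person

def within_envelope(auth: HumanAuthorisation,
                    lat: float, lon: float, when: datetime) -> bool:
    lat_min, lat_max, lon_min, lon_max = auth.area
    return (lat_min <= lat <= lat_max
            and lon_min <= lon <= lon_max
            and auth.valid_from <= when <= auth.valid_until)

def request_engagement(auth: HumanAuthorisation,
                       lat: float, lon: float, when: datetime) -> bool:
    permitted = within_envelope(auth, lat, lon, when)
    # Log every request, granted or refused, against the human agent.
    audit_log.append((auth.operator_id, lat, lon, when.isoformat(), permitted))
    return permitted

# Example: an engagement inside the authorised envelope is permitted and
# logged; one outside it is refused and logged.
auth = HumanAuthorisation("CDR-117", (34.0, 34.2, 36.1, 36.4),
                          datetime(2021, 3, 27, 6, 0),
                          datetime(2021, 3, 27, 8, 0))
print(request_engagement(auth, 34.1, 36.2, datetime(2021, 3, 27, 7, 30)))  # True
print(request_engagement(auth, 35.0, 36.2, datetime(2021, 3, 27, 7, 30)))  # False
```

Under such a scheme it is the audit trail, rather than the machine, that allows responsibility to be ascribed after the fact, which speaks directly to the accountability gap discussed above.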

VI. Suggestions and Recommendations

To avert the legal, moral and ethical risks posed by LAWS, it is essential to suggest and recommend a further course of action.

For over eight years, talks pushing for an all-out ban on lethal autonomous weapons have been under way around the world, but no proposal has yet been agreed by all states. Technology is progressing expeditiously and has overtaken the deliberations and negotiations taking place across the world. With the attack conducted on behalf of Libya’s Government of National Accord in 2020, it can rightly be said that the window for pre-empting this technology has passed. States need to come together and formulate policies and new laws that reckon with the catastrophic potential of LAWS in war. Correspondingly, this calls for a treaty specifying regulations on the deployment, structure, development, research, transfer and use of LAWS on the modern battlefield. A common consensus should emerge on applications and operational capabilities that adhere to ethical, moral and socio-economic standards.

An all-out ban on this technology would be the ultimate goal. But since states are moving to acquire it for fear of falling out of the arms race, and given the potential security dilemma, a more suitable approach would be restricted development of LAWS in strict compliance with international law.[40] More importantly, newly formulated legislation should be built to fill the gaps already present in existing international law. All states should commit to newly defined IHL rules setting out in depth the standards, frameworks and best practices to be adopted for LAWS.

To safeguard human dignity and individual accountability, effective and meaningful human supervision must be mandated to allow timely intervention.[41] The moral and legal agents in military operations are humans, not machines, so it is critical that MHC be incorporated into these technologies, enabling effective control over the operational framework. This leads to a conceptualisation of LAWS through the lens of human-machine interaction, which helps in understanding “autonomy” as an ongoing collaboration between the soldier, the commander and the machine or computer.

Thus, the human operator who initiates or activates a LAWS has an obligation to ensure that the system operates within the best possible ethical and legal framework. This requires the individuals involved to receive adequate training and a structured plan of action so that missions are carried out in line with best ethical practice.[42] The operator must have effective control of the system and reasonable certainty about the repercussions of the LAWS during a mission.

Protocols should be developed to shift from “full autonomy” towards autonomy meaningfully tempered by human control.[43] It is also recommended that states treat human-control criteria as an important area of study when considering the research, acquisition and development of lethal autonomous weapons.[44]

Furthermore, since the risk of proliferation of this technology is high, a solid system is needed to regulate it and prevent its illicit use by VNSAs and rogue states. Restricting access by terrorist groups needs extra attention, as these autonomous weapons are simple enough to be made. To reduce the broader risks, the sale of LAWS must be bound by strict, unbending measures that cut down the chances of such actors obtaining the weapons.

VII. Conclusion

Technology has shaped the nature of warfare over the years. Its potential cannot be confined to that of a mere instrument of war; rather, it shapes, influences and produces new kinds of outcome in the battlespace.[i] The introduction of LAWS, along similar lines, is bringing new dimensions to the theatre of operations in which violence is engaged. Although technological innovation is intended to create better operability in warfare, LAWS are something to be feared. Several states, institutions and think tanks have strongly opposed their introduction. But, as of today, there is no mutual consensus, treaty or agreement on a further course of action for grappling with the potentialities of LAWS.

One central consequence of LAWS is that there are no humans on the loop once the machine has been initiated. Compared with conventional weapons, decision-making autonomy lies more with the machine, and this challenges several of the ethical and legal responsibilities involved in carrying out an operation. The answer, prepared in advance, is adequate meaningful human control over the system.

Ultimately, as technology progresses, the conduct of war undergoes significant changes. It must be pointed out that confining IHL to the individual conduct of war is no longer sufficient. The rise of technology also demands changes in international law to ensure legitimate conditions during conflict engagements. Although global governance measures have continuously pushed for banning this technology and limiting its proliferation, the trajectory of its growth shows that LAWS are here to stay.

The entry of LAWS onto the battlefield also shapes the world we wish to live in. Raising the ethical, legal and moral standards by which we understand modern warfare, and pushing for changes with respect to LAWS, has become necessary; treating these technologies as inevitable could otherwise lead to a severe, potentially catastrophic situation. LAWS are changing the political landscape, and the prospect of making wars easier to wage should be feared. The possibilities are limitless, and the repercussions would be catastrophic. The international community should begin structuring plans for treaties on the regulated use of LAWS to ensure a standard ethical code of conduct during warfare.

DISCLAIMER

The paper is the author’s individual scholastic articulation and does not necessarily reflect the views of CENJOWS. The author certifies that the article is original in content and unpublished, that it has not been submitted for publication or web upload elsewhere, and that the facts and figures quoted are duly referenced as needed and are believed to be correct.

References
  1. Kyle Hiebert, “Are Lethal Autonomous Weapons Inevitable? It Appears So.” Centre for International Governance Innovation, 27 January 2022. https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-it-appears-so/
  2. Robert F. Trager and Laura M. Luca, “Killer Robots Are Here – and We Need to Regulate Them.” Foreign Policy, 11 May 2022. https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/
  3. Merriam-Webster Online, “Autonomous Definition & Meaning,” accessed 01 July 2022. https://www.merriam-webster.com/dictionary/autonomous
  4. Filippo Santoni de Sio and Jeroen van den Hoven, “Meaningful Human Control over Autonomous Systems: A Philosophical Account.” Frontiers in Robotics and AI, 28 February 2018. https://www.frontiersin.org/articles/10.3389/frobt.2018.00015/full
  5. Atul Pant, “Automation of Kill Cycle: On the Verge of Conceptual Changeover.” Manohar Parrikar Institute for Defence Studies and Analyses, 24 January 2020. https://idsa.in/idsacomments/automation-weapons-apant-240120
  6. Future of Life Institute, “10 Reasons Why Autonomous Weapons Must Be Stopped,” accessed 27 June 2022. https://futureoflife.org/2021/11/27/10-reasons-why-autonomous-weapons-must-be-stopped/
  7. Lethal AWS, “What Are Lethal Autonomous Weapons?” accessed 26 June 2022. https://autonomousweapons.org/
  8. Nathan Leys, “Autonomous Weapons Systems and International Crises,” Strategic Studies Quarterly 12, no. 1 (2018): 48-73.
  9. Wolfgang Rudischhauser, “Autonomous or Semi-Autonomous Weapons Systems: A Potential New Threat of Terrorism?” Federal Academy for Security Policy (2017): 1-4.
  10. Hayley Evans and Natalie Salmanowitz, “Lethal Autonomous Weapons Systems: Recent Developments.” Lawfare, 7 March 2019. https://www.lawfareblog.com/lethal-autonomous-weapons-systems-recent-developments
  11. Paul Dumouchel, “Lethal Autonomous Weapons Systems: Organizational and Political Consequences,” The Philosophical Journal of Conflict and Violence 5, no. 1 (2021): 95-107.
  12. Jeremy Kahn, “The World Just Blew a ‘Historic Opportunity’ to Stop Killer Robots – and That Might Be a Good Thing.” Fortune, 22 December 2021. https://fortune.com/2021/12/22/killer-robots-ban-fails-un-artificial-intelligence-laws/
  13. Human Rights Watch, “Killer Robots: Growing Support for a Ban,” 10 August 2020. https://www.hrw.org/news/2020/08/10/killer-robots-growing-support-ban
  14. Bedavyasa Mohanty, “Lethal Autonomous Dragon: China’s Approach to Artificial Intelligence Weapons.” Observer Research Foundation, accessed 15 November 2017. https://www.orfonline.org/expert-speak/lethal-autonomous-weapons-dragon-china-approach-artificial-intelligence/
  15. Brooks Tigner, “No ‘Human-out-of-the-loop’ for Autonomous Weapons, Says New European Parliament Report.” Janes, 11 December 2020. https://www.janes.com/defence-news/news-detail/no-human-out-of-the-loop-for-autonomous-weapons-says-new-european-parliament-report
  16. Michael T. Klare, “Autonomous Weapons Systems and the Laws of War,” Arms Control Today 49, no. 2 (2019): 6-12.
  17. Joe Hernandez, “A Military Drone With a Mind of Its Own Was Used in Combat, U.N. Says.” NPR, 1 June 2021. https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d
  18. Lethal AWS, “What Are Lethal Autonomous Weapons?” accessed 26 June 2022. https://autonomousweapons.org/
  19. United Nations Security Council, Document S/2021/229 (New York: UN Headquarters, 2021). https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement
  20. Joseph Trevithick, “Turkey Now Has Swarming Suicide Drones It Could Export.” The War Zone, 18 June 2020. https://www.thedrive.com/the-war-zone/34204/turkey-now-has-a-swarming-quadcopter-suicide-drone-that-it-could-export
  21. United Nations Security Council, Document S/2021/229 (New York: UN Headquarters, 2021). https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement
  22. Zachary Kallenborn, “Applying Arms-Control Frameworks to Autonomous Weapons.” Brookings, 5 October 2021. https://www.brookings.edu/techstream/applying-arms-control-frameworks-to-autonomous-weapons/
  23. Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (2016): 25-36.
  24. Peter Asaro, “On Banning Autonomous Weapons Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making,” International Review of the Red Cross 94, no. 886 (2012): 687-709.
  25. Horowitz, “The Ethics & Morality of Robotic Warfare,” 8.
  26. Arms Control Today, “Document: Ethics and Autonomous Weapons Systems: An Ethical Basis for Human Control?” July/August 2018. https://www.armscontrol.org/act/2018-07/features/document-ethics-autonomous-weapon-systems-ethical-basis-human-control
  27. Asaro, “On Banning Autonomous Weapons Systems,” 688.
  28. Klare, “Autonomous Weapons Systems and the Laws of War,” 11.
  29. Milad Emamian, “Autonomous Weapons Under International Humanitarian Law.” The Regulatory Review, 16 September 2020. https://www.theregreview.org/2020/09/16/emamian-autonomous-weapons-international-humanitarian-law/
  30. Abhimanyu Singh, “Lethal Autonomous Weapons Systems and the Legal Regime.” Manohar Parrikar Institute for Defence Studies and Analyses, April-June 2021. https://www.idsa.in/jds/lethal-autonomous-weapon-systems-15-2-2021
  31. Ronald Arkin et al., “A Path Towards Reasonable Autonomous Weapons Regulation.” IEEE Spectrum, 21 October 2019. https://spectrum.ieee.org/a-path-towards-reasonable-autonomous-weapons-regulation
  32. Daniel Lim, “Killer Robots and Human Dignity,” in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (2019): 171-176.
  33. Lim, “Killer Robots and Human Dignity,” 172.
  34. Paul Scharre and Kelley Sayler, “Autonomous Weapons & Human Control,” Center for a New American Security (2016): 4-5.
  35. Scharre and Sayler, “Autonomous Weapons & Human Control,” 43.
  36. Elke Schwarz, “The (Im)possibility of Meaningful Human Control for Lethal Autonomous Weapons Systems.” Humanitarian Law & Policy, 29 August 2018. https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/
  37. Daniele Amoroso and Guglielmo Tamburrini, “Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues,” Current Robotics Reports (August 2020): 187-194.
  38. Vincent Boulanin, Laura Bruun and Netta Goussac, “Autonomous Weapon Systems and International Humanitarian Law,” SIPRI (June 2021): 11.
  39. Abhimanyu Singh, “Lethal Autonomous Weapons Systems and the Legal Regime.” Manohar Parrikar Institute for Defence Studies and Analyses, April-June 2021. https://www.idsa.in/jds/lethal-autonomous-weapon-systems-15-2-2021
  40. International Committee of the Red Cross, “ICRC Position on Autonomous Weapon Systems,” accessed 13 July 2022, ICRC website.
  41. Horowitz, “The Ethics & Morality of Robotic Warfare,” 3-6.
  42. Vincent Boulanin and Maaike Verbruggen, “Mapping the Development of Autonomy in Weapon Systems,” SIPRI (November 2017): 133-135.
  43. Vincent Boulanin et al., “Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control,” SIPRI (June 2020): 1-37.