Moral and Ethical Concerns of Autonomous Weapons

Weifan Zhou / 2024-05-08


Note: This is an assignment from my Cybertechnology Ethics (PHIL 222) class at Illinois Wesleyan University. The essay was finished on April 30, 2024.


In modern wars such as the war in Ukraine, advanced technologies like drones are used to kill enemies without soldiers going to the battlefield. Some of these drones use artificial intelligence to target people, which has raised concern about whether it is too cruel to use autonomous weapons in war. In this paper, I argue that developing autonomous weapons is morally wrong because it puts civilians under threat. To reach this goal, I will define autonomous weapons systems, illustrate the ethical concerns raised by war and by such weapons, and discuss potential objections.

Before discussing potential issues of autonomous weapons, it is essential to define what an autonomous weapons system is. Asaro provides a definition based on the International Committee of the Red Cross: “an autonomous weapon system is any system that automates the critical functions of targeting and engaging a weapon. This means that the targeting and use of force must be automated for the system to be considered an autonomous weapon” (213). This definition highlights the key characteristic of autonomous weapons: the complete absence of human intervention in the critical functions of targeting and engaging targets. It distinctly separates autonomous weapons from traditional or semi-autonomous systems such as drones, which still require human control for critical decision-making tasks like selecting and confirming targets.

The primary moral concern about autonomous weapons is that without human control, such weapons can harm civilians more easily. Asaro notes that “autonomous weapons could be easily designed, altered, or manipulated to purposely harm civilians… Despots and tyrants might turn such weapons against their own people or apply them to genocidal ends, or terrorists might use them to attack civilians” (215). With traditional weapons, which are more complicated to control, decision makers (such as a president) cannot conduct attacks themselves but must delegate that power to the army; it usually takes a long time for orders to pass layer by layer down to military forces, who can delay or even refuse to attack. With autonomous weapons, which only require setting a target, decision makers can operate the weapons themselves immediately. This issue raises significant ethical demands not only concerning the autonomous weapons themselves but also regarding the morality of their operators. However, it is almost impossible to ensure that every user of such weapons meets such a high moral standard. Consider a scenario in which an unethical individual owns such a weapon: even without engaging in large-scale warfare, they could use it to carry out targeted assassinations against someone they hate. The danger is compounded by the fact that these owners need neither military expertise nor to manually select targets; the weapon can autonomously execute lethal actions at any time without direct human oversight.

Another concern about autonomous weapons is that if they are hacked, civilians and infrastructure may be attacked even when the weapons were never intended to target civilians. Since autonomous weapons run on computer technologies, relying on algorithms and input information (such as locations and facial images), an autonomous weapon may lose its accuracy in determining targets if the GPS signal is interrupted or the database is corrupted during a cyberattack, and may instead open fire on places such as residential zones. Asaro describes one such possibility: “one could do this by attacking its sensors or simply manipulating what those sensors capture… These sensors respond to signals from GPS satellites in space and compute their location from the signals of multiple satellites” (221). Such misdirected targeting can cause unexpected casualties in residential areas and damage to buildings, which raises further moral concern.

Some may still be optimistic about the development of autonomous weapons systems, because if those weapons become too powerful, like nuclear bombs, people may stop using them, which could prevent wars and casualties. After World War II, when two atomic bombs killed enormous numbers of civilians in Japan, there was no further worldwide war, because any country with nuclear bombs could simply destroy its enemies. From this perspective, the world has become more cautious about war: for instance, during the Prague Spring, when the Soviet Union invaded Czechoslovakia, Western allies such as the UK and the US did not come to Czechoslovakia's aid because the West was afraid of escalating the conflict into another world war. Instead, most hot wars gave way to economic sanctions, cyberattacks, arms races, space races, and so on, which significantly reduced casualties. Similarly, some imagine a scenario in which autonomous weapons become powerful enough to cause huge casualties once or twice, like nuclear bombs, after which people realize they should not use autonomous weapons anymore because they are too powerful. This claim holds that when autonomous weapons become as powerful as nuclear weapons, people will no longer dare to use them or initiate wars, and eventually those advanced autonomous weapons will bring peace and become beneficial; more civilians will survive thanks to the reduction in hot wars.

My response to this objection is that such peace depends on defense systems not developing as quickly as weapons. One reason nuclear weapons are considered so powerful is that they can destroy almost any physical defense system, so that no one can escape death. But what if a new defense system could provide a “shell” over a city and prevent citizens from being killed by nuclear weapons? Consider this scenario: two hostile countries each possess both this new defense system and nuclear weapons. One day both plan to declare war, and since each knows the other has a defense system that can potentially prevent civilians from being killed, they will use nuclear weapons to bomb each other without hesitation. They may assume that nuclear weapons are not as threatening as in the past because each side has a defense system to keep it safer, and thus they will resume wars and nuclear strikes. However, accidents may happen: what if the defense system does not work, or is hacked, or is so expensive that some regions are left unprotected? All of these possibilities can cause more casualties than traditional hot wars. Similarly, if many countries come to possess a defense system against autonomous weapons, and people resume autonomous weapon attacks on each other because everyone has such a system, the brief peace will end; in the worst case, there may be even more casualties, because people will no longer stay alert to the harm of autonomous weapons systems, nor will they rethink the risk of the defense system failing. Thus, in the long run, autonomous weapons cannot decrease civilian deaths and may even increase them.

In conclusion, I think that developing autonomous weapons systems is unethical, especially with respect to civilians, because immoral individuals who control such weapons, hackers, or a false sense of security from defense systems can all put citizens at risk of injury or death. In my view, rather than developing ever more powerful weapons, it is more important for people to recognize the potential harm of these weapons, whether they are powerful or not.


Work Cited

Asaro, Peter. “Autonomous Weapons and the Ethics of Artificial Intelligence.”
