
Balancing AI Innovation and Moral Responsibility in Modern Warfare

Artificial intelligence (AI) continues to emerge as a transformative force in many areas of life, already beginning to revolutionize industries and change the way we live and work. The topic of AI in warfare will require increased attention from governments, policymakers and international organizations. This is largely due to significant advances in the development of autonomous weapons systems (AWS), which use algorithms to operate independently and without human control on the battlefield. More broadly, AI in its many forms has the potential to improve a wide range of military activities, from robotics and weaponry to intelligence gathering and decision-making.

This diversity of potential applications creates a unique set of ethical dilemmas. The potential benefits of AI in warfare include increased precision, reduced casualties, and even deterrence of armed conflict, most notably nuclear war. However, realizing those benefits would mean empowering machines to make life-and-death decisions, blurring the lines of responsibility and possibly violating fundamental principles of morality in war.

A Brief Overview of AI in War

As the Stockholm International Peace Research Institute notes, AI has become an important part of military strategies and budgets, fueling a broader “arms race”. Given the existing nuclear threat, geopolitical actors must therefore question the ethics of the continued weaponization of technology. Some believe these advances will eventually lead to zero-sum thinking dominating global politics. This logic is not new; Alfred Nobel hoped that the destructive power of dynamite would put an end to all wars.

AI is already being incorporated into military technologies such as drone swarms, guided missiles, and logistics analysis. Autonomous systems have long been part of defensive weaponry, such as anti-vehicle and anti-personnel mines. Future developments will continue to push for greater levels of autonomy. The United States is testing AI bots that can autonomously pilot a modified F-16 fighter jet; Russia is testing autonomous tanks; and China is developing AI-powered weapons of its own.

The goal is to protect human life while continuing to mechanize and automate the battlefield. “I can easily imagine a future where drones vastly outnumber humans in the military,” said Douglas Shaw, senior adviser to the Nuclear Threat Initiative. In the past, militaries saved lives by taking soldiers off the ground, putting them in planes, and arming them with missiles. Now, thanks to AI, they hope to spare even more of their personnel.

The Moral Consequences of Using AI in War

So far, so good. Save lives by using AI to control drones. Save lives by using AI to launch missiles. The difference between this technological leap in warfare and past innovations is the removal of human input from decision-making. With AWS and lethal autonomous weapons systems (LAWS), we are handing the power to kill a human being to an algorithm that lacks human intuition and judgment.

A number of ethical, moral and legal issues arise here.

Is it fair that a human life can be taken in war without another human being on the other side of that action? Does the programmer of a LAWS algorithm bear the same responsibility to represent their country as a fighter pilot does, or hold the same right to contribute to killing the enemy?

As with the ethical dilemmas surrounding autonomous vehicles, is it morally justifiable to delegate life-and-death decisions to AI-powered algorithms? From a technological perspective, the answer will depend in part on the transparency of AWS programming: the training process, the datasets used, the encoded preferences, and errors such as bias in these models. Even if we achieve adequate levels of accuracy and transparency, should AWS and LAWS be considered moral in war?
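To make the transparency point more concrete, the sketch below shows one kind of audit a reviewer might run against a classifier’s evaluation log: comparing false-positive rates across subgroups to surface encoded bias. It is a minimal illustration in Python; the function name, record format, subgroup labels, and data are all invented for the example and do not describe any real AWS codebase.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false-positive rate (non-combatants wrongly flagged
    as combatants) separately for each subgroup in the evaluation log."""
    flagged = defaultdict(int)  # non-combatants flagged as combatants
    total = defaultdict(int)    # all non-combatants seen, per subgroup
    for rec in records:
        if rec["label"] == "non-combatant":
            total[rec["group"]] += 1
            if rec["prediction"] == "combatant":
                flagged[rec["group"]] += 1
    return {g: flagged[g] / total[g] for g in total}

# Hypothetical evaluation log: each record holds the model's prediction,
# the ground-truth label, and a subgroup attribute used for the audit.
log = [
    {"group": "A", "label": "non-combatant", "prediction": "combatant"},
    {"group": "A", "label": "non-combatant", "prediction": "non-combatant"},
    {"group": "B", "label": "non-combatant", "prediction": "non-combatant"},
    {"group": "B", "label": "non-combatant", "prediction": "non-combatant"},
]

print(false_positive_rates(log))  # {'A': 0.5, 'B': 0.0}
```

If the flagged rates differ sharply between groups, the model’s errors are not randomly distributed, which is precisely the kind of encoded bias that transparency requirements are meant to expose.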

Moral Implications of Just War Theory

Just war theory, rooted in the writings of Saint Augustine and later systematized by Thomas Aquinas in the 13th century, evaluates the morality of war and ethical decision-making in armed conflict. Within its guiding principles of jus ad bellum (the justice of resorting to war) and jus in bello (justice in the conduct of war), the most important considerations are:

Proportionality: The use of force must be proportionate to the aim pursued and must not cause excessive harm or suffering compared to the benefits anticipated.

Discrimination: Also known as non-combatant immunity, this principle requires combatants to distinguish between combatants and non-combatants and to target only the former while minimizing harm to the latter.

It can be argued that the use of AI-based weapons and LAWS does not guarantee compliance with these principles.

Regarding proportionality, AI-enabled weapons can apply force with greater speed, power, and precision than ever before. Will that level of force necessarily be proportionate to the threat or military objective at hand, especially if used against a country with less technologically advanced weapons? Likewise, what if a LAWS receives faulty information, or hallucinates and makes an inaccurate prediction? This could lead to unnecessary and disproportionate applications of military force.

As for discrimination, these technologies are not 100% accurate. What happens if a facial recognition system cannot distinguish civilians from combatants before a missile is launched at enemy forces? This would undermine the moral distinction between legitimate military targets and innocent bystanders.
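A back-of-the-envelope calculation makes the stakes concrete. Both numbers below are assumptions chosen purely for illustration, not estimates of any real system:

```python
# Hedged illustration: both figures are assumptions, not measured data.
accuracy = 0.99          # assumed classifier accuracy per identification
identifications = 5000   # assumed identifications made over a campaign

expected_errors = (1 - accuracy) * identifications
print(f"Expected misidentifications: {expected_errors:.0f}")  # -> 50
```

Even a system that is right 99% of the time would, at this assumed scale, be expected to misidentify dozens of people, and the discrimination principle offers no tolerance for such errors.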

Case Study: The Kargu-2 in Libya

A UN panel of experts reported the possible use of a LAWS, the STM Kargu-2, in Libya in 2020, deployed by Turkish-backed forces against the Haftar Affiliated Forces (HAF). Described as “programmed to attack targets without the need for data transfer between the operator and the munition”, the drones were eventually neutralized with electronic jamming. However, the use of this remote aerial technology changed the course of what had previously been a “low-intensity, low-tech conflict in which casualty prevention and force protection were priorities for both sides”.

Although the conflict produced significant casualties, it is unclear whether the unmanned attack drones themselves caused any of them. Even so, the episode highlights the problems associated with the unregulated use of unmanned combat aircraft and drones.

The HAF units were not trained to defend against this form of attack, had no protection against aerial strikes (which took place even while the drones were operating offline), and continued to be harassed by the LAWS even as they retreated. This alone begins to violate the principle of proportionality, and even more so when one considers that the STM Kargu-2 changed the dynamics of the conflict. Reports go so far as to suggest that “Turkey’s use of advanced military technology in the conflict was a decisive element in… the uneven war of attrition that led to the defeat of the HAF in western Libya in 2020.”

International Cooperation and the Regulation of Military AI

Since 2018, UN Secretary-General António Guterres has argued that LAWS are both politically and morally unacceptable. In his 2023 “New Agenda for Peace”, Guterres called for a legally binding instrument to be concluded by 2026 that would prohibit LAWS operating without human oversight and regulate all other AWS.

This type of international cooperation and regulation will be necessary to help overcome the ethical issues discussed above. For now, AWS operating without human oversight pose the most immediate problems. The absence of a human decision-maker creates liability issues: without a chain of command, who takes responsibility for a malfunction or the general fallibility of an AI-based system?
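To see what “human oversight” could mean in engineering terms, here is a minimal, hypothetical sketch of an authorization gate: the system can refuse on its own, but it can never engage on its own, and every approval is tied to a named operator for later accountability. All types, names, and thresholds are invented for illustration and describe no real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class EngagementRequest:
    target_id: str     # hypothetical identifier produced upstream
    confidence: float  # classifier confidence between 0.0 and 1.0

@dataclass
class Authorization:
    request: EngagementRequest
    approved_by: str   # a named human operator: the chain of accountability
    timestamp: str

def authorize(request: EngagementRequest, operator: str,
              min_confidence: float = 0.95) -> Optional[Authorization]:
    """Refuse automatically below a confidence floor; above it, record an
    explicit human decision before anything can proceed."""
    if request.confidence < min_confidence:
        return None  # the system may decline, but never engage, on its own
    return Authorization(
        request=request,
        approved_by=operator,  # liability attaches to a person, not a model
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

The point of such a design is that responsibility is recorded at the moment of decision rather than reconstructed after the fact.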

Moreover, removing human oversight leads to a lack of accountability. Under the moral frameworks that govern traditional warfare, such as just war theory, there is no identifiable culprit for the actions taken by autonomous systems.

Finally, while there are benefits to the increasing use of AI for military purposes, how these technologies are ultimately used will determine whether they become a utopian solution or an extension of an already politically destabilizing arms race.

Thus, the ongoing debate around an international, legally binding framework for accountability in AI wars may well be one of the most important areas of AI regulation in the near future.
