The use of Artificial Intelligence in warfare

This issue was recently discussed at a meeting of members of the Just Algorithms Action Group.

In recent days we have read reports of AI being used in the ongoing conflict in Gaza. It is reported that Israel used an AI-powered database called ‘Lavender’ to identify human targets and another called ‘the Gospel’ to identify buildings and structures. An experimental facial recognition technology known as ‘Red Wolf’ has also been used to track Palestinians and determine restrictions on their freedom of movement.

The Guardian on 3 April 2024 noted: “Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions (…)”, and described the very broad criteria the AI had been given for selecting targets, allowing large numbers of civilians to be killed in the process. The machine was able to generate a vast number of targets in a short time with very little human intervention, almost automating the attacks.

Allocating the task of killing people to an automated system offers a route to remove the burden of human complicity and relieve the consciences of those setting the rules. Victims of this system, many of them innocent bystanders, are reduced to something like avatars rather than human beings.

But what about the workers who design these products? What agency do they have? Can they stop this?

A former Google whistleblower, William Fitzgerald, has written about his experience. He played a role in Google’s withdrawal from Project Maven, a US military contract to develop AI for military drones. He says that Google is very different from what it was a few years ago: it has tightened its rules on employees’ involvement in politics, and recently fired more than 50 employees for ‘disruptive activity’. Those employees had been asking for transparency on Project Nimbus, a joint Google/Amazon contract to provide cloud technology to Israel’s government and military. Their campaign was led by No Tech For Apartheid, a US-based tech worker movement.

All this gives hope for the future, but at what price? Fitzgerald ends his article with this analysis: “A document that clearly demonstrates Silicon Valley’s direct complicity in the assault on Gaza could be the spark. Until then, rest assured that tech companies will continue to make as much money as possible developing the deadliest weapons imaginable.”

The use of AI in the conflict in Gaza is just one example of a wider problem: there is a global trend towards the increased application of military AI and other advanced technologies in conflict, including reports of AI being used in Ukraine. AI technologies can be used by weapons systems to select and engage targets.

In addition, Facebook’s algorithms have been used to amplify hate speech during Ethiopia’s Tigray civil war, and AI voice cloning has been used to spread disinformation in Sudan’s civil war.

All of these uses of AI directly contribute to violence.

The widely reported use of military AI in Gaza escalates this trend, giving rise to urgent questions for the UK Government.

JAAG recognises that these developments are extremely disturbing. As a Quaker-led organisation, our primary desire is for peace. In particular, we urge the new UK government to do what it can, both on the international scene and in controlling the UK defence industry, to stop civilian populations and infrastructure being used as testing grounds for unregulated technology development with lethal effect.

If you would like to find out more, see the UK Campaign to Stop Killer Robots and its petition, which is run jointly with Amnesty International.
