Machine war: How AI and other technologies are shaping the fighting in Gaza

By: David Hollingworth

Artificial intelligence and facial recognition technologies are an essential part of the IDF’s targeting doctrine – here’s how it works, where it might break international law, and how Australia is looking to tackle the technology.

Battlefields have always driven technological innovation, but the modern battlefield is seeing the rapid development and deployment of whole suites of new technologies.

One, however, is both a powerful force multiplier for battlefield commanders and a cause for serious concern – artificial intelligence.

While autonomous systems such as drones are already a game changer, AI-powered combat management platforms promise to give commanders ready access to vast amounts of data to aid rapid decision making. But the technology already in use by many militaries is something of a double-edged sword, and a recent paper on the use of AI by Israeli forces in Gaza illustrates both the benefits and the challenges.

The Lavender

The Israel Defense Forces (IDF) is using several AI tools to assist in its prosecution of combat operations in Gaza, particularly when it comes to generating targets.

The IDF uses an AI decision support system (AI-DSS) referred to as “the Lavender” to identify members of Hamas and other terrorist groups. The system is reported to rely on facial recognition technology (FRT) to identify potential targets, alongside a host of other data sources, including geospatial, human and open-source intelligence. The Lavender’s findings are passed to intelligence analysts for review before being forwarded to commanders in the field.
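
To make that workflow concrete, the sketch below shows – in purely hypothetical terms – how a decision-support pipeline of this general shape could gate machine-generated nominations behind mandatory human review. Every name, field and threshold in it (Nomination, REVIEW_THRESHOLD, analyst_review) is an assumption made for illustration; none of it is drawn from the Lavender or any real system.

```python
# Purely illustrative sketch of a human-in-the-loop decision-support gate.
# All names, fields and thresholds are hypothetical assumptions; nothing
# here describes the Lavender or any real military system.
from dataclasses import dataclass

@dataclass
class Nomination:
    subject_id: str
    model_confidence: float  # score produced by the matching model
    sources: list[str]       # e.g. ["FRT", "geospatial", "OSINT"]

REVIEW_THRESHOLD = 0.9  # assumed cut-off below which nominations go no further

def analyst_review(nomination: Nomination) -> bool:
    """Stand-in for the human step: an analyst must independently confirm
    a nomination before it can be passed to anyone in the field."""
    print(f"review required for {nomination.subject_id} "
          f"(confidence {nomination.model_confidence:.2f}, "
          f"sources: {', '.join(nomination.sources)})")
    return False  # default to rejection until a human positively confirms

def pipeline(nominations: list[Nomination]) -> list[Nomination]:
    confirmed = []
    for n in nominations:
        if n.model_confidence < REVIEW_THRESHOLD:
            continue  # low-confidence output is dropped, not acted upon
        if analyst_review(n):  # human review is mandatory, not optional
            confirmed.append(n)
    return confirmed

if __name__ == "__main__":
    pipeline([Nomination("subject-001", 0.93, ["FRT", "OSINT"]),
              Nomination("subject-002", 0.41, ["geospatial"])])
```

The design point is simply that the machine only nominates: a person must positively confirm a target before it reaches a commander – the very step that, as discussed later in this piece, risks becoming a rubber stamp under time pressure.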

The IDF also uses a system called Where’s Daddy? to track the location of Hamas members. It reports back to the military when a suspected member enters their home, which will often lead to a strike being called in on the location, regardless of whether family members are also present.

Many strike missions are also planned using another AI-powered tool, called Fire Factory. According to Bloomberg, this platform uses “data about military-approved targets to calculate munition loads, prioritise and assign thousands of targets to aircraft and drones, and propose a schedule” of operations. Another tool, the Gospel, tracks known locations and buildings from which militants operate.

All of these systems are utilised by Unit 8200, which is part of the Israeli Intelligence Corps.

According to a paper published on 23 May, titled The Use of the ‘Lavender’ in Gaza and the Law of Targeting: AI-Decision Support Systems and Facial Recognition Technology, the Lavender, in conjunction with these other tools, generated somewhere in the region of 37,000 targets in the first six months of the conflict, and is reported to have a 90 per cent success rate in positively identifying targets. Taken at face value, that figure still implies that roughly one in 10 of those targets – on the order of 3,700 people – may have been misidentified.

“At the time of writing, more than 40,000 Palestinians have been killed in Gaza since 7 October 2023, at least 92,401 Palestinians have been wounded, and more than half of Gaza’s buildings destroyed or damaged,” the paper’s author, Emelie Andersin, said.

“This scale of civilian casualties and damage to civilian infrastructure raises serious concern about the use of AI in military targeting decisions and its ability to mitigate civilian harm in battlefield targeting.”

The problem

While there’s no doubt that systems such as these can streamline military decision making, there are several problems inherent in the way these platforms produce their intelligence.

For one thing, there is a very human tendency to trust machine systems. If the Lavender says a target is a known militant, analysts may be tempted to trust the system more than they should, leading to poor intelligence that is, in turn, passed on to commanders in the field. At the same time, this intelligence is produced very quickly, which may put pressure on commanders to make decisions just as fast, again leading to poor outcomes and unnecessary casualties.

Second, biases are very likely baked into these systems by the way the AI platforms have been trained. For instance, the skin colour and gender of the individuals in the images used to train AI-based systems can have a serious impact on the resulting facial recognition algorithms.

“... assume a machine learning model is fed with videos and images of people of colour subject to sampling bias, because the developer believes that this group are likely to be terrorists due to racial prejudice,” Andersin said.

“During the training phase, the algorithms will be taught to disproportionately label that group as ‘valid’ lawful targets far more frequently than other groups.”
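
To illustrate the mechanism Andersin describes, here is a minimal synthetic sketch. It assumes nothing about the Lavender or any real system, and uses nothing more than scikit-learn’s standard logistic regression on made-up data: because the skewed training sample associates the “positive” label overwhelmingly with one group, the model learns that group membership itself is predictive and flags that group far more often, even though both groups behave identically.

```python
# Synthetic demonstration of sampling bias; no real data or system involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_people(n, group):
    # Behavioural features are drawn from the same distribution for both
    # groups, so group membership carries no genuine signal.
    features = rng.normal(size=(n, 3))
    group_col = np.full((n, 1), float(group))
    return np.hstack([features, group_col])

# Biased sample: almost all "positive" labels were collected from group 1,
# purely as an artefact of how the data was gathered.
x_train = np.vstack([make_people(900, 0), make_people(100, 0),
                     make_people(100, 1), make_people(900, 1)])
y_train = np.concatenate([np.zeros(900), np.ones(100),
                          np.zeros(100), np.ones(900)])

model = LogisticRegression().fit(x_train, y_train)

# Score a fresh, perfectly balanced population of each group.
print("flag rate, group 0:", model.predict(make_people(10_000, 0)).mean())
print("flag rate, group 1:", model.predict(make_people(10_000, 1)).mean())

# The learned weights expose the problem: the group-membership coefficient
# dwarfs the genuine behavioural features.
print(dict(zip(["f1", "f2", "f3", "group"], model.coef_[0].round(2))))
```

In a toy case like this, the problem is easy to expose by printing the learned weights; that kind of scrutiny is far harder with the opaque systems discussed next.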

Another issue is that these systems are essentially black boxes that are rarely fully understood by those who use them. Knowing how a machine learning system generates its results is just as important as the results themselves, because it gives users a clearer picture of the system’s strengths and weaknesses.

But arguably, the greatest issue with such AI systems is in relation to international humanitarian law (IHL).

In IHL, there are three important principles that shape the legality of targeting, and AI systems are highly likely to lead to breaches of all of them. The principle of distinction refers to the requirement for militaries to distinguish between combatants and civilians and act accordingly. The principle of proportionality requires militaries to weigh the possibility of collateral damage, either to civilians or to civilian property and infrastructure, while the principle of precaution calls upon decision makers to take “constant care” to protect non-combatants.

The IDF’s use of FRT and AI-generated intelligence falls short of every one of these principles.

“Ensuring the lawful use of AI-DSS in armed conflicts requires thorough verification to confirm that recommended targets are both accurate and not protected from direct attack under IHL. There is a risk of over-emphasising the need for speedy decision making at the cost of harm to the civilian population due to inaccuracy,” Andersin said.

“To mitigate this risk and to comply with IHL obligations, it may be necessary to limit the role of AI-DSS to certain tasks related to the use of force, restrict its use in contexts with a high civilian presence, and slow down the military decision-making process.”

The Australian position

There are lessons to be learned from the IDF’s experience, though they are mostly lessons in how not to employ AI. This matters because the Australian Defence Force is looking at exactly this area.

The latest issue of the Australian Army Journal, an official publication of the Australian Army Research Centre, includes a paper titled Operational AI Integration and Governance in the Australian Army, which tackles exactly how AI can be harnessed on the battlefield for the greatest advantage. AI, the paper argues, is a force multiplier, a safety feature and a decision-making advantage, with machine learning able to continually adapt to the modern battlefield.

But for all its advantages, the paper’s author, Benjamin J Wood – land autonomy policy liaison officer with the Robotics and Autonomous Systems Implementation and Coordination Office in Army HQ – is aware of the challenges.

“Despite the potential operational flexibility created by AI technology, many commentators nevertheless contend that various technical, organisational, institutional, and cultural limitations curtail the capacity of AI to revolutionise or even enhance current means of warfighting. Among the most prominent causes for cynicism is the tension between the desire of militaries to understand and predict the tools of warfare they command, and an incapacity of humans to easily explain the chain of ML-generated logic responsible for AI outputs,” Wood said.

“This tension is further exacerbated by modern AI models, the outputs of which may vary depending on contextual minutia or their own self-adjusted ML algorithms. The ability to train ML algorithms to produce reliable and predictable outputs for military applications depends on access to operational datasets of substantial size. Critics contend that such datasets ‘often do not exist in the military realm’. Further, sharing such outputs with industry will often be restricted by information security policies.”

So the challenges are there, but with a clear-eyed understanding of them, hopefully the ADF can avoid making the same mistakes – or making the same decisions – as the IDF.
