Op-Ed: Artificial intelligence and the future of command & control

Artificial intelligence (AI) is one of the key technologies that will reshape the nature of contemporary warfare, along with the key force multiplier capabilities that underpin it. First among these is command and control (C2), which stands to benefit immensely from the introduction of AI technology, providing Australia with a potentially game-changing strategic edge, explains Paul Maddison, director of the UNSW Defence Research Institute.

Recently I was invited to speak at an Australian event focused on technologies that are shaping the future of military C2. I began by admitting that I am neither a computer scientist nor an engineer. I was, however, a naval operator, and spent over 30 years immersed in applying new technologies aimed at increasing the probabilities of mission success at sea and ashore.

I began my talk by stating that the strategic imperative for armed forces will always be to fight and win the nation’s wars. For Australia, looking forward to mid-century, this will mean focusing on the Indo-Pacific, and preparing for a range of threats, most significantly those posed by the Chinese Communist Party and its armed service the People’s Liberation Army (PLA).


To buttress against the sustained rise of the PLA towards peer status with western democratic militaries, Australia will need to protect the contribution that the Australian Defence Force makes to the west’s competitive advantage or overmatch of the PLA. Australia will need to do this in concert with like-minded allies such as the Five Eyes partners, core NATO members, and key regional allies such as Japan.

Essentially, this is the basis for the Australian government’s current defence policy, articulated earlier this year in the 2020 Defence Strategic Update, the Force Structure Plan, and the “More, Together” Defence Science and Technology Strategy 2030.

There is consensus among senior leaders on what needs to be done to generate and sustain a future joint, interagency, expeditionary, distributed and multi-domain coalition-interoperable force that is able to sense and act from seabed to space, within cyber, cognitive and social domains, across the spectrum of operations, and from disaster response to general war.

The ADF will need to be able to manoeuvre in all domains in a way that is smarter and faster than any adversary, particularly the PLA.

To command armed forces effectively and confidently has always been a challenge, and it will get harder. It is a significant challenge to command well under relatively benign conditions, such as during a Rim of the Pacific (RIMPAC) exercise, on humanitarian and disaster relief operations, or in relatively uncontested theatres such as the first Gulf War in 1991 or East Timor in 1999.

It will be something altogether different to succeed in a fiercely contested environment where the adversary is disrupting military activities at every opportunity, aiming to put the ADF onto the back foot: employing everything from disinformation campaigns that undermine public confidence, to denial of the electromagnetic spectrum, to autonomous sensors and weapons delivering lethal effects at an overwhelming pace.

If we are to believe the open source literature coming out of China, that is exactly what the PLA is striving to achieve. Through the concept of civil-military fusion, the harnessing of all instruments of national power for military purposes, the PLA is looking to accelerate disruptive applications of emerging technologies to grow its capabilities and improve its chances of success across the spectrum of operations.

These technologies include quantum computing, hypersonics, directed energy, multi-domain sensing, augmented human performance, trusted autonomous systems, human-AI teaming, swarm operations, and offensive space and cyber operations, to name a few.

Just as important are the ethical, moral, legal and even philosophical questions with which experts are grappling around human-machine cognition, and what that might mean for rules of engagement, authorised levels of force, and the accountability of human commanders. We must assume a future in which adversaries will choose not to allow their lethality in operations to be constrained.

Instead, they will view any checks and balances imposed upon the ADF as a weakness to be ruthlessly exploited. War is, after all, an act of violence intended to compel an adversary.

What will the future battlespace look like? As the Chief of the Defence Force, General Angus Campbell, remarked in 2019, “if the ADF is going to a fight in 2025 it will do so with the force structure that is already in the field, in the air, and at sea today”.

What about in 2030? In a perfect defence procurement world, it will look like the force laid out earlier this year in the Force Structure Plan, enabled by the $270 billion the government has committed for new capabilities over the next 10 years, including a significant investment in defence research.

It follows that the ADF in 2030 will be enabled by improved sensors, greater lethality, trusted autonomy, and robust command and control. It will, however, still operate under a human-in-the-loop paradigm.

What if we cast our view out further to mid-century? What will the Indo-Pacific operating environment look like in 2050?

If China’s intended strategic trajectory to the symbolically significant centennial of the People’s Republic in 2049 holds true, and the PLA has fielded disruptive competitive advantage capabilities, then it is safe to say that the joint operating environment will have significantly changed.

Contrary to predictions that Moore’s Law is in decline, the evidence today suggests that computing power will have increased perhaps a thousandfold by 2035, and much more by mid-century. Even small, low-power wireless devices will likely have terabyte communications capabilities. Every sensor, every node and every dismounted soldier system across the network will share situational awareness, while AI and deep-learning combat and analysis support systems re-write their own knowledge, learning and adapting their own behaviours in real time based on their digital sense-making experiences.
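The “thousandfold by 2035” figure is consistent with simple exponential growth. As a rough sanity check, assuming (purely for illustration) that effective computing power keeps doubling roughly every 18 months from a 2020 baseline:

```python
def compute_growth(years: float, doubling_period_years: float = 1.5) -> float:
    """Multiplicative growth in computing power after `years`,
    given a fixed doubling period (an illustrative assumption)."""
    return 2 ** (years / doubling_period_years)

# 15 years to 2035 -> 2^10, i.e. roughly a thousandfold
print(f"By 2035: ~{compute_growth(2035 - 2020):,.0f}x")  # ~1,024x
# 30 years to 2050 -> 2^20, "much more by mid-century"
print(f"By 2050: ~{compute_growth(2050 - 2020):,.0f}x")  # ~1,048,576x
```

The doubling period is an assumption, not a forecast; a slower cadence simply stretches the same curve without changing the qualitative point.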

The pace of operations will require a human-AI symbiosis that is imagined today as science fiction. The strategic leadership will need to have achieved a degree of human-machine trust that allows AIs to fight the battle when operational pace requires it. Commanders at the tactical, operational and strategic levels of war will need to be comfortable making the switch to “auto”, or in other words, to stepping out of the loop as humans and trusting the algorithms to get it done.

I recently heard this described as a “benevolent AI dictatorship” approach to C2. It is an apt metaphor.

As a former commander, I still find the idea of AI decision-making very uncomfortable, much as many feel about driverless cars. Perhaps those born today, who could find themselves in a fight 20 years from now, will be instinctively comfortable with their AI relationships.

Still, the idea of getting into a shooting war, where the machines are “weapons free” and the commanders are anxiously waiting for the next point when campaign time slows enough to allow a revised human assessment, and to reveal new decision options, is an idea I wrestle with.

That discomfort will increase the higher up the chain of command one goes, to the future equivalents of HQ Joint Operations Command and the whole-of-government National Security Committee chaired by the Prime Minister. A staggeringly high degree of trust will be required to allow machines or algorithms, again from seabed to space, to act autonomously with potentially grave implications, not least of which could be massive losses of life.

Perhaps I have got this all wrong. Maybe the answer will not be found by viewing the problem in sequential or binary human-AI terms where the human in the loop achieves maximum cognitive decision capacity in a mission, and then hand-balls over to the AI as it accelerates into the fight at warp speed. Instead, the answer may be found in a true human-machine symbiosis, where the humans and AIs tasked in command and control are trained together to work in harmony to co-ordinate their actions. In this way the ADF could achieve, and sustain, decision superiority by means of human-machine partnerships.

But what would that look like? I am not sure, but it would certainly have to account for the strengths and limitations of both the humans and the machines to optimise the assignment and sharing of tasks in the decision cycle. The Commander might retain key tasks that require expert judgement, for instance in resolving strategic policy ambiguity, while the AI could manage multi-domain sensor data collection, analysis, and weapons release.

Still, a lingering question for me remains. Will this path lead to a highly robust, distributed command and control capability? Or will we have descended into a non-understandable and uncontrollable chaos, a digital fog of war at the speed of light? If so, what can we do to avoid it?

The key I think is in making digital twin technology a cornerstone for research, experimentation, and learning around how to implement new command and control constructs across a rapidly evolving operating environment.

This environment will be challenging. It will include seabed arrays; autonomous underwater distributed sensor systems that challenge the stealth of submarines; autonomous airborne sensor and weapon systems; long-range sensing and strike systems delivering destructive effects at Mach 10; sensors that detect incoming multi-axis hypersonic and directed energy threats; and automatic threat prioritisation, target assignment and countermeasure launch, all achieved within a detect-to-engage sequence that is far beyond unaugmented human cognitive capacity.

Overhead, hardened, GPS-independent satellite constellations running edge AI will provide sensing, targeting and assured communications support in a contested space commons. Adversary in-orbit disruptive effects will seek to degrade our space-based C2, matched by an ability for self-healing and autonomous re-tasking across the constellation.

Communications networks will be reconfiguring in response to enemy AI-driven attacks across the electromagnetic (EM) spectrum, and instantly launching anti-denial defences aimed at negating an adversary’s freedom of EM manoeuvre.

Adversary AI will be conducting deceptive communications, using digital decoys to disrupt our ability to build and sustain an accurate, joint multi-domain appreciation of ground truth across the battlespace. We will need superior and secure algorithms to instantly recognise and dismiss ghost targets and deception activities, to derive intelligence from our own AI support systems and initiate counter-deception actions.

The degree to which computer science will continue to disrupt the operating environment is eye-watering; however, we must be wary of only viewing C2 through a technological lens. C2 is fundamentally a human activity, and notwithstanding the rate at which the enabling technologies are evolving, it will remain people focused.

A new type of person will be required: one who is trained to partner with AI, understands AI, and knows how to augment their own decision-making with software-supported analysis and action. There is a need to re-assess the desired core competencies of future military leaders, at all ranks and across all occupations, to ensure that their professional military education prepares them to partner effectively with AI to achieve mission success.

A sound foundation in data analytics, data visualisation, cyber security engineering, autonomy, machine learning, swarm tactics, and human-AI teaming will be essential if the ADF is to be successful at developing and implementing game-changing new approaches to agile C2.

Yet, it is human creativity and the ability to innovate that will remain a military leader’s strongest asset. No matter how powerful the human-machine symbiotic relationship becomes, the application of armed force will remain at its core a human activity.

These are complex and vital truths. All in the Australian defence ecosystem will need to work urgently, as a team. This team is the ADF, the Department of Defence, defence industry, academia where I have the privilege to serve, publicly funded research agencies, and international partners. It is in all of our national interests to do so.

Paul Maddison is the director of the UNSW Defence Research Institute, where his focus is on strengthening UNSW’s impressive defence research network in support of the Australian national interest. A recently naturalised Australian, Maddison was formerly Canada’s high commissioner to Australia, and formerly Commander of the Royal Canadian Navy.
