A weapons system capable of making its own decisions sounds like something straight out of a Terminator movie. But once lethal autonomous weapons are out in the world, there may be no turning back.

“It’s very possible that we can’t put the genie back in the bottle with lethal autonomous weapon systems,” says Dr Michael Richardson, who was recently named a Top 5 Humanities Researcher by the ABC for his work on drone technologies.

Dr Richardson, who researches political violence and emerging technologies, says developments in lethal autonomous weapons are accelerating rapidly. He says these systems are not yet deployed operationally, but there are several reasons why we should all be concerned.

“Most of the major militaries around the world are developing lethal autonomous weapons of different kinds, sometimes even in partnership with big tech companies,” says the Senior Research Fellow from UNSW Arts & Social Sciences.

“It’s a big question – what does it mean to hand over some of the decision making around violence to machines? Everybody on the planet will have a stake in what happens on this front.”

Military technologies that are close to being autonomous are already in the field. With the help of drones, distant battles can increasingly be fought from control stations full of screens and interfaces, he says.

'...the threshold that needs to be crossed is not so much a technological one at this point, it’s a moral one or a strategic one.'

The real transformation, however, is in ‘predicting’ threats – and eliminating them – before they arise.

“With the US military, in particular, lethal drone strikes are carried out based on a determination of whether someone or some group of people might become a threat – that is, they might do [harm] in the future,” he says.

“If your threat is someone pointing a gun in your face, the threat is pretty imminent. But someone driving a car around in rural Afghanistan with a rice cooker in the back and a bag of nails … you might have some data points that would suggest the person is going to create an improvised explosive device that could put troops at risk.

“In that instance, the threat is several steps removed from what you’re observing. The person might also just have a new rice cooker and a broken fence to fix. So, you have the potential to kill someone based on a prediction about what might come about, rather than based on anything they’re specifically doing at the time.”

While militaries are turning to machine-learning algorithms to identify threats before they happen, they’re yet to hand over control to the machine. But Dr Richardson says that isn’t too far a leap.

“The move towards killing that is intensely predictive is certainly happening, and it’s a very scary development,” he says. “We would have the technological capacity in many instances to take human decision making out of the process and to push those predictions to the forefront.

“We haven’t necessarily granted decision making power to those technologies to fire a weapon, but the threshold that needs to be crossed is not so much a technological one at this point, it’s a moral one or a strategic one,” he says.

Regulating lethal autonomous weapons

Dr Richardson says global governance and arms control are among several measures we need to take to restrict lethal autonomous weapons.

“While that didn’t stop the development of nuclear weapons, widespread public opposition could help to halt the development of lethal autonomous weapon systems, or at least slow the production of the technology,” he says.

Associate Professor Michael Richardson co-directs the Media Futures Hub and Autonomous Media Lab, and is an Associate Investigator with the ARC Centre of Excellence on Automated Decision-Making + Society.

“But at the moment, lethal autonomous weapon systems are not regulated under the Convention on Certain Conventional Weapons. In fact, there’s not even an accepted and agreed definition of what a lethal autonomous weapon system is.”

He says Australia is ‘infrastructurally complicit’ in the development of such technologies through its partnerships with the US military, and could push other countries to be more accountable.

“We might claim that there’s little we can do to influence the main players in this space, like the US, China and Russia. Nevertheless, Australia is part of that world system.

“While it’s happening ‘over there’ in terms of kinetic violence, we’re an integral part of this,” he says. “The kinds of methods of surveillance and control [that] start in military spaces often expand elsewhere, and we’ve certainly seen that since 9/11 with surveillance in particular.”

Dr Richardson believes the widespread public opposition to nuclear weapons, as well as to facial recognition software, shows that it is still possible to slow the development of lethal autonomous weapons.

“We’ve gone from widespread adoption and development of facial recognition technologies to major tech companies undertaking a moratorium on their development. That’s a result of sustained public pressure, but also critical pressure from academics and from advocacy groups, who have shown … they’re deeply problematic and result in greater injustice rather than more justice and safety.

“If we can create that push around facial recognition, perhaps we can also push against lethal autonomous weapons. They rely on similar kinds of technologies of automatic detection and recognition, but make far more disastrous choices than whether a person gets a job or not, or whether they’re allowed inside a venue or not.

“They might be choices over life and death.”