OPINION: Representatives of UN member states, human rights groups and academia gathered at the Palace of Nations in Geneva, Switzerland, last week for the third Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), otherwise known as artificially intelligent "killer robots".

The meeting, chaired for the second time by Ambassador Michael Biontino of Germany, was presented to participants as an opportunity to deepen understanding and discussion of these weapons, with a view to examining whether their rise can be accommodated within the existing Convention on Certain Conventional Weapons of 1980 or whether that treaty requires amendment. However, little real progress has been made toward defining the term "autonomous weapon system", let alone reaching consensus on any form of regulation.

Some delegations, including many from the developing world, stated that machines tasked with making life-and-death decisions without any human intervention would breach international humanitarian law, would be unethical, and might even pose a risk to humanity itself.

Others stressed that such systems do not currently exist and said that their governments have no intention of developing weapons systems of this nature.

The Australian delegation, along with the Canadian and the US delegations, simply reaffirmed their commitment to the existing legal framework for reviewing new weapons under Article 36 of Additional Protocol I of 1977 to the Geneva Conventions.

With varying degrees of diplomatic precision, they stated that they support and adhere to the obligation to undertake a review of any proposed new weapon, means or method of warfare to determine whether international humanitarian law or other applicable international law would, in some or all circumstances, prohibit its employment.

In other words, some groups strongly advocate a moratorium on the use of machines that can kill without human intervention; some believe that such systems may never exist; and others seem unclear as to whether certain current weapons might come to fall under any potential moratorium. Consider "close-in weapons systems": computer-controlled, radar-guided guns that can automatically fire thousands of rounds a minute at short-range targets in both defensive and offensive modes. We are assured these weapons are not within the scope of concern because a human currently oversees the moment of lethal action, yet that oversight is not strictly necessary.

Many delegations also underlined the dual-use character of the technology necessary for the development of autonomous weaponry and stressed the benefits of autonomous technologies in the civilian sphere. Delegations referred to the important contributions by civil society organisations, industry, researchers and scientific organisations to understanding the technical and legal challenges posed by LAWS.

The logic here is clear: nobody wants to hold back driverless cars that could one day spare us the monotony of the daily commute. Sadly, the logic is not as clear in the military case.

Much of the problem is that the Campaign to Stop Killer Robots, which popularised the debate in the news media and consists of human rights groups such as Article 36 and the International Committee for Robot Arms Control, used alarmist language in bringing the issue to the forefront. Now that it has succeeded in having the UN debate the issue, it has had to hedge its claims, resulting in a good deal of confusion as to the nature and technical capabilities of autonomous weaponry.

The UN and its Institute for Disarmament Research have also allowed members of these groups to dominate the thought process in previous years, letting humanitarian concerns trump legitimate discussion about how such concerns should be balanced with international security and skewing the debate in favour of new international regulation that could curtail the development of weapons that promise to reduce physical and psychological injury to our troops and to be more effective and efficient on the battlefield.

The worry is that, in the absence of clarity, the view that it is better to do something than nothing will prevail, possibly to the detriment of advanced states. Many overestimate autonomous weapons, ascribing to them near-human capabilities, and worry about the absence of meaningful human control that supposedly comes with artificial intelligence.

Such critics fail to recognise that while we can model the brain and decision-making to the point that these systems can mimic humans and find their own solutions for taking lethal action, sometimes in ways that appear superficially "unpredictable", humans remain very much in meaningful control of the process, even if they are not present at the moment lethal action is executed.

Indeed, it would be preposterous to overlook the role of programmers, engineers and others involved in building and maintaining these autonomous systems. No matter how "intelligent" they may become or how well they "learn", machines will always lack consciousness and genuine human-level decision-making capability, meaning that robots will never take lethal action entirely outside of human control, and that humans, and their interactions with machines, should always be the focus of our attention.

Attempts to define "autonomy" and "meaningful human control" in terms conducive to a complete prohibition on "killer robots", or to marking some particular threshold, might distract nations from taking local regulatory action and hold back weaponry of significant value in limiting the damaging human footprint of war and improving international security, especially when one contemplates its use in controlled environments away from civilians and civilian infrastructure.

Dr Jai Galliott is a defence analyst and expert on the ethical, legal and strategic implications of emerging technologies at UNSW.

This opinion piece was first published in the Sydney Morning Herald.