OPINION: Leading researchers in robotics and artificial intelligence (AI) from Australia and Canada have today published open letters calling on their respective Prime Ministers to take a stand against weaponising AI.

The letters ask that Australia and Canada be the next countries to call for a ban on lethal autonomous weapons at the upcoming United Nations (UN) disarmament conference, the strangely named Conference on the Convention on Certain Conventional Weapons (CCW), to be held in Geneva later this month.

To date, 19 countries have called for a pre-emptive ban on autonomous weapons: Algeria, Argentina, Bolivia, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Guatemala, Holy See, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Venezuela and Zimbabwe.

Before Terminator

Lethal autonomous weapons are often described as “killer robots”. This conjures up a misleading picture in most people’s minds.

We’re not talking about a movie-style Terminator, but about much simpler technologies that are potentially only a few years away. Think of a Predator drone in the skies above Iraq, but with the human pilot replaced by a computer. That computer would make the final life-or-death decision about whether to fire its Hellfire missile.

I’m most worried not about smart AI but about stupid AI. We will be giving machines the right to make life-or-death decisions, yet current technology is not capable of making such decisions correctly.

In the longer term, autonomous weapons will become more capable. But my concern then shifts to how such weapons will destabilise the geopolitical order and ultimately become another weapon of mass destruction.

The Australian letter was released simultaneously with one signed by hundreds of AI experts in Canada, including Geoffrey Hinton and Yoshua Bengio, two pioneers of deep learning. The Canadian letter urges Prime Minister Justin Trudeau to support such a ban.

In the interest of full disclosure, I organised the Australian letter. It is signed by a dozen or so Deans and Heads of Schools, as well as dozens of professors of AI and robotics. In total, 122 faculty members working in AI and robotics in Australia have signed the letter.

The letter says lethal autonomous weapons lacking meaningful human control sit on the wrong side of a clear moral line. It adds:

To this end, we ask Australia to announce its support for the call to ban lethal autonomous weapons systems at the upcoming UN Conference on CCW. Australia should also commit to working with other states to conclude a new international agreement that achieves this objective.

In this way, our government can reclaim its position of moral leadership on the world stage as demonstrated previously in other areas like the non-proliferation of nuclear weapons.

Australia’s recent election to the UN Human Rights Council makes the issue of lethal autonomous weapons even more pressing for the country to address.

Support is growing

The AI and robotics communities have sent a clear and consistent message on this issue over the past few years. In 2015, thousands of AI and robotics researchers from around the world signed an open letter calling for a ban, released at the opening of the main international AI conference.

Most recently, industry joined the call: in August this year, more than 100 founders of AI and robotics companies warned of opening “the Pandora’s box” and asked the UN to take urgent action.

The UN is listening and taking action, though, like all things diplomatic, progress is not rapid. In December 2016, after three years of informal talks, the UN decided to begin formal discussions within a Group of Governmental Experts. As the name suggests, this is a group of technical, legal and political experts, chosen by the member states, whose recommendations about autonomous weapons could contribute to a treaty banning their use, though the group itself cannot negotiate one.

This group meets for the first time in Geneva next Monday. It will discuss questions such as whether autonomous weapons must always be under “meaningful human control”, and what that would mean in practice.

An AI arms race

The international non-governmental organisation Human Rights Watch has invited me to the meeting, where I will speak about the dangers of failing to ban autonomous weapons. Without a ban, there will be an arms race to develop increasingly capable autonomous weapons.

This has rightly been described as the third revolution in warfare. The first revolution was gunpowder; the second, nuclear weapons. This third revolution would be another step change in the speed and efficiency with which we could kill.

For these will be weapons of mass destruction: one programmer will be able to control a whole army. Every other weapon of mass destruction has been banned or is in the process of being banned. Chemical and biological weapons are prohibited, and a UN treaty banning nuclear weapons, adopted earlier this year, will enter into force once 50 states have ratified it. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.

We cannot stop AI technology being developed, and it will be used for many peaceful purposes, such as autonomous cars. But we can make it morally unacceptable to use that technology to kill, just as we have decided with chemical and biological weapons.

This, I hope, will make the world a safer and better place.

Toby Walsh is Scientia Professor of Artificial Intelligence at UNSW, Guest Professor at TU Berlin, and leader of the Algorithmic Decision Theory group at Data61, Australia's Centre of Excellence for ICT Research.

This article was originally published on The Conversation.