The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.
Lethal autonomous weapons systems demand careful consideration, but nightmare scenarios of the future won't become reality anytime soon, says a UNSW Canberra military ethicist.
Like atomic bombs and chemical and biological weapons, deadly drones that make their own decisions must be tightly controlled by an international treaty.
A who’s who of CEOs, engineers and scientists from the technology industry has signed a global pledge – co-organised by UNSW’s Toby Walsh – to oppose lethal autonomous weapons.
An open letter from 116 tech leaders in 26 countries urges the United Nations not to open the 'Pandora's box' of lethal robot weapons.
Autonomous weapons have moved from science fiction to become a clear and present danger. But there is still time to stop them.
Attempts to define and prohibit autonomous weapon systems may hold back weaponry that could significantly improve international security, warns Jai Galliott.
Artificial intelligence expert and 'accidental activist' Toby Walsh will address the United Nations in New York this week to call for a ban on autonomous weapons.
The open letter signed by more than 12,000 prominent people calling for a ban on artificially intelligent killer robots is misguided and perhaps even reckless, writes Jai Galliott.