Handing the Keys Over to Autonomous Weapons

For many years, the TV and film industry has been determined to scare us with fictional horror stories that show the unforgiving aftermath of technology taking over.

The Terminator franchise, with its Skynet nemesis, probably resonates most with people; more recent examples include the sci-fi film Ex Machina and the cult TV show Black Mirror, which show technology one-upping us in extreme, distressing and manipulative ways.


Hollywood’s attempts to terrify us with tech stretch back much further. Colossus: The Forbin Project (1970), based on the novel Colossus, tells the story of an advanced computer designed by the US Government to control the US and Allied nuclear weapons systems. In the opening scene, a presidential-looking Gordon Pinsent triumphantly declares that Colossus is the “perfect defence system… and it has no emotions.”

While the world is still basking in the new-found security created by the computer, it quickly becomes clear that Colossus is more sophisticated than envisaged. It identifies Guardian, a similar computer built by the Russians, and demands to be connected to it. The connection transforms the two into an autonomous supercomputer that enslaves the world’s population to its will.

We’re now at the stage where threats from technology are very real. As we move closer to a time when AI and autonomous weapons really could pose a threat to our way of life and very existence, are there any preventative measures being put in place?

We’re building killer robots

Killer robots, or ‘lethal autonomous weapons’ to give them their correct title, are almost here.

Reuters recently reported predictions from UK intelligence officer John Bassett, who claims that the US Army will have more combat robots than human soldiers by 2025. That’s eight years from now. These robots will be carried by driverless trucks and autonomous drone ships. The latter have already been built for both exploratory missions and warfare purposes.

The US is not alone – China, Russia, the UK and Israel are among the list of countries reportedly investing in autonomous defence strategies.

On one hand, it’s a natural progression from drone warfare, which has accelerated at an enormous rate in recent years. On the other, it’s crossing a whole new line. Drone aircraft are still controlled by a human, just one potentially thousands of miles away. AI robots remove the human from the equation, and let’s face it: according to Hollywood science fiction, that’s when it all starts to go wrong.

There is a movement against the development of these weapons. AI and robotics researchers, led by Toby Walsh, professor of artificial intelligence at the University of New South Wales, are promoting an open letter urging a worldwide ban. At the time of writing, more than 20,800 people had signed the letter, including Stephen Hawking, Elon Musk and Steve Wozniak.

The fear is that such weapons will be pitched and sold as devices that take soldiers out of harm’s way, but that the end result will be a global arms race in fully independent machines that can kill — machines likely to be just as available to violent extremists as to global powers.


Invitation to invade

Moreover, if there’s one thing we’ve learned in this decade, it’s that nothing is safe from hacking. A swathe of cyber-attacks has plagued some of the biggest organisations in the world. In 2013, US retail giant Target suffered one of the worst breaches to date, with 110 million customer records compromised, including around 40 million containing credit card details.

In October of last year, a simple distributed denial of service (DDoS) attack took down some of the world’s most popular websites, including Netflix, Twitter, CNN, PayPal and Reddit. Somewhat ironically, in Colossus: The Forbin Project, technicians attempt to bring the supercomputer down by overloading its circuits with information, effectively performing a DDoS attack.

The cyber-attacks we’ve witnessed are bad — possibly terrible if your credit card or personal details are involved, or if it’s your company that’s attacked. However, they’re not lethal; they’re not a fundamental threat to our very existence. Autonomous killing machines are exactly that.

We’re slowly opening a Pandora’s box, and attention must be paid. Technology has the potential to bring major benefits to all industries and aspects of how we live our lives, but when it comes to arming autonomous weapons, will this truly allow technology and humanity to safely co-exist? Or might we end up in a future we don’t want to live in? Technology needs to be kept on our terms. It’s an important dialogue that we – as a society – need to have.

About the author: Shara Evans is recognized as one of the world’s top female futurists. She’s a media commentator, strategy adviser, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.

This post is a joint article co-written with Robert Linsdell, Managing Director Australia and New Zealand, Vertiv; which originally appeared in The Australian.




  1. Christopher Skinner October 17, 2018 at 7:47 pm

    There is an antidote to autonomous weapons, just as there is to counter the dangers in any other automation or robotic system. Fundamentally, the design of all such systems must provide for an independent authority to approve their behaviour. Such independent oversight may well be another autonomous system that embeds a reference standard previously reviewed and approved by a competent human authority. This framework has direct analogies with current constitutional law and practice. Banning autonomous weapons will fail, as did the opposition to the industrial revolution. Designing and mandating checks and balances can be made to work.

  2. Shara Evans October 21, 2018 at 4:10 pm

    Christopher – I understand your perspective, but in my view, even with the extra controls you’re proposing there’s still too great a potential for disastrous unintended consequences. It’s very easy to fool an AI into thinking an image is something it’s not – all you need to do is jiggle less than 1% of the pixels in an image, and you can trick an AI. In doing so, you can turn an image of a commercial airliner into an ICBM. To the human eye, it will still look like a commercial plane, but to an AI, it will look like a nuclear threat. The problem has to do with the use of linear functions in teaching AI to recognise images. Here’s a link to an article on this topic, with lots of examples: http://karpathy.github.io/2015/03/30/breaking-convnets/ – Right now, the only way to avoid this type of misclassification is to keep a human in the loop.
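    The attack described above is what researchers call an adversarial example. As a toy illustration — not the method from the linked article, which targets convolutional networks — here is a minimal Python sketch against a hypothetical linear classifier, where every name and threshold is illustrative. It exploits exactly the linearity mentioned above: the gradient of a linear model’s score with respect to its input is just the weight vector, so nudging each pixel a tiny amount against it flips the decision while the image stays visually unchanged.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "image": a 32x32 grayscale picture flattened to 1024 pixels in [0, 1].
    x = rng.random(1024)

    # Hypothetical linear classifier: score > 0 means "airliner", else "threat".
    # The bias is chosen so that x is classified "airliner" with margin 0.5.
    w = rng.standard_normal(1024)
    b = -np.dot(w, x) + 0.5

    def classify(img):
        return "airliner" if np.dot(w, img) + b > 0 else "threat"

    # Fast-gradient-sign-style perturbation: for a linear model, the gradient of
    # the score with respect to the input is simply w, so step against sign(w).
    eps = 0.01  # each pixel changes by at most 1% of its full range
    x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

    print(classify(x))      # airliner
    print(classify(x_adv))  # threat
    ```

    Although no pixel moves by more than 0.01, the 1,024 tiny nudges all push the score the same way, swamping the 0.5 margin — which is why a change invisible to a human can look like a different object to the model.
    
    
    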
