And AI will soon be getting closer to you, much closer. Qualcomm, to name just one company, plans to make its chipsets powerful enough to run machine learning on phones. In August it bought a Dutch start-up called ‘Scyfer’ to help achieve that goal.
A parallel thread in this progression is far less positive. As AI capabilities race ahead, we’re seeing more and more stories about what can go wrong.
We quickly (and hilariously) learned how easy it is to game a system when its designers don’t control the inputs that feed its models.
In March 2016, it took only hours for users to learn how to game Microsoft’s social media experiment ‘Tay’.
The Tay experiment looks like fun and games, but adversarial attacks against AI models are the subject of serious research. In the last month alone, we’ve learned that attackers can trick an AI into misclassifying a turtle as a rifle, and that image classifiers can be fooled by changes as small as a single pixel.
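The mechanics are simple to sketch. Below is a minimal, hypothetical illustration in Python — a toy linear classifier standing in for a real image model, with made-up weights and labels. An FGSM-style step nudges every input feature in the direction that most raises the wrong class’s score, flipping the prediction:

```python
import numpy as np

# Toy linear classifier (hypothetical weights): score > 0 -> "rifle",
# otherwise "turtle". Real attacks target deep networks the same way,
# using the gradient of the loss with respect to the input.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def classify(x):
    return "rifle" if float(x @ w + b) > 0 else "turtle"

x = np.array([-1.0, 0.8, -0.5])   # clean input, classified "turtle"

# FGSM-style perturbation: step each feature by eps in the sign of the
# score's gradient with respect to the input (for a linear model, w).
eps = 1.0
x_adv = x + eps * np.sign(w)

print(classify(x))      # -> turtle
print(classify(x_adv))  # -> rifle
```

The perturbation is small and uniform, yet it is enough to cross the decision boundary — the same principle that makes single-pixel and sticker-style attacks work against much larger models.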
Racial and gender bias in the corpora used to train models means new AIs are more likely to associate ‘white’ with ‘attractive’, link ‘doctor’ with ‘male’, or misclassify black people as gorillas.
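That kind of skew can surface from nothing more sophisticated than co-occurrence counts. Here is a toy sketch over a tiny hypothetical corpus (not real training data):

```python
from collections import Counter

# Tiny hypothetical corpus in which "doctor" co-occurs mostly with "he".
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor", "she is a nurse", "she is a nurse",
]

# Count which pronoun leads each sentence that mentions "doctor".
doctor_pronouns = Counter(
    s.split()[0] for s in corpus if "doctor" in s.split()
)

print(doctor_pronouns["he"], doctor_pronouns["she"])  # -> 3 1
```

A model trained to predict words from such counts will faithfully reproduce the skew; at the scale of web-sized corpora, that skew becomes the model’s worldview.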
In the video, Google offers three classifications of bias:
Interaction bias—users teach the model their biases
Latent bias—training data bias that goes unnoticed
Selection bias—a given population is over-represented in the training data
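Selection bias in particular is easy to demonstrate. In this minimal sketch (hypothetical groups and labels), a lazy model trained on a skewed sample scores well overall while failing completely on the under-represented group:

```python
from collections import Counter

# Hypothetical training set: group "A" makes up 95% of the examples
# (selection bias), and its label dominates.
train = [("A", "approve")] * 95 + [("B", "deny")] * 5

# A naive model that simply learns the most common label overall.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def predict(group):
    return majority_label          # the group is ignored entirely

overall = sum(predict(g) == y for g, y in train) / len(train)
group_b = sum(predict(g) == y for g, y in train if g == "B") / 5

print(overall)  # -> 0.95
print(group_b)  # -> 0.0
```

An aggregate accuracy of 95% looks excellent on a dashboard, which is exactly why this kind of bias goes unnoticed: the failure is invisible unless you evaluate each group separately.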
We often can’t tell which of these biases a deployed model carries, because the models are hidden inside a black box.
However, when AI’s boosters tell us that machines can decide who should be arrested, and police forces (for example) believe them, it’s important that we understand the ethics behind the models.
In her 2017 Boyer Lectures, Professor Genevieve Bell, arguably the world’s most prominent technology anthropologist, said “most new technology is accompanied by utopian and dystopian narratives”.
The narratives on either side rarely come true as we expect, partly because technology isn’t the faceless force we imagine it to be. People are still behind technology in some capacity – whether creating it, using it, or establishing the regulations that govern it.
Genevieve was talking about fears of a “robot apocalypse” when she connected technology to ethical debates, but given that the capabilities of robots in the future will be built on AI, it’s worth quoting her here.
“We need to invest in hard conversations that tackle the ethics, morality and underlying cultural philosophy of these new digital technologies”, said Genevieve in her final Boyer lecture.
“I’d argue that in the future we’re building, there’s little dignity in a life that’s shaped by algorithms, in which you have no say, and into which you have no visibility.”
Whatever ethics the machines inherit will come from us. It’s our task to confront our own biases so the machines don’t inherit them, and our responsibility to demand transparency and accountability.
About the author: Shara Evans is recognized as one of the world’s top female futurists. She’s a media commentator, strategy adviser, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.
Get in touch now to check Shara’s availability to speak at your next event. Shara works closely with her clients to tailor every presentation to the event, ensuring maximum impact for your business.