What can go wrong with AI
AI and machine learning are currently caught in a curious paradox: their rise and their fall are happening at the same time.
Stories of the rise of AI are all around us. A machine can beat a master at Go, and Facebook’s shopping smarts are so good that people believe the company is listening through their phones’ microphones, a claim Facebook has denied.
And AI will soon be getting closer to you, much closer. Qualcomm, to name just one company, plans to make its chipsets powerful enough to run machine learning on phones; in August it bought a Dutch start-up called ‘Scyfer’ to help achieve that goal.
The thread happening in parallel with this progression is far less positive. As AI capabilities race ahead, we’re seeing more and more stories about what can go wrong with it.
We quickly (and hilariously) learned how easy it is to game a system when its designers don’t control the inputs that feed its models.
In March 2016, it took only hours for users to learn how to game Microsoft’s social media experiment ‘Tay’.
Once users discovered that Tay’s models could be fed new inputs with the phrase “repeat after me”, an innocent teenage persona quickly became a foul-mouthed, right-wing fan of Hitler.
The Tay experiment looks like fun and games, but adversarial attacks against AI models, colloquially known as ‘salting’, are the subject of serious research. In the past month alone, we’ve learned that attackers can trick an AI into misclassifying a turtle as a rifle, and that image classifiers can be fooled by changing as little as a single pixel.
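Those attacks rely on different techniques, but the underlying idea, adding a small perturbation chosen to push a model’s output the wrong way, can be illustrated with a simple one-step gradient method (FGSM). The sketch below assumes PyTorch and a pretrained torchvision classifier; it illustrates the general approach rather than reconstructing the turtle or single-pixel attacks.

```python
# A minimal FGSM-style sketch: nudge every pixel a tiny step in the
# direction that increases the model's loss. Assumes PyTorch and
# torchvision >= 0.13 (for the `weights` argument).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel moves by at most epsilon, yet the predicted class can flip.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Hypothetical usage: `img` is a (1, 3, 224, 224) tensor scaled to [0, 1],
# `label` is its true ImageNet class, e.g. torch.tensor([281]).
# adversarial = fgsm_perturb(img, label)
```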
Racial and gender bias baked into older training corpora means new AIs are more likely to equate ‘white’ with ‘attractive’, associate ‘doctor’ with ‘male’, or misclassify black people as gorillas.
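Word-association bias of this kind is easy to probe. The sketch below, which assumes gensim’s downloader and the publicly available GloVe vectors, compares how strongly a few occupation words associate with ‘he’ versus ‘she’; the exact scores depend on the corpus the vectors were trained on.

```python
# A minimal sketch of probing pretrained word vectors for gendered
# associations; assumes gensim and its downloadable GloVe model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # trained on Wikipedia + Gigaword

for occupation in ["doctor", "nurse", "engineer", "receptionist"]:
    he = vectors.similarity(occupation, "he")
    she = vectors.similarity(occupation, "she")
    lean = "male" if he > she else "female"
    print(f"{occupation:>14}: leans {lean} (he={he:.2f}, she={she:.2f})")
```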
In its video on machine learning and human bias, Google describes three types of bias:
- Interaction bias—users teach the model their biases
- Latent bias—training data bias that goes unnoticed
- Selection bias—a given population is over-represented in the training data (a simple check is sketched below)
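Of the three, selection bias is perhaps the easiest to check for. The sketch below compares each group’s share of a training set against its share of a reference population; the group names and numbers are purely illustrative placeholders.

```python
# A minimal selection-bias check: compare each group's share of the
# training data with its share of a reference population.
# The groups and figures below are illustrative placeholders, not real data.
from collections import Counter

training_groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
reference_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

counts = Counter(training_groups)
total = sum(counts.values())
for group, expected in reference_share.items():
    observed = counts[group] / total
    flag = "over-represented" if observed > expected else "under-represented"
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population ({flag})")
```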
While the effects of biased models can sometimes be seen from the outside, that isn’t always the case. What biases lurk in models that claim to associate facial characteristics with criminality or homosexuality?
We don’t know, because the models are hidden inside a black box.
However, when AI’s boosters tell us that machines can decide who should be arrested, and police forces, for example, believe them, it’s important that we understand the ethics behind the models.
In her 2017 Boyer Lectures, Professor Genevieve Bell, arguably the world’s most prominent technology anthropologist, said “most new technology is accompanied by utopian and dystopian narratives”.
The narratives on either side rarely come true as we expect, partly because technology isn’t the faceless force we imagine it to be. People are still behind technology in some capacity – whether creating it, using it, or establishing the regulations that govern it.
Genevieve was talking about fears of a “robot apocalypse” when she connected technology to these ethical debates, but since the capabilities of future robots will be built on AI, her words are worth quoting here.
“We need to invest in hard conversations that tackle the ethics, morality and underlying cultural philosophy of these new digital technologies”, said Genevieve in her final Boyer lecture.
“I’d argue that in the future we’re building, there’s little dignity in a life that’s shaped by algorithms, in which you have no say, and into which you have no visibility.”
Whatever ethics the machines inherit will come from us. It’s our task to defeat our own biases to ensure the machines don’t inherit them. And it’s our responsibility to take control and demand transparency and accountability.
About the author: Shara Evans is recognized as one of the world’s top female futurists. She’s a media commentator, strategy adviser, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.