While the effect of biased models can sometimes be seen from the outside, that's not always the case. What types of biases exist in models claiming to associate facial characteristics with criminality or homosexuality?
We don’t know, because the models are hidden in the black box.
However, when AI's boosters tell us that machines can decide who should be arrested, and police forces, for example, believe them, it's important that we understand the ethics behind the models.
In her 2017 Boyer Lectures, Professor Genevieve Bell, arguably the world’s most prominent technology anthropologist, said “most new technology is accompanied by utopian and dystopian narratives”.
The narratives on either side rarely come true as we expect, partly because technology isn’t the faceless force we imagine it to be. People are still behind technology in some capacity – whether creating it, using it, or establishing the regulations that govern it.
Genevieve was talking about fears of a "robot apocalypse" when she connected technology to ethical debates, but given that the capabilities of future robots will be built on AI, her words are worth quoting here.
“We need to invest in hard conversations that tackle the ethics, morality and underlying cultural philosophy of these new digital technologies”, said Genevieve in her final Boyer lecture.
“I’d argue that in the future we’re building, there’s little dignity in a life that’s shaped by algorithms, in which you have no say, and into which you have no visibility.”
Whatever ethics the machines inherit will come from us. It’s our task to defeat our own biases to ensure the machines don’t inherit them. And it’s our responsibility to take control and demand transparency and accountability.
What can go wrong with AI?