AI Ethics

Ask a search engine what the world has to say about AI ethics, and you’re guaranteed a plethora of scholarly articles, blog posts, magazine features, and images—some animated, illustrating the ‘trolley problem’ (an ethics thought experiment).

There’s also clickbait constructed on the back of scare stories about bots becoming sentient—a conclusion that takes hold in both the tech press and more mainstream outlets because there’s little chance for outsiders to understand what’s going on inside the AI ‘black box’.

It got me wondering: what common threads might underlie the more obvious and popular questions relating to AI ethics?

If there are such things as fundamental questions in AI ethics, I’d like to offer transparency as a candidate: do you understand how your AI code (using massive deep-learning datasets) reaches a decision, and can you explain it to the average person well enough that they can participate in decisions about its role and implementation?

If you’d like that expressed in more scholarly language, Milo Honegger has a paper on arXiv which says, succinctly: “…the necessity for plausibility and verifiability of predictions made by these black boxes is indispensable”.
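To show what even a basic ‘explanation’ looks like in practice, here’s a minimal, purely illustrative sketch in Python (my example, not Honegger’s). It trains a stand-in black-box model on synthetic data, then probes it with permutation importance, a common technique that ranks which inputs the model leans on. Every name and number in it is an assumption for illustration.

```python
# A purely illustrative sketch: a stand-in "black box" probed with
# permutation importance. All data, models, and numbers are made up.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data and an opaque model whose internals are hard to narrate.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each input in turn and measure how much accuracy drops:
# a ranking of influences, not a human-readable explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Notice what this buys you: a ranked list of anonymous feature numbers. That’s a diagnostic for specialists, not something the average person could use to weigh in on whether the system should be deployed.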

Take, for example, this list of AI ethical issues published by the World Economic Forum. It covers unemployment, inequality, post-AI human behaviour, errors, bias, security, unforeseen consequences, human control, and ‘robot rights’.

All of these are touched upon, if only obliquely, by the question of transparency—our understanding of what the AI is doing.

Does an AI owner have the right to offer “technology to replace humans” without understanding what’s going on in the black box? Is it right to entrust human life (for example, in an autonomous vehicle) to an algorithm whose behaviour and operation can’t be explained?

There’s a similar debate around the relationship between computers and elections—a handy (and easily explained) example of the need to understand what’s going on inside black boxes.

In a paper-based election system like Australia’s, participation is available to most people, on most aspects of the vote. Beyond merely filling in the ballot, most citizens can take part in the processes of operating polling places, safely collecting the ballots, counting the votes, scrutineering (making sure the count is correct), and so on. They can participate, because it’s a simple system—simple enough to explain to everyone, regardless of their level of involvement.

The same can’t be said of even the best computerised elections, because only experts can hope to explain what’s inside the black box. Even when the black box isn’t exploited to manipulate elections, electors are excluded from most of the process. They’re certainly excluded from understanding the process.

There’s plenty of evidence to suggest that our understanding of what’s inside AI’s black box, and our ability to explain it to those affected, fall short of any reasonable ethical standard.

In autonomous vehicles, for example, there’s an active debate about a car’s responsibility to its owner: does it risk killing the owner to save five pedestrians? Will different auto manufacturers apply different rules? Who’s responsible if someone is hurt or killed – the “driver” (who may not have a steering wheel or any way of controlling the vehicle), the car owner, the manufacturer, the software engineer who wrote the code? Will some people be deemed “more valuable” than others, and therefore be given priority when it comes to unavoidable accidents? Who will make these decisions? And how will the general population participate in this process?
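These questions can’t stay abstract, because any crash-priority ‘policy’ eventually has to be written down as code by a specific engineer. Here’s a purely hypothetical sketch in Python; no manufacturer’s actual logic is known or implied.

```python
# Purely hypothetical: one possible crash-priority rule, invented for
# illustration. No manufacturer's real logic is known or implied.
from dataclasses import dataclass

@dataclass
class Outcome:
    occupants_at_risk: int
    pedestrians_at_risk: int

def choose_manoeuvre(swerve: Outcome, brake: Outcome) -> str:
    """One possible rule: minimise total people at risk; ties favour braking."""
    swerve_total = swerve.occupants_at_risk + swerve.pedestrians_at_risk
    brake_total = brake.occupants_at_risk + brake.pedestrians_at_risk
    return "swerve" if swerve_total < brake_total else "brake"

# Risk one occupant by swerving, or five pedestrians by braking straight on?
print(choose_manoeuvre(Outcome(1, 0), Outcome(0, 5)))  # prints "swerve"
```

The point isn’t this particular rule. It’s that some rule like it must exist in the shipped software, a rival firm could encode a different one, and today nobody outside the manufacturer can see which rule their car runs.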

If the decision isn’t transparent and, to quote Honegger again, “plausible and verifiable”, then an AI has no right to make that decision.

You can at least argue that a strong ethical position says AI cannot make this decision if its operation isn’t understood, and if its decisions can’t be explained by its authors/owners. And without those two critical characteristics, citizens cannot participate in decisions which affect their lives, their jobs, and ultimately their very participation in society.

About the author: Shara Evans is recognized as one of the world’s top female futurists. She’s a media commentator, strategy adviser, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.
