If you’d like that expressed in more scholarly language, Milo Honegger has a paper on arXiv which puts it succinctly: “…the necessity for plausibility and verifiability of predictions made by these black boxes is indispensable”.
Take, for example, this list of AI ethical issues published by the World Economic Forum. It covers unemployment, inequality, post-AI human behaviour, errors, bias, security, unforeseen consequences, human control, and ‘robot rights’.
All of these are touched upon, if only obliquely, by the question of transparency—our understanding of what the AI is doing.
Does an AI owner have the right to offer “technology to replace humans” without understanding what’s going on inside the black box? Is it right to entrust human life (for example, in an autonomous vehicle) to an algorithm whose behaviour and operation can’t be explained?
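To make that opacity concrete, here’s a minimal Python sketch. It’s purely illustrative (the synthetic data, the random-forest model, and the perturbation size are my assumptions for this example, not anything from a real system): an accurate but opaque model tells us *what* it predicts without telling us *why*, and the best an outsider can do is probe it from the outside by nudging inputs and watching the answer move—roughly the idea behind explanation tools like LIME.

```python
# Illustrative sketch of the black-box problem. Assumes numpy and
# scikit-learn are installed; data and model are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train an opaque ensemble on synthetic data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The model answers "what" but not "why": a single probability,
# produced by hundreds of trees no one reads.
x = X[0]
base = model.predict_proba([x])[0][1]
print(f"P(class 1) = {base:.3f}")

# Crude local probe: nudge one feature at a time and see how the
# prediction shifts. This approximates an explanation from outside
# the box; it is not the model explaining itself.
for i in range(X.shape[1]):
    x_nudged = x.copy()
    x_nudged[i] += 0.5  # arbitrary perturbation, chosen for illustration
    delta = model.predict_proba([x_nudged])[0][1] - base
    print(f"feature {i}: prediction shifts by {delta:+.3f}")
```

Even this probe only tells us how the model behaves near one input; it says nothing about how it will behave elsewhere, which is exactly the verifiability gap Honegger describes.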
There’s a similar debate around the relationship between computers and elections—a handy (and easily explained) example of the need to understand what’s going on inside the boxes.
In a paper-based election system like Australia’s, participation is open to most people, across most aspects of the vote. Beyond merely filling in the ballot, most citizens can take part in operating polling places, safely collecting the ballots, counting the votes, scrutineering (making sure the count is correct), and so on. They can participate because it’s a simple system—simple enough to explain to everyone, regardless of their level of involvement.
The same can’t be said of even the best computer elections, because only experts can hope to explain what’s inside the black box. Even when the black box isn’t exploited to manipulate elections, electors are excluded from most of the process. They’re certainly excluded from understanding it.