Ask a search engine what the world has to say about AI ethics, and you're guaranteed a plethora of scholarly articles, blog posts, magazine features, and images (some animated) illustrating the ‘trolley problem’.
We’ve all heard the saying, ‘safety standards are written in blood’, a reference to a reactive, rather than proactive, approach to safety regulation. So why is the world so willing to accept the same fix-as-we-go attitude towards safety standards for self-driving cars?
Driverless vehicles remain one of the tech industry's favourite futuristic scenarios, fuelling a daily run of announcements, partnerships and promises. Since 2018 began, the IT industry has watched, in horror, the slow-motion train wreck of the Meltdown and Spectre vulnerabilities. Why? Because these are hardware bugs, and they’re far harder to fix than a slip in a C++ library. What does this mean for driverless cars?
The most current and familiar examples of adversarial AI are attacks on vision systems. These systems are becoming more pervasive—facial recognition unlocks computers and phones, opens doors, and is a key tool of the surveillance state. It’s no surprise that there's a lot of work going into attacking the underlying models.
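To make the idea concrete, here is a minimal sketch of one well-known evasion technique, the Fast Gradient Sign Method (FGSM): nudge every input feature a small step in the direction that most increases the model's loss. The toy four-feature logistic classifier below (its weights are invented for illustration, not taken from any real vision system) stands in for an image model; real attacks apply the same gradient-sign step to every pixel of a photograph.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a tiny 4-feature classifier,
# standing in for a real vision model.
w = np.array([2.0, -1.0, 0.5, 1.5])
b = -0.5

def predict(x):
    """Model's probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y_true, epsilon=0.5):
    """One FGSM step: shift x by epsilon in the sign of the loss gradient.

    For logistic loss, d(loss)/dx = (p - y_true) * w, so the attack
    moves each feature in the direction sign((p - y_true) * w).
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + epsilon * np.sign(grad)

x = np.array([0.9, 0.1, 0.4, 0.8])   # clean input, confidently class 1
x_adv = fgsm_perturb(x, y_true=1.0)  # small per-feature perturbation

print(predict(x))      # high confidence on the clean input
print(predict(x_adv))  # confidence collapses after the perturbation
```

The unsettling property, and the reason these attacks matter for cameras on cars and door locks, is that the perturbation is bounded per feature (epsilon), so on a real image it can be small enough to be invisible to a human while still flipping the model's answer.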
“In a perfect future, our AI virtual assistant will know what we're doing, where we're going and, most importantly, what we're saying,” wrote Computerworld's Mike Elgan in his article, Wanted: World where virtual assistants help without being asked. Thankfully, this dystopia is still a long way off, but Elgan’s words perfectly articulate the industry vision. His article goes on to discuss the obvious issues around privacy, acknowledging that the ‘public isn't ready to be spied on all day by the companies that make virtual assistants.’
Stories of the rise of AI are all around us. A machine can beat a master at Go, and Facebook’s shopping smarts are so good that people believe the company is listening to them through their phones’ microphones. And AI will soon be getting closer to you, much closer. Qualcomm, to name just one company, plans to make its chipsets powerful enough to run machine learning on phones. But as AI capabilities race ahead, we’re seeing more and more stories about what can go wrong.