AI Ethics
Ask a search engine what the world has to say about AI ethics, and you're guaranteed a plethora of scholarly articles, blog posts, magazine features, and images—some animated, illustrating the ‘trolley problem’.
Transparency and consent are necessary components of ethical data handling, but what about respect? On Monday morning, Australia's Digital Health Agency began defending the security of the Government's My Health Record system, but by midday the agency was concerned with more immediate matters: users inundating its online and telephone systems.
We’ve all heard the saying, ‘safety standards are written in blood’, a reference to a reactive, rather than proactive, approach to safety regulation. So why is the world so determined to take the same fix-as-we-go approach to the safety standards of self-driving cars?
We’re already at risk of having our personal information used against us, while the collection and cross-indexing of our data expands year-on-year. We need to elevate privacy and data protection to the political sphere and keep it there.
Driverless vehicles remain one of the tech industry's favourite futuristic scenarios, fuelling a daily run of announcements, partnerships and promises. Since 2018 began, the IT industry has watched, in horror, the slow-motion train wreck of Meltdown/Spectre vulnerabilities. Why? Because these are hardware bugs and they’re harder to deal with than a slip in a C++ library. What does this mean for driverless cars?
It almost reads like a choose-your-own-adventure. You can choose between a future in which robots are pole dancers for men to ogle, or a future where sexbots are hacked to kill their owners.
The most current and familiar examples of adversarial AI are attacks on vision systems. These systems are becoming more pervasive—facial recognition unlocks computers or phones, opens doors, and is a key tool of the surveillance state. It’s no surprise there's a lot of work going on to attack the models.
“In a perfect future, our AI virtual assistant will know what we're doing, where we're going and, most importantly, what we're saying,” wrote Computerworld's Mike Elgan in his article, Wanted: World where virtual assistants help without being asked. Thankfully, this dystopia is still a long way off, but Elgan’s words perfectly articulate the industry vision. His article goes on to discuss the obvious issues around privacy, acknowledging that the ‘public isn't ready to be spied on all day by the companies that make virtual assistants.’
Stories of the rise of AI are all around us. The machine can beat a master at Go, and Facebook’s shopping smarts are so good that people believe the company is listening to their microphones. And AI will soon be getting closer to you, much closer. Qualcomm, to name just one, has designs to make its chipsets powerful enough to put machine learning into phones. But as AI capabilities race ahead, we’re seeing more and more stories about what can go wrong.
3D printing has come a long way from the days of printing plastic prototypes. Today, multi-material additive printing using a range of materials, including metals, is becoming commonplace. What you may not be aware of are the rapid advances in the bio-fabrication of human tissues using 3D printing techniques. In this Future Tech video interview, futurist Shara Evans speaks with Danny Cabrera, co-founder and CEO of BioBots — a US start-up that sits at the intersection of computer science, biology and chemistry. They've designed 3D bio-printers and bio-inks that are unleashing a bio-fabrication revolution.