Australia has passed its disastrous "Assistance and Access" legislation - a political farce with terrible consequences for society and the technology industry - and I firmly believe this legislation is attempting the impossible.
To keep cities liveable as their populations grow, we need smart technologies such as sensors to help us plan and maintain them.
Ask a search engine what the world has to say about AI ethics, and you're guaranteed a plethora of scholarly articles, blog posts, magazine features, and images—some animated, illustrating the ‘trolley problem’.
Transparency and consent are necessary components of ethical data handling, but what about respect? On Monday morning, Australia's Digital Health Agency began defending the security of the Government's MyHealth Record system, but by midday the agency was concerned with more immediate matters: users inundating its online and telephone systems.
We’ve all heard the saying, ‘safety standards are written in blood’, in reference to a reactive, rather than proactive approach to safety regulations. So, why is the world so determined to see the same fix-as-we-go attitude towards the safety standards of self-driving cars?
We’re already at risk of having our personal information used against us, while the collection and cross-indexing of our data expands year-on-year. We need to elevate privacy and data protection to the political sphere and keep it there.
Driverless vehicles remain one of the tech industry's favourite futuristic scenarios, fuelling a daily run of announcements, partnerships and promises. Since 2018 began, the IT industry has watched, in horror, the slow-motion train wreck of the Meltdown/Spectre vulnerabilities. Why? Because these are hardware bugs, and they’re harder to deal with than a slip in a C++ library. What does this mean for driverless cars?
It almost reads like a choose-your-own-adventure. You can choose between a future in which robots are pole dancers for men to ogle, or a future where sexbots are hacked to kill their owners.
The most current and familiar examples of adversarial AI are attacks on vision systems. These systems are becoming more pervasive—facial recognition unlocks computers or phones, opens doors, and is a key tool of the surveillance state. It’s no surprise there's a lot of work going on to attack the models.
“In a perfect future, our AI virtual assistant will know what we're doing, where we're going and, most importantly, what we're saying,” wrote Computerworld's Mike Elgan in his article, Wanted: World where virtual assistants help without being asked. Thankfully, this dystopia is still a long way off, but Elgan’s words perfectly articulate the industry vision. His article goes on to discuss the obvious issues around privacy, acknowledging that the ‘public isn't ready to be spied on all day by the companies that make virtual assistants.’