Digital Assistants
“In a perfect future, our AI virtual assistant will know what we’re doing, where we’re going and, most importantly, what we’re saying”
wrote Computerworld’s Mike Elgan in his article, Wanted: World where virtual assistants help without being asked.
Thankfully, this dystopia is still a long way off, but Elgan’s words perfectly articulate the industry vision. His article goes on to discuss the obvious issues around privacy, acknowledging that the ‘public isn’t ready to be spied on all day by the companies that make virtual assistants.’
There is, however, a question more important than public acceptance: Can Google, Apple, Microsoft, Facebook, Amazon and their rivals convince us that a constant, omniscient companion is in our best interests?
Your life for sale
It’s no longer unjustified to worry about the depth of data the hyperscale tech companies collect. Facebook’s inferences are so uncanny that the company has been accused of listening to users via smartphone microphones — an accusation Facebook denies.
Dangerous undercurrents are also emerging from Facebook’s comprehensive analytics, such as overriding users’ attempts to obscure their identities. Sex workers have accused Facebook of linking their real identities to their working identities, putting their lives at risk.
Facebook’s ‘Like’ button is now a universal online feature — news, shopping, social, ratings or maps — it’s everywhere. And it carries a cookie that tells Facebook where users go, even if they never interact with the button. It’s not hard to imagine how ‘Shara Evans, futurist’ could be associated with ‘Shara Evans, secret Antifa hacker’ (a made-up association).
Adding an all-listening digital assistant with extensive permissions will expand the data and multiply the risks, far beyond exchanging personal information for a free service — the original premise of a decade ago.
Advertisers are a risk multiplier: the more data they can access from each platform (excluding Apple, which promises not to sell data), the less privacy the user retains, and the greater the risk the data will be leaked or breached.
And the rich, intimate data collected by an always-listening, all-analysing digital assistant will be irresistible to governments, already fighting against users’ right to encryption.
If you ask Siri or Google a question, most countries treat your query as ‘beyond the scope of metadata retention’. The non-content chatter, however, is caught: when you speak, your phone seeks out the IP address of the service and initiates the session. Your phone’s location is also considered metadata.
It’s a safe bet that somewhere in the world, someone is researching the legal implications of what might be inferred by your interactions with a digital assistant.
Privacy protection is inadequate and internationally fragmented — regardless of who is trying to access your data. Can Australians, for example, rely on National Privacy Principles that American platforms don’t recognise?
And there are also implications for businesses. If an always-listening digital assistant captures business conversations, which are then on-sold to third parties, who is responsible for leaking advance warning of mergers or acquisitions being contemplated in a boardroom? This issue goes well beyond the privacy of individuals, and it should be publicly discussed and debated now — before the genie is out of the bottle.
The “perfect future” rests on getting user privacy and security right from the start, not trying to recover them after things go wrong.
About the author: Shara Evans is recognized as one of the world’s top female futurists. She’s a media commentator, strategy adviser, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.