The History of Safety Regulation is Written in Blood
Since I last discussed self-driving cars, an Uber vehicle was involved in a fatal accident in Tempe, Arizona.
The video released by local police makes it clear that the driver wasn't looking at the road when the accident occurred, suggesting driver inattention contributed to the crash.
The accident in Arizona has made it much harder to take an optimistic view of regulation, and of whether it can safely be left to evolve alongside the development of self-driving cars.
And if Ars Technica's report highlighting huge gaps in autonomous vehicle 'success' standards is correct, it doesn't matter who leads and who lags. What matters is that we're leaving the industry to itself, with too little regulation, free to make fatal mistakes.
We've all heard the saying that 'safety standards are written in blood', a reference to a reactive, rather than proactive, approach to safety regulation. So why is the world so determined to take the same fix-as-we-go attitude towards the safety standards of self-driving cars?
We know that autonomous vehicles are running pre-beta software and that trials are underway. We also know that governments have the power to regulate what happens on roads; it's one of the few forms of regulation that remains uncontroversial in the developed world.
In any case, Silicon Valley’s popular ‘fail fast’ doctrine cannot apply when roads are involved.
Even if autonomous vehicles are a safer alternative in the medium term, the industry must prioritise safety standards.
Regulation supposedly inhibits innovation, but an outright ban on street trials of self-driving cars, the likely response to a public backlash, would not merely inhibit innovation; it would stifle it for years.
Yet there is plenty of innovation in one of the most heavily regulated industries in the world: aviation. Perhaps the autonomous vehicle industry could look to that model, and avoid a crippling crisis of public confidence.
The Facebook debate
I expected an abrupt change of topic at this point, but it turns out not to be one. Facebook and Cambridge Analytica have raised a debate of a different kind.
In the face of such vast malfeasance and betrayal of trust, it feels inadequate to focus on any one thing. However, the quality and transparency of AI models is at least a start.
Facebook should be transparent about the data it collects and how its models work. That transparency must also satisfy a public who, currently, are only dimly aware of what's going on. That's not going to be easy.
The same public has, after all, cheerfully okayed the most horrifying permissions when connecting to Facebook from Android phones.
Whatever defence those involved may offer, it's clear that exploitative behaviour is the norm, and it affects even people who have never touched Facebook themselves.
An industry that considers it okay to drag in the call metadata of third parties who have never agreed to Facebook's terms of service can no longer claim 'innovation immunity' from regulation. Google, as the enabler (by creating a privacy model that remains overwhelmingly take-it-or-leave-it), is in the same position: there's no argument left. If Google won't implement strong privacy and transparency (not just 'send around a press release', but actually implement it), its moral authority won't save it.
Did I mention that Google's former Australian managing director, now a bank executive, thinks Australians should hand over their Medicare data to startups so they can build apps? She did. You can't make this up.
About the author: Shara Evans is recognized as one of the world’s top female futurists. She’s a media commentator, strategy adviser, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.