“Deep Fake” Pedestrians: Real Safety For Autonomous Vehicles
Spotting pedestrians is one of the smart city’s most difficult and important problems. To keep people safe in a world of autonomous cars and robots, systems must be able to tell the difference between “Bruce” and a vertical post.
If an autonomous vehicle can identify and react to a pedestrian from far away, the ongoing “trolley problem” might be solved: the difficult decision between protecting the driver or countless pedestrians in an accident would be void. However, it still wouldn’t solve the ethical issues involved in deciding who might get hurt in an unavoidable multi-vehicle collision.
Researching possible solutions to the issue of early detection, a group from Northumbria University and Imam Mohammed ibn Saud Islamic University in Saudi Arabia have delivered a brief academic paper — Deep Learning based Pedestrian Detection at Distance in Smart Cities. The researchers believe they can vastly improve pedestrian detection with a combination of deep learning, the emerging discipline of adversarial networks and the right kind of detector.
The problem of pedestrian detection became sharply real in 2018, when a car taking part in Uber’s self-driving vehicle trial collided with, and killed, a pedestrian in Tempe, Arizona. As The Guardian explained in May 2018, America’s National Transportation Safety Board believed Uber’s vehicle detected the pedestrian, but did not brake or swerve because its systems were tuned to avoid false positives.
It’s not just Uber that has reported fatalities during autopilot testing. A blog post on Tesla’s website shares accident statistics that state: ‘there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with autopilot hardware’.
Locating pedestrians at a distance is a problem that carries the highest stakes. So, how do we do it?
The paper illustrates an interesting development in AI, one that attracts headlines mostly for its misuse — “generative adversarial networks”.
I won’t bog you down in heavy technical details, but adversarial models have been interesting ever since they first gained public attention as a way to attack machine learning systems. As this Wikipedia entry explains, conventional machine learning systems assume a “stationary and benign” environment with predictable statistical distribution.
Feed a static system the right malicious input, and it produces false results.
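To make the attack concrete, here is a minimal sketch in NumPy. The model, its weights, and the numbers are all made up for illustration, but the principle is the one behind real “fast gradient sign” attacks: for a linear scorer, nudging each input feature against the sign of its weight is enough to flip a confident prediction.

```python
import numpy as np

# A tiny linear classifier with fixed, made-up weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return 1 if the model scores the input as positive, else 0."""
    return int(w @ x + b > 0)

# A legitimate input the model classifies as positive.
x = np.array([2.0, 0.5, 1.0])
print(predict(x))  # -> 1

# Fast-gradient-style attack: step each feature in the direction that
# most decreases the score. For a linear model, the gradient of the
# score with respect to the input is just w, so we step against sign(w).
epsilon = 1.5
x_adv = x - epsilon * np.sign(w)

# A small, targeted perturbation flips the prediction.
print(predict(x_adv))  # -> 0
```

The same idea scales up to deep networks, where the gradient is computed by backpropagation rather than read straight off the weights.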
“Generative adversarial networks” (GANs) add a feedback ability that helps improve the adversarial performance. Two neural networks compete with each other, one trying to create candidates (such as fake images) that will fool the other into accepting its work as genuine. This approach has been around for decades — Wikipedia dates the earliest theory back to 1990.
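The two-network competition boils down to an alternating training loop: one step improves the discriminator’s ability to tell real from fake, the next step improves the generator’s ability to fool it. The runnable caricature below (all hyperparameters and the one-parameter “networks” are illustrative, not anything from the paper) pits a linear generator g(z) = a·z + c against a logistic discriminator, with the real data drawn from a Gaussian centred at 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: g(z) = a*z + c (a toy stand-in for a neural network).
# Discriminator: D(x) = sigmoid(u*x + v), estimating P(x is real).
a, c = 1.0, 0.0          # generator parameters
u, v = 0.0, 0.0          # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(size=32)
    real = rng.normal(loc=3.0, size=32)   # "real" training data
    fake = a * z + c                       # generator's candidates

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    d_real = sigmoid(u * real + v)
    d_fake = sigmoid(u * fake + v)
    u -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    v -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) -> 1, i.e. fool the discriminator.
    d_fake = sigmoid(u * fake + v)
    a -= lr * np.mean(-(1 - d_fake) * u * z)
    c -= lr * np.mean(-(1 - d_fake) * u)

# The generator's output drifts toward the real distribution's mean.
print(f"generator offset after training: {c:.2f}")
```

Real GANs replace both one-parameter functions with deep networks, but the adversarial back-and-forth is exactly this loop.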
However, the last 10 years have seen a huge increase in computer power. GANs have now entered the public imagination, mostly for their evil application as the origin of “Deep Fakes”.
The research presented in Deep Learning based Pedestrian Detection at Distance in Smart Cities shows GANs being used for good. The researchers use deep convolutional GANs (DCGANs) to generate realistic images of pedestrians, improving on the low-resolution images picked up from a distance by vehicle sensors.
The DCGAN puts the “evil” deep-fake technique to work in the service of good: it takes a low-resolution image of a distant pedestrian and enhances it, so that the vehicle’s systems can locate the pedestrian in the image.
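In rough outline, the pipeline is: enhance the low-resolution crop, then run the detector on the result. The sketch below uses plain nearest-neighbour upsampling as a stand-in for the trained DCGAN generator and a trivial pixel-count threshold as a stand-in for the detector; both function names and all the numbers are hypothetical, chosen only to show the shape of the idea.

```python
import numpy as np

def dcgan_enhance(low_res, scale=4):
    """Stand-in for the trained DCGAN generator. A real generator would
    hallucinate plausible detail; here we just nearest-neighbour upsample."""
    return np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)

def detect_pedestrian(image, min_pixels=64):
    """Stand-in for the single-shot detector: report a detection only if
    enough bright ("pedestrian") pixels are present to resolve a shape."""
    return int(np.sum(image > 0.5)) >= min_pixels

# A distant pedestrian occupies just a few pixels in the raw sensor frame.
low_res = np.zeros((8, 8))
low_res[2:6, 3:5] = 1.0          # 8 bright pixels: too few to detect

print(detect_pedestrian(low_res))                  # -> False
print(detect_pedestrian(dcgan_enhance(low_res)))   # -> True
```

The point of the GAN is precisely what the naive upsampler cannot do: invent plausible fine detail, so the detector sees something that actually looks like a person rather than a smeared blob.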
Assuming the research holds up to peer review, the approach works well: a single-shot detector that unaided identifies 35.5% of distant pedestrians could, with the DCGAN’s help, pick out distant pedestrians in 80.7% of the images presented to it.