Put away the fake glasses

In a previous post, I mentioned how pranksters “gamed” Microsoft’s experimental ‘Tay’ chatbot, turning it from its intended persona of an innocent teen into a foul-mouthed Nazi.

Tay was a very crude example of a bigger issue for machine learning and artificial intelligence: what’s called the ‘adversarial model’.

Put simply, adversarial models attempt to do what the Tay pranksters did, but more systematically and with greater sophistication.

And unlike the Tay prank, which merely embarrassed Microsoft, work on adversarial modelling is serious business that can undermine the security applications of AI.

Vision systems

The most current and familiar examples of adversarial AI are attacks on vision systems. These systems are becoming more pervasive—facial recognition unlocks computers or phones, opens doors, and is a key tool of the surveillance state. It’s no surprise there’s a lot of work going on to attack the models.

At the simplest level, academic researchers and activists hope to defeat facial recognition either by resisting recognition or by undermining the underlying models. Interestingly, both groups have at least one objective in common: beating the system with minimal work.

The activist approach is straightforward: your face can’t be recognised if it can’t be seen. But wearing a Guy Fawkes mask to every rally invites other kinds of attention, as would blinding cameras with bright LEDs (neither of which counts as an adversarial model, since they defeat image capture rather than analysis).

It’s much better to use something less noticeable. Late last year, a group of researchers published work showing that tortoiseshell spectacles could fool facial recognition models, causing the systems to misidentify the people wearing the glasses.

Adding ‘noise’ to the image made the AI think a panda was a gibbon. Credit: OpenAI

In the physical world, the glasses meet both the academic and the activist requirement for a minimalist attack. More importantly for this discussion, they’re an example of a true adversarial attack: one that lets the system capture facial information but tricks the model trying to analyse the image.

This post at OpenAI offers a very good explanation of adversarial models attacking AI vision systems: “Starting with an image of a panda, the attacker adds a small perturbation that has been calculated to make the image be recognised as a gibbon with high confidence”.

In this case the “small perturbation” is an overlay of what looks like random noise but is in fact carefully calculated, and it confuses the AI into identifying a panda as a gibbon. Another recent experiment fooled Google’s AI much more completely: a cat was mistaken for guacamole, and a turtle for a rifle.
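To make the idea concrete, here’s a minimal sketch of how such a perturbation can be calculated, in the spirit of the fast gradient sign method behind the panda example. It’s written in PyTorch; the `model`, the image tensor and the label are placeholder assumptions, not code from the OpenAI post.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Nudge every pixel slightly in the direction that increases the loss.

    Assumes `image` is a (1, 3, H, W) tensor in [0, 1] and `true_label`
    is a (1,)-shaped tensor holding the correct class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The perturbation is the *sign* of the gradient, scaled by epsilon:
    # tiny per-pixel changes, all pushing the model away from the truth.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The result looks like random noise to a human, but every pixel shift points in exactly the direction the model is most sensitive to.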

Google’s AI thinks the perturbed cat (left) is guacamole; a little rotation (right) and it sees the cat. Credit: LabSix

That experiment also illustrated the limits of an adversarial attack: the perturbed image of the cat only had to be rotated a little before Google’s AI identified it correctly.
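That fragility is easy to probe. Here’s a toy check, reusing the hypothetical `model` and `adversarial` image from the sketch above: re-classify the image at a few angles and see whether the fooling label survives.

```python
import torchvision.transforms.functional as TF

def label_under_rotation(model, adversarial, angles=(0, 5, 10, 15)):
    """Print the predicted class at several rotation angles (in degrees)."""
    for angle in angles:
        rotated = TF.rotate(adversarial, angle)
        predicted = model(rotated).argmax(dim=1).item()
        print(f"rotated {angle:>2} deg -> class {predicted}")
```

If the prediction flips back to the true class after a few degrees, the attack is as brittle as the cat-guacamole example.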

Carried into the real world, image-based attacks could see an airport security scanner wave through a gun because it thinks it’s a turtle. We’re not quite there yet: the researchers could only fool Google’s system because they had enough access to the model.

At the interface between the real world and the model, another important question is: what’s the smallest perturbation needed to defeat the AI? The smaller the perturbation, the easier it is to introduce without being noticed.

One possible answer was provided by researchers from Japan’s Kyushu University, who beat a deep neural network by changing a single pixel. If their work could be applied in the real world, maybe something as unobtrusive as a bright ear-stud could fool the machines behind the cameras.
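The one-pixel idea is compact enough to sketch: treat the attack as a black-box search over a single (x, y, r, g, b) tuple, using differential evolution as the Kyushu team did, until the model’s confidence in the true class collapses. The `predict_fn` and array layout below are illustrative assumptions, not the authors’ code.

```python
from scipy.optimize import differential_evolution

def one_pixel_attack(predict_fn, image, true_class):
    """Find one pixel change that minimises confidence in the true class.

    Assumes `image` is an (H, W, 3) float array in [0, 1] and `predict_fn`
    maps such an array to a vector of class probabilities.
    """
    h, w, _ = image.shape

    def true_class_confidence(candidate):
        x, y, r, g, b = candidate
        perturbed = image.copy()
        perturbed[int(y), int(x)] = (r, g, b)  # overwrite a single pixel
        return predict_fn(perturbed)[true_class]

    # A candidate is the pixel's coordinates plus its new colour.
    bounds = [(0, w - 1), (0, h - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(true_class_confidence, bounds,
                                    maxiter=30, popsize=10, seed=0)
    return result.x, result.fun  # best pixel found, confidence remaining
```

Notably, differential evolution only needs the model’s output probabilities, not its gradients, which is why this works as a black-box attack.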

Similar work is happening to attack speech recognition AI.

I will talk more about this in the future.

About the author: Shara Evans is recognized as one of the world’s top female futurists. She’s a media commentator, strategy adviser, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.
