Turning a panda into a cat?!

Why adversarial examples are not scary…but are most likely useful

Sonja Georgievska
Netherlands eScience Center
4 min read · Jun 20, 2019

“Is it a bird? Is it a plane?” A child who never saw Superman might mistake him for a bird or a plane. On the other hand, who knows what a time traveler from the ’70s would see in this picture. Photo by 贝莉儿 NG on Unsplash

If you’ve had anything to do with deep learning recently, you’ve heard of them: adversarial examples.

The nightmare of deep neural networks. Every skeptic’s favorite example.

Adversarial examples are one way of showing the limitations of neural networks when someone asks, “Is this how human vision works?”

In a nutshell, it is possible to change a few carefully chosen pixels in a picture of a panda such that a network trained to distinguish pandas from cats will classify it as a picture of a cat. (Pandas are so cute, which is why they are used a lot in the data science world.)

In this example the panda is mistaken for a gibbon, but cats are cuter than gibbons, so I use cats in the text.

Now, note how I emphasized the word picture above. Why? Because that is what we give to the network as input: a picture, a matrix of pixels.

The network learns patterns that appear frequently in the matrices of pixels (i.e. pictures) that represent a panda. A combination of patterns represents a category, like pandas (well, actually a hierarchical nonlinear combination of patterns, but those are details).
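To make that “hierarchical nonlinear combination of patterns” a bit more concrete, here is a minimal sketch of a small convolutional classifier in PyTorch. The layer sizes, the 64×64 input, and the two output categories (panda vs. cat, the running example of this post) are arbitrary choices for illustration, not a real trained model.

```python
import torch
import torch.nn as nn

# Each convolutional layer picks up pixel patterns; stacking them with
# nonlinearities builds combinations of combinations, ending in category scores.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns: edges, spots, stripes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combinations of those: ears, fur textures
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(2),                             # two scores: "panda" and "cat"
)

scores = classifier(torch.rand(1, 3, 64, 64))     # a random 64x64 "picture" in, two class scores out
```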

If none of the pictures of pandas used for training contained ‘weird’ pixels, the network will not learn that pictures may come with weird pixels. (Just think of all the funny mistakes your children made when they were learning various things at age three… well, funny to you, not to them!)

So, if at prediction time you give it a picture with weird pixels, the network might give anything for an answer, because it has to give one. And if you have two categories and you choose the pixels to alter carefully (it is actually very easy), voilà: it will predict that your panda is a cat.
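How easy, roughly? The sketch below uses the fast gradient sign method, one standard technique for choosing which pixels to nudge. Everything concrete in it is my own placeholder, not something from this post: a pretrained ResNet-18 stands in for the trained network, a random tensor stands in for a real panda photo, the ImageNet class index 388 (“giant panda”) is assumed, and the step size 0.01 is arbitrary.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Placeholder model and "picture" for illustration only.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for the panda photo
label = torch.tensor([388])                              # assumed ImageNet index for "giant panda"

# How wrong is the model about the correct class, and how does that change
# with each pixel? The gradient tells us.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

The point is that the attacker never touches the network’s weights; the gradient with respect to the pixels is enough to find a perturbation small enough that a human barely notices it.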

“I better watch the road and not those weird pixels” - a well-trained biker. Photo by Nikita Ignatev on Unsplash

Truth is in the eye of the beholder

Wait a minute. We asked the network whether it saw a picture of a panda, not an actual panda. Did the network say it was an actual picture of a panda? No. Correct answer. Why? Because we did not give it an actual picture of a panda. It was a digitally manipulated picture. Fake. A human would not recognize that it was not an actual picture of a panda, but the network said “No”, and in a sign of protest it answered “A cat!”. (That last part is a joke, obviously.)

So, garbage in, garbage out.

Why is this not scary? Actually, why do people think it is scary? Supposedly, in computer vision, one might use a carefully manipulated image to fool the network. True. But if you are using computer vision (thus automated) instead of human vision, and if I am a hacker, then I can fool you in much easier ways. If I want to unlock your smartphone, which uses facial recognition, I don’t have to digitally alter an image of me so that the network thinks it is you. I will simply show the network an actual image of you.

OK. But what about changed reality rather than changed pictures? Another favorite example is an altered stop sign at a crossroad that might trick a self-driving car into driving straight on and crashing, while a human driver would still recognize the sign.

A ‘stop’ sign for a human driver, a ‘go ahead’ sign for a self-driving car. Photo by Luke van Zyl on Unsplash

It is actually not difficult to realize that a vandal who wants to crash a self-driving car (?!) can do so in a much easier way than by carefully spraying the stop sign using deep learning. Here is a post that elaborates on this.

So, we saw that everyone’s favorite security examples are not really a threat.

Why are adversarial examples useful? Because they improve our understanding of deep learning. By being aware of its current limitations, researchers can make neural networks even ‘smarter’, and maybe one day they will actually imitate human vision, if that is what we aim for.

Until that day, I am so eager to hear of a real threat that adversarial examples might pose.

(Bonus: Do we see reality as it is?)

Thanks to Florian Huber, Berend Weel, Felipe Zapata, Kim Holthaus, Johan Rheeder, Patrick Bos, and Zvezdan Protic for the useful suggestions.
