Deep Learning Blindspots

Tools for Fooling the "Black Box"

Katharine Jarmul

Recorded at 34c3 (Chaos Communication Congress).

In the past decade, machine learning researchers and theorists have created deep learning architectures that seem to learn complex tasks with little intervention. Newer research in adversarial learning questions just how much “learning” these networks are actually doing. Several theories have arisen regarding neural network “blind spots” that can be exploited to fool the network. For example, by changing a series of pixels imperceptible to the human eye, an attacker can render an image recognition model useless. This talk will review the current state of adversarial learning research and showcase some open-source tools to trick the “black box.”
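The “imperceptible pixel change” attack mentioned above can be illustrated with a toy sketch of the Fast Gradient Sign Method (FGSM), one of the best-known adversarial techniques. This is a hypothetical, minimal example (not code from the talk), with a hand-built linear classifier standing in for a real neural network; the weights, input values, and epsilon are all illustrative assumptions.

```python
# Toy FGSM sketch: flip a classifier's decision with a small,
# uniform per-"pixel" perturbation. Not code from the talk.
import numpy as np

# A tiny linear "image classifier": score > 0 means class "cat".
# Weights and input are illustrative stand-ins for a real network.
w = np.array([0.5, -1.2, 0.8, 0.3])
x = np.array([1.0, -0.5, 1.0, 0.5])   # pretend pixel values

def predict(v):
    return "cat" if np.dot(w, v) > 0 else "not cat"

# FGSM nudges each input dimension by eps against the gradient of
# the score with respect to the input; for a linear score that
# gradient is simply w.
eps = 0.9                              # bounded per-pixel change
x_adv = x - eps * np.sign(w)

print(predict(x), "->", predict(x_adv))          # decision flips
print("max pixel change:", np.max(np.abs(x_adv - x)))
```

Every pixel moves by at most `eps`, yet the decision flips: the small per-dimension changes align with the weights and accumulate into a large swing in the score, which is one of the leading explanations for why high-dimensional models are so easy to fool.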



These files contain multiple languages.

This talk was translated into multiple languages. The files available for download contain all languages as separate audio tracks. Most desktop video players allow you to choose between them.

Please look for "audio tracks" in your desktop video player.