Adversarial training for blind-spot removal

Abstract

Embedding layers in frameworks like Keras and TensorFlow are non-differentiable with respect to their inputs, because the lookup involves no mathematical computation on the discrete token ids. For gradient-based attacks to work, we need some mechanism that lets gradients flow from the loss all the way back to the input. Emulating the embedding layer and inverting the embedding layer, each with a separate neural network, are two possible ways to solve this problem. In this talk I discuss the implementation of iterative adversarial training and the pros and cons of the approaches mentioned above. I also briefly compare the performance of the networks on different kinds of transferred adversaries.
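The core obstacle can be illustrated with a minimal NumPy sketch (all names and parameters here are illustrative, not from the talk): gradients exist with respect to the embedding *vectors* but not the token ids, so one common workaround is to take a gradient step in embedding space and then "invert" the embedding layer, here approximated by a simple nearest-neighbour lookup rather than the learned inversion network discussed in the talk.

```python
import numpy as np

# The embedding lookup E[tok] maps a discrete token id to a vector;
# there is no gradient w.r.t. tok itself, only w.r.t. the vector.
rng = np.random.default_rng(0)
V, D = 10, 4                        # vocabulary size, embedding dimension
E = rng.normal(size=(V, D))         # embedding table
w = rng.normal(size=D)              # toy linear "classifier" weights

def loss(x):
    # A trivially differentiable loss on an embedding vector.
    return float(np.dot(w, x))

tok = 3                             # original (clean) token id
x = E[tok]                          # its embedding: the differentiable input

grad = w                            # d loss / d x for the linear loss above
eps = 0.5
x_adv = x + eps * np.sign(grad)     # FGSM-style step in embedding space

# "Invert" the embedding layer by projecting the perturbed vector
# back onto the vocabulary: nearest neighbour in the embedding table.
dists = np.linalg.norm(E - x_adv, axis=1)
adv_tok = int(np.argmin(dists))     # discrete adversarial token id
```

The nearest-neighbour projection is the crude baseline; the inversion-network approach in the talk replaces it with a trained model, which can generalise better when the perturbed vector falls between table entries.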

Date
Feb 7, 2020 1:00 PM
Event
HiWi Talk
Location
IT Security Group
Mies-van-der-Rohe-Str. 15, Aachen, 52074
Rishi Sharma
PhD Student at EPFL

Currently exploring research interests in Computer Science.
