Adversarial training for blind-spot removal

Feb 7, 2020
Rishi Sharma
Abstract
Embedding layers are non-differentiable with respect to their inputs in frameworks like Keras and TensorFlow, because the lookup of integer token indices involves no continuous mathematical computation. For gradient-based attacks to work, we need some mechanism that lets gradients flow from the loss all the way back to the input. Emulating the embedding layer and inverting it, each with a separate neural network, are two possible ways to solve this problem. In this talk, I discuss the implementation of iterative adversarial training and the pros and cons of the approaches mentioned above. I also briefly compare the performance of the networks against different kinds of transferred adversarial examples.
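To make the gap concrete, below is a minimal TensorFlow sketch, not the talk's actual implementation: the toy classifier, shapes, and FGSM-style step are illustrative assumptions. It shows that gradients cannot be taken with respect to the integer token ids, only with respect to the embedded vectors, which is exactly the point where an emulation or inversion network has to bridge back to discrete tokens.

```python
import tensorflow as tf

vocab_size, embed_dim, seq_len = 10_000, 128, 64

# Hypothetical toy text classifier: embedding lookup followed by a small head.
embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
head = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

token_ids = tf.random.uniform((1, seq_len), 0, vocab_size, dtype=tf.int32)
label = tf.constant([[1.0]])

with tf.GradientTape() as tape:
    embedded = embedding(token_ids)   # (1, seq_len, embed_dim)
    tape.watch(embedded)              # integer ids cannot be watched; their embeddings can
    pred = head(embedded)
    loss = tf.keras.losses.binary_crossentropy(label, pred)

# d(loss)/d(token_ids) does not exist; d(loss)/d(embedded) does.
grad = tape.gradient(loss, embedded)

# FGSM-style perturbation in embedding space (epsilon is an arbitrary choice here);
# mapping the perturbed vectors back to discrete tokens is the gap that the
# embedding-emulation and embedding-inversion networks try to close.
epsilon = 0.1
adversarial_embedded = embedded + epsilon * tf.sign(grad)
```

In an iterative adversarial-training loop, perturbed embeddings like these (or their reconstructions as token sequences) would be fed back into each training batch, which is the setting the talk's comparison of the two approaches addresses.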
Event
HiWi Talk
Location

IT Security Group

Mies-van-der-Rohe-Str. 15, 52074 Aachen