Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples

Neural Information Processing Systems (NIPS)


Abstract

Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel, flexible approach named Houdini for generating adversarial examples specifically tailored to the final performance measure of the task considered, be it combinatorial or non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation, and semantic segmentation. In all cases, attacks based on Houdini achieve a higher success rate than those based on the traditional surrogates used to train the models, while using a less perceptible adversarial perturbation.
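To make the idea concrete, here is a minimal, hypothetical sketch of the gradient-based attack loop that approaches like Houdini build on. The paper's actual method replaces the training surrogate (e.g., cross-entropy) with a surrogate tied to the task's true performance measure; this toy example uses plain cross-entropy on a linear softmax classifier, and all names (`fgsm_step`, the random data, the epsilon budget) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm_step(W, x, y, eps):
    """One FGSM-style step on a linear softmax model: move x in the
    direction that increases the cross-entropy loss on label y, under
    an L-infinity budget eps. (Houdini would swap this surrogate for
    one tied to the task's true performance measure; cross-entropy is
    used here only for illustration.)"""
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)      # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

# Toy demo on random data (values are hypothetical).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))          # 3 classes, 5 input features
x = rng.normal(size=5)
y = int(np.argmax(W @ x))            # model's current prediction
x_adv = fgsm_step(W, x, y, eps=0.1)  # each coordinate perturbed by at most 0.1
```

In a real attack the gradient would be backpropagated through a deep network (a speech recognizer, a pose estimator, a segmentation model), but the structure of the perturbation step is the same: a small, budget-bounded move in the direction that degrades the chosen surrogate.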

