Armand Joulin

Research Scientist

I am a research scientist at Facebook Artificial Intelligence Research. I obtained my PhD in 2012 from INRIA and the École Normale Supérieure, where my advisors were Francis Bach and Jean Ponce. Before joining Facebook, I was a postdoctoral fellow at Stanford University, working with Daphne Koller and Fei-Fei Li.


Research Interests

Machine learning, computer vision, and natural language processing

Latest Publications

ICCV - October 10, 2021

LeViT: a Vision Transformer in ConvNet’s Clothing for Faster Inference

Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze

ICCV - October 10, 2021

Emerging Properties in Self-Supervised Vision Transformers

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin

arXiv - June 16, 2021

XCiT: Cross-Covariance Image Transformers

Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou

NeurIPS - December 2, 2020

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin

ICASSP - May 4, 2020

Libri-Light: A Benchmark for ASR with Limited or No Supervision

Jacob Kahn, Morgan Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, Tatiana Likhomanenko, Gabriel Synnaeve, Armand Joulin, Abdelrahman Mohamed, Emmanuel Dupoux

ICLR - April 26, 2020

Reducing Transformer Depth on Demand with Structured Dropout

Angela Fan, Edouard Grave, Armand Joulin

ICLR - April 25, 2020

And the Bit Goes Down: Revisiting the Quantization of Neural Networks

Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou

ICCV - October 28, 2019

Unsupervised Pre-Training of Image Features on Non-Curated Data

Mathilde Caron, Piotr Bojanowski, Julien Mairal, Armand Joulin

ACL - July 28, 2019

Training Hybrid Language Models by Marginalizing over Segmentations

Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, Armand Joulin

ACL - July 27, 2019

Adaptive Attention Span in Transformers

Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin

arXiv - July 18, 2019

Why Build an Assistant in Minecraft?

Arthur Szlam, Jonathan Gray, Kavya Srinet, Yacine Jernite, Armand Joulin, Gabriel Synnaeve, Douwe Kiela, Haonan Yu, Zhuoyuan Chen, Siddharth Goyal, Demi Guo, Danielle Rothermel, Larry Zitnick, Jason Weston

NAACL - June 2, 2019

Cooperative Learning of Disjoint Syntax and Semantics

Serhii Havrylov, Germán Kruszewski, Armand Joulin

ICLR - May 17, 2019

Unsupervised Hyper-alignment for Multilingual Word Embeddings

Jean Alaux, Edouard Grave, Marco Cuturi, Armand Joulin

LREC - April 30, 2019

Learning Word Vectors for 157 Languages

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, Tomas Mikolov

EMNLP - October 30, 2018

Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion

Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, Edouard Grave

ECCV - September 9, 2018

Deep Clustering for Unsupervised Learning of Visual Features

Mathilde Caron, Piotr Bojanowski, Armand Joulin, Matthijs Douze

ICML - July 10, 2018

Optimizing the Latent Space of Generative Networks

Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam

LREC - May 7, 2018

Advances in Pre-Training Distributed Word Representations

Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, Armand Joulin

AAAI - February 2, 2018

Efficient Large-Scale Multi-Modal Classification

Douwe Kiela, Edouard Grave, Armand Joulin, Tomas Mikolov

NIPS - December 4, 2017

Unbounded Cache Model for Online Language Modeling with Open Vocabulary

Edouard Grave, Moustapha Cisse, Armand Joulin

ICCV - October 22, 2017

Learning Visual N-Grams from Web Data

Ang Li, Allan Jabri, Armand Joulin, Laurens van der Maaten

ICML - August 6, 2017

Efficient Softmax Approximation for GPUs

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, Hervé Jégou

ICML - August 6, 2017

Unsupervised Learning by Predicting Noise

Piotr Bojanowski, Armand Joulin

TACL - July 31, 2017

Enriching Word Vectors with Subword Information

Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov

ICLR - April 24, 2017

CommAI: Evaluating the First Steps Towards a Useful General AI

Marco Baroni, Armand Joulin, Allan Jabri, Germán Kruszewski, Angeliki Lazaridou, Klemen Simonic, Tomas Mikolov

ICLR - April 24, 2017

Improving Neural Language Models with a Continuous Cache

Edouard Grave, Armand Joulin, Nicolas Usunier

ICLR - April 24, 2017

Variable Computation in Recurrent Neural Networks

Yacine Jernite, Edouard Grave, Armand Joulin, Tomas Mikolov

EACL - April 3, 2017

Bag of Tricks for Efficient Text Classification

Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov

ECCV - October 10, 2016

Revisiting Visual Question Answering Baselines

Allan Jabri, Armand Joulin, Laurens van der Maaten

ECCV - October 8, 2016

Learning Visual Features from Large Weakly Supervised Data

Armand Joulin, Laurens van der Maaten, Allan Jabri, Nicolas Vasilache

arXiv - November 25, 2015

A Roadmap Towards Machine Intelligence

Tomas Mikolov, Armand Joulin, Marco Baroni

arXiv - November 23, 2015

Learning Simple Algorithms from Examples

Wojciech Zaremba, Tomas Mikolov, Armand Joulin, Rob Fergus

arXiv - November 19, 2015

Alternative Structures for Character-Level RNNs

Piotr Bojanowski, Armand Joulin, Tomas Mikolov

ICLR - June 22, 2015

Learning Longer Memory in Recurrent Neural Networks

Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, Marc'Aurelio Ranzato

arXiv - June 22, 2015

Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets

Armand Joulin, Tomas Mikolov