I am a research engineer at Facebook AI Research, currently building new CPU- and GPU-based systems for machine learning. At Facebook I previously worked on Apollo, a novel large-scale distributed database system built using strong and weak consensus protocols. Prior to Facebook, I worked for over a decade in the video game industry, primarily on real-time physics simulation, distributed systems, and low-level optimization. I studied mathematics and computer science at Princeton University, earning a BSE in computer science.
Interests
Distributed and parallel computation, signal processing, and code optimization
Latest Publications
ARITH - June 1, 2020
Efficient, arbitrarily high precision hardware logarithmic arithmetic for linear algebra
Jeff Johnson
Systems for Machine Learning Workshop at NeurIPS 2018 - December 7, 2018
Rethinking floating point for deep learning
Jeff Johnson
June 22, 2015
Fast Convolutional Nets With fbfft: A GPU Performance Evaluation
Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun
Latest News

November 8, 2018
Making floating point math highly efficient for AI hardware
External Blog
March 30, 2017