TT-Rec: Tensor Train Compression For Deep Learning Recommendation Model Embeddings

Conference on Machine Learning and Systems (MLSys)


Abstract

The memory capacity of embedding tables in deep learning recommendation models (DLRMs) is growing dramatically across the industry, from tens of GBs to TBs. Given this rapid growth, novel solutions are urgently needed to enable continued DLRM innovation, and these solutions must be fast and efficient without exponentially increasing infrastructure capacity demands. In this paper, we demonstrate the promising potential of Tensor Train decomposition for DLRMs (TT-Rec), an important yet under-investigated context. We design and implement optimized kernels (TT-EmbeddingBag) to evaluate the proposed TT-Rec design; TT-EmbeddingBag is 3× faster than the state-of-the-art TT implementation. We further optimize TT-Rec's performance with batched matrix multiplication and caching strategies for embedding-vector lookup operations. In addition, we analyze, both mathematically and empirically, the effect of the weight-initialization distribution on DLRM accuracy and propose initializing the tensor cores of TT-Rec from a sampled Gaussian distribution. We evaluate TT-Rec along three important design-space dimensions (memory capacity, accuracy, and timing performance) by training MLPerf-DLRM on Criteo's Kaggle and Terabyte data sets. TT-Rec compresses the model by 4× to 221× on Kaggle, with a corresponding accuracy loss of 0.03% to 0.3%. On Terabyte, our approach achieves a 112× model-size reduction with no accuracy loss and no training-time overhead compared to the uncompressed baseline. Our code is available on GitHub at https://github.com/facebookresearch/FBTT-Embedding.
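
The core idea is to replace each large embedding table with a chain of small tensor-train (TT) cores and to reconstruct individual embedding rows on the fly at lookup time. The sketch below illustrates a single TT lookup in plain PyTorch; the shapes, ranks, three-core factorization, and the helper name tt_lookup are illustrative assumptions made for exposition and are not the TT-EmbeddingBag / FBTT-Embedding API.

import torch

# Factor the table: num_rows = n1*n2*n3 and embedding_dim = d1*d2*d3.
n = (200, 220, 250)      # 11,000,000 rows (hypothetical sizes)
d = (4, 4, 8)            # 128-dimensional embeddings (hypothetical)
ranks = (1, 16, 16, 1)   # TT ranks; larger ranks trade compression for accuracy

# TT cores G_k of shape (r_{k-1}, n_k, d_k, r_k), drawn from a Gaussian
# (the paper proposes a sampled Gaussian initialization for the cores).
cores = [torch.randn(ranks[k], n[k], d[k], ranks[k + 1]) * 0.05 for k in range(3)]

def tt_lookup(row: int) -> torch.Tensor:
    """Reconstruct one embedding row from the TT cores without materializing the table."""
    # Mixed-radix decomposition of the row index: row -> (i1, i2, i3).
    digits = []
    for nk in reversed(n):
        digits.append(row % nk)
        row //= nk
    digits.reverse()

    # Chain of small matrix products, one slice per core.
    out = cores[0][:, digits[0]].reshape(-1, ranks[1])    # (d1, r1)
    for k in (1, 2):
        g = cores[k][:, digits[k]].reshape(ranks[k], -1)  # (r_k, d_k*r_{k+1})
        out = (out @ g).reshape(-1, ranks[k + 1])
    return out.reshape(-1)                                # (d1*d2*d3,) = (128,)

# The dense table would hold 11M x 128 floats (~5.6 GB in fp32); the three
# cores above hold roughly 270K parameters (~1 MB) in this toy configuration.
print(tt_lookup(1_234_567).shape)  # torch.Size([128])

Because the full table is never materialized, the parameter count drops from num_rows × embedding_dim to the sum of the small core sizes; the library described in the paper additionally batches these contractions and caches frequently accessed rows, per the abstract above.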

Related Publications

SIGGRAPH - August 9, 2021

ManipNet: Neural Manipulation Synthesis with a Hand-Object Spatial Representation

He Zhang, Yuting Ye, Takaaki Shiratori, Taku Komura

SIGGRAPH - August 9, 2021

Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports

Jungdam Won, Deepak Gopinath, Jessica Hodgins

ICML - July 18, 2021

Align, then memorise: the dynamics of learning with feedback alignment

Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt

CVPR - June 18, 2021

Improving Panoptic Segmentation at All Scales

Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder
