Publication

Learn-to-Share: A Hardware-friendly Transfer Learning Framework Exploiting Computation and Parameter Sharing

International Conference on Machine Learning (ICML)


Abstract

Task-specific fine-tuning of pre-trained transformers has achieved performance breakthroughs on multiple NLP tasks. Yet, because both computation and parameter size grow linearly with the number of sub-tasks, such methods are increasingly difficult to deploy in practice due to prohibitive memory and computation overhead on computing devices. Previous work on fine-tuning focuses on reducing the growing parameter size through parameter sharing to save storage cost. However, compared with storage, computation is the more critical constraint on fine-tuned models in modern computing environments, and prior work falls short on computation reduction.

To enable efficient fine-tuning, we propose LeTS, a framework that leverages both computation and parameter sharing across multiple tasks. LeTS is built on two principles. First, LeTS decouples the computation dependencies of the traditional fine-tuning model through a novel neural architecture that reuses intermediate results computed from the pre-trained model and the input; it further leverages differentiable neural architecture search to determine a task-specific computation-sharing scheme. Second, by treating the final weights as a weight difference added to the pre-trained weights, we propose a novel early-stage pruning approach that generates a sparsity mask at the beginning of fine-tuning. Combining these two principles, LeTS further reduces the computation demand by exploiting the sparsity of the weight difference. Extensive experiments show that, with 1.4% extra parameters per task, LeTS reduces computation by 49.5% on the GLUE benchmark with only 0.2% accuracy loss compared to full fine-tuning.
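To make the weight-difference idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes a single linear layer whose task weight is kept as "pre-trained weight + masked weight difference", so the dense pre-trained computation can be shared across tasks while each task only stores and applies a small sparse correction. The class name `SharedLinear`, the `sparsity` parameter, and the random mask (standing in for the paper's early-stage pruning criterion) are all hypothetical.

```python
# Illustrative sketch only; not the LeTS code.
import torch
import torch.nn as nn


class SharedLinear(nn.Module):
    """Linear layer whose task-specific weight is W_0 + mask * delta_W."""

    def __init__(self, weight_0: torch.Tensor, sparsity: float = 0.95):
        super().__init__()
        # Frozen pre-trained weight, shared by all tasks.
        self.register_buffer("weight_0", weight_0)
        # Task-specific weight difference, learned during fine-tuning.
        self.delta = nn.Parameter(torch.zeros_like(weight_0))
        # Fixed binary mask chosen at the start of fine-tuning (random here,
        # as a stand-in for an early-stage pruning criterion).
        mask = (torch.rand_like(weight_0) > sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor, shared_out: torch.Tensor) -> torch.Tensor:
        # `shared_out` = x @ weight_0.T, computed once and reusable across tasks;
        # only the sparse correction below is task-specific work.
        return shared_out + x @ (self.mask * self.delta).T


if __name__ == "__main__":
    torch.manual_seed(0)
    w0 = torch.randn(16, 32)            # stand-in for a pre-trained weight
    layer = SharedLinear(w0, sparsity=0.95)
    x = torch.randn(4, 32)
    shared = x @ w0.T                   # shared pre-trained computation
    y = layer(x, shared)                # task-specific output
    print(y.shape)                      # torch.Size([4, 16])
```

Under this kind of parameterization, only the nonzero masked entries of `delta` need to be stored per task, and the dense product with the pre-trained weight can be computed once and reused, which is the intuition behind the computation and parameter savings reported above.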

Related Publications


Federated Learning for User Privacy and Data Confidentiality Workshop At ICML - July 24, 2021

Federated Learning with Buffered Asynchronous Aggregation

John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek, Dzmitry Huba

UAI - July 28, 2021

A Nonmyopic Approach to Cost-Constrained Bayesian Optimization

Eric Hans Lee, David Eriksson, Valerio Perrone, Matthias Seeger

ACM MM - October 20, 2021

EVRNet: Efficient Video Restoration on Edge Devices

Sachin Mehta, Amit Kumar, Fitsum Reda, Varun Nasery, Vikram Mulukutla, Rakesh Ranjan, Vikas Chandra

ICCV - October 11, 2021

Egocentric Pose Estimation from Human Vision Span

Hao Jiang, Vamsi Krishna Ithapu
