Publication

Rethinking ImageNet Pre-training

International Conference on Computer Vision (ICCV)


Abstract

We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pre-trained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10% of the training data, (ii) using deeper and wider models, and (iii) evaluating multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but it does not necessarily provide regularization or improve final target-task accuracy. To push the envelope, we demonstrate 50.9 AP on COCO object detection without using any external data, a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks, and we expect these discoveries will encourage people to rethink the current de facto 'pre-training and fine-tuning' paradigm in computer vision.
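To make the setup concrete, the sketch below shows the two training configurations being compared: the same standard Mask R-CNN either fine-tuned from an ImageNet-pre-trained backbone or trained from random initialization, with the only hyper-parameter change being a longer schedule for the from-scratch run. This is a minimal sketch, not the authors' implementation: the paper used the Detectron/Caffe2 codebase, whereas this assumes a recent torchvision (0.13 or later) and its maskrcnn_resnet50_fpn and ResNet50_Weights APIs; normalization details (the paper relies on GroupNorm or SyncBN for from-scratch training) are noted only in comments.

# Minimal sketch, assuming torchvision >= 0.13; not the authors' code.
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models import ResNet50_Weights


def build_mask_rcnn(from_scratch: bool):
    """Build a standard Mask R-CNN; only the backbone initialization differs."""
    if from_scratch:
        # Random initialization everywhere; keep every backbone stage trainable.
        # (The paper additionally uses GroupNorm or SyncBN instead of frozen
        # BatchNorm so that normalization behaves well when training from scratch.)
        return maskrcnn_resnet50_fpn(
            weights=None,
            weights_backbone=None,
            num_classes=91,               # COCO categories, including background
            trainable_backbone_layers=5,  # do not freeze any backbone stage
        )
    # Conventional recipe: ImageNet-pre-trained ResNet-50 backbone, then fine-tune.
    return maskrcnn_resnet50_fpn(
        weights=None,                     # no COCO-pre-trained detector weights
        weights_backbone=ResNet50_Weights.IMAGENET1K_V1,
        num_classes=91,
    )


scratch_model = build_mask_rcnn(from_scratch=True)    # needs a 2-3x longer schedule
finetune_model = build_mask_rcnn(from_scratch=False)  # standard fine-tuning schedule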

Related Publications

LEEP: A New Measure to Evaluate Transferability of Learned Representations

Cuong V. Nguyen, Tal Hassner, Matthias Seeger, Cedric Archambeau

ICML - July 13, 2020

Fully Convolutional Mesh Autoencoder using Efficient Spatially Varying Kernels

Yi Zhou, Chenglei Wu, Zimo Li, Chen Cao, Yuting Ye, Jason Saragih, Hao Li, Yaser Sheikh

arXiv - July 1, 2020

Passthrough+: Real-time Stereoscopic View Synthesis for Mobile Mixed Reality

Gaurav Chaurasia, Arthur Nieuwoudt, Alexandru-Eugen Ichim, Richard Szeliski, Alexander Sorkine-Hornung

I3D - April 14, 2020

Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation

Edoardo Remelli, Shangchen Han, Sina Honari, Pascal Fua, Robert Wang

CVPR - June 16, 2020
