
On the Modularity of Hypernetworks

Conference on Neural Information Processing Systems (NeurIPS)


Abstract

In the context of learning to map an input I to a function h_I : X → R, two alternative methods are compared: (i) an embedding-based method, which learns a fixed function in which I is encoded as a conditioning signal e(I) and the learned function takes the form h_I(x) = q(x, e(I)), and (ii) hypernetworks, in which the weights θ_I of the function h_I(x) = g(x; θ_I) are produced by a hypernetwork f as θ_I = f(I). In this paper, we define the property of modularity as the ability to effectively learn a different function for each input instance I. For this purpose, we adopt an expressivity perspective on this property and extend the theory of [6], providing a lower bound on the complexity (number of trainable parameters) of neural networks as function approximators by eliminating the requirement that the approximation method be robust. Our results are then used to compare the complexities of q and g, showing that under certain conditions, and when the functions e and f are allowed to be as large as we wish, g can be smaller than q by orders of magnitude. This sheds light on the modularity of hypernetworks in comparison with the embedding-based method. In addition, we show that for a structured target function, the overall number of trainable parameters in a hypernetwork is smaller by orders of magnitude than the number of trainable parameters of both a standard neural network and an embedding-based method.
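
To make the contrast between the two parameterizations concrete, the following PyTorch sketch implements both. It is a minimal illustration, not the authors' implementation: the class names (EmbeddingMethod, HyperNetwork) and all layer sizes are our own assumptions.

import torch
import torch.nn as nn

class EmbeddingMethod(nn.Module):
    """Method (i): h_I(x) = q(x, e(I)) with a single fixed network q."""
    def __init__(self, i_dim, x_dim, embed_dim, hidden_dim):
        super().__init__()
        self.e = nn.Sequential(nn.Linear(i_dim, embed_dim), nn.ReLU())
        self.q = nn.Sequential(
            nn.Linear(x_dim + embed_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, I, x):
        # q consumes x together with the conditioning signal e(I).
        return self.q(torch.cat([x, self.e(I)], dim=-1))

class HyperNetwork(nn.Module):
    """Method (ii): h_I(x) = g(x; theta_I) with theta_I = f(I)."""
    def __init__(self, i_dim, x_dim, g_hidden, f_hidden):
        super().__init__()
        self.x_dim, self.g_hidden = x_dim, g_hidden
        # Parameter count of the small two-layer target network g.
        n_theta = x_dim * g_hidden + g_hidden + g_hidden + 1
        # f may be made arbitrarily large without enlarging g.
        self.f = nn.Sequential(
            nn.Linear(i_dim, f_hidden), nn.ReLU(),
            nn.Linear(f_hidden, n_theta),
        )

    def forward(self, I, x):
        d, h = self.x_dim, self.g_hidden
        theta = self.f(I)  # theta_I = f(I), one weight vector per instance I
        W1 = theta[:, : d * h].reshape(-1, h, d)
        b1 = theta[:, d * h : d * h + h]
        w2 = theta[:, d * h + h : d * h + 2 * h]
        b2 = theta[:, -1:]
        # Evaluate g(x; theta_I): a one-hidden-layer ReLU network.
        z = torch.relu(torch.bmm(W1, x.unsqueeze(-1)).squeeze(-1) + b1)
        return (z * w2).sum(dim=1, keepdim=True) + b2

# Example usage (shapes only; all dimensions are illustrative):
I = torch.randn(4, 16)  # four input instances I
x = torch.randn(4, 8)   # one query point x per instance
print(EmbeddingMethod(16, 8, 32, 64)(I, x).shape)  # torch.Size([4, 1])
print(HyperNetwork(16, 8, 4, 256)(I, x).shape)     # torch.Size([4, 1])

Note that in this sketch g has only d·h + 2h + 1 parameters regardless of how wide f is made, which mirrors the abstract's comparison: the per-instance function g can remain orders of magnitude smaller than the fixed function q.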

