
287 Results

April 20, 2020

Direction of Arrival Estimation in Highly Reverberant Environments Using Soft Time-Frequency Mask

IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)

A recent approach to improving the robustness of sound localization in reverberant environments is based on pre-selection of time-frequency pixels that are dominated by direct sound. This approach is equivalent to applying a binary time-frequency mask prior to the localization stage. Although the binary mask approach was shown to be effective, it may not exploit the information available in the captured signal to its full extent. In an attempt to overcome this limitation, it is hereby proposed to employ a soft mask instead of the binary mask.

By: Vladimir Tourbabin, Jacob Donley, Boaz Rafaely, Ravish Mehra
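The soft-mask idea above can be illustrated with a minimal numpy sketch. This is not the authors' estimator: the direct-path dominance scores are random stand-ins, and the aggregated quantity is a generic mask-weighted cross-spectrum of the kind a phase-based localizer would use. It only shows the structural difference between hard thresholding and soft weighting of time-frequency bins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy STFT for a 2-microphone array: (mics, freq_bins, frames).
stft = rng.standard_normal((2, 64, 100)) + 1j * rng.standard_normal((2, 64, 100))

# Direct-path dominance score per time-frequency bin in [0, 1]
# (a hypothetical stand-in for the paper's actual estimator).
dominance = rng.uniform(size=(64, 100))

# Binary mask: keep only bins above a hard threshold.
binary_mask = (dominance > 0.7).astype(float)

# Soft mask: use the dominance score itself as a weight, so partially
# reliable bins still contribute, with reduced influence.
soft_mask = dominance

def masked_cross_spectrum(stft, mask):
    """Mask-weighted cross-spectrum between the two channels, the basic
    quantity a phase-based DoA estimator would aggregate."""
    cross = stft[0] * np.conj(stft[1])   # per-bin cross-spectrum
    return np.sum(mask * cross)          # weighted aggregate

hard = masked_cross_spectrum(stft, binary_mask)
soft = masked_cross_spectrum(stft, soft_mask)
print(abs(hard), abs(soft))
```

The binary mask discards every bin below the threshold outright; the soft mask retains the full signal, down-weighted by its estimated reliability.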

April 9, 2020

Environment-aware reconfigurable noise suppression

International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

The paper proposes an efficient, robust, and reconfigurable technique to suppress various types of noises for any sampling rate. The theoretical analyses, subjective and objective test results show that the proposed noise suppression (NS) solution significantly enhances the speech transmission index (STI), speech intelligibility (SI), signal-to-noise ratio (SNR), and subjective listening experience.

By: Jun Yang, Joshua Bingham

February 17, 2020

The Architectural Implications of Facebook’s DNN-based Personalized Recommendation

International Symposium on High Performance Computer Architecture (HPCA)

The widespread application of deep learning has changed the landscape of computation in data centers. In particular, personalized recommendation for content ranking is now largely accomplished using deep neural networks. However, despite their importance and the amount of compute cycles they consume, relatively little research attention has been devoted to recommendation systems. To facilitate research and advance the understanding of these workloads, this paper presents a set of real-world, production-scale DNNs for personalized recommendation coupled with relevant performance metrics for evaluation.

By: Udit Gupta, Carole-Jean Wu, Xiaodong Wang, Maxim Naumov, Brandon Reagen, David Brooks, Bradford Cottel, Kim Hazelwood, Mark Hempstead, Bill Jia, Hsien-Hsin S. Lee, Andrey Malevich, Dheevatsa Mudigere, Mikhail Smelyanskiy, Liang Xiong, Xuan Zhang
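The generic shape of the recommendation DNNs the paper characterizes — sparse categorical features looked up in large embedding tables (memory-bound) followed by a small dense MLP (compute-bound) — can be sketched as follows. All sizes and weights here are toy, randomly initialized values, not the production models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; production tables are orders of magnitude larger.
NUM_USERS, NUM_ITEMS, EMB_DIM = 1000, 5000, 16

# Sparse features -> embedding tables; dense interaction -> small MLP.
user_emb = rng.standard_normal((NUM_USERS, EMB_DIM)) * 0.01
item_emb = rng.standard_normal((NUM_ITEMS, EMB_DIM)) * 0.01
W1 = rng.standard_normal((EMB_DIM * 2, 32)) * 0.1
W2 = rng.standard_normal((32, 1)) * 0.1

def predict_ctr(user_id, item_id):
    """Embedding lookup + concatenation + MLP: the basic structure of a
    DNN-based personalized recommendation model."""
    x = np.concatenate([user_emb[user_id], item_emb[item_id]])
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    logit = (h @ W2).item()
    return 1.0 / (1.0 + np.exp(-logit))      # predicted click probability

score = predict_ctr(user_id=42, item_id=1234)
print(score)
```

The split matters architecturally: the embedding lookups stress memory capacity and bandwidth, while the MLP stresses compute, which is the tension the paper's characterization explores.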

February 7, 2020

Generate, Segment and Refine: Towards Generic Manipulation Segmentation

Conference on Artificial Intelligence (AAAI)

Detecting manipulated images has become a significant emerging challenge. The advent of image sharing platforms and the easy availability of advanced photo editing software have resulted in large quantities of manipulated images being shared on the internet. While the intent behind such manipulations varies widely, concern about the spread of false news and misinformation is growing. Current state-of-the-art methods for detecting these manipulated images suffer from a lack of training data due to the laborious labeling process. We address this problem in this paper.

By: Peng Zhou, Bor-Chun Chen, Xintong Han, Mahyar Najibi, Abhinav Shrivastava, Ser Nam Lim, Larry S. Davis

December 11, 2019

Hyper-Graph-Network Decoders for Block Codes

Neural Information Processing Systems (NeurIPS)

Neural decoders were shown to outperform classical message passing techniques for short BCH codes. In this work, we extend these results to much larger families of algebraic block codes, by performing message passing with graph neural networks.

By: Eliya Nachmani, Lior Wolf

December 11, 2019

Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias

Neural Information Processing Systems (NeurIPS)

Despite the phenomenal success of deep neural networks in a broad range of learning tasks, there is a lack of theory to understand the way they work. In particular, Convolutional Neural Networks (CNNs) are known to perform much better than Fully-Connected Networks (FCNs) on spatially structured data: the architectural structure of CNNs benefits from prior knowledge on the features of the data, for instance their translation invariance. The aim of this work is to understand this fact through the lens of dynamics in the loss landscape.

By: Stéphane d'Ascoli, Levent Sagun, Joan Bruna, Giulio Biroli

December 10, 2019

One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers

Neural Information Processing Systems (NeurIPS)

The success of lottery ticket initializations [7] suggests that small, sparsified networks can be trained so long as the network is initialized appropriately. Unfortunately, finding these “winning ticket” initializations is computationally expensive. One potential solution is to reuse the same winning tickets across a variety of datasets and optimizers. However, the generality of winning ticket initializations remains unclear. Here, we attempt to answer this question by generating winning tickets for one training configuration (optimizer and dataset) and evaluating their performance on another configuration.

By: Ari Morcos, Haonan Yu, Michela Paganini, Yuandong Tian
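The winning-ticket procedure the abstract refers to can be sketched in a few lines. This is a toy illustration, not the paper's experimental setup: the "training" step is a stand-in update, and a real run would train with SGD on a task loss before pruning.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Randomly initialize a (tiny) dense layer and remember the init.
w_init = rng.standard_normal((8, 8))
w = w_init.copy()

# 2. "Train": a stand-in update so the weights change; a real run
#    would optimize a task loss here.
w += 0.1 * rng.standard_normal(w.shape)

# 3. Magnitude-prune the smallest 80% of the trained weights.
threshold = np.quantile(np.abs(w), 0.8)
mask = (np.abs(w) >= threshold).astype(float)

# 4. The "winning ticket": the surviving weights rewound to their
#    ORIGINAL initialization values -- the key step of the procedure.
ticket = mask * w_init

print(f"kept {mask.mean():.0%} of weights")
```

The question the paper asks is whether a mask-plus-initialization pair produced this way on one dataset/optimizer still trains well under a different one.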

December 10, 2019

PHYRE: A New Benchmark for Physical Reasoning

Neural Information Processing Systems (NeurIPS)

Understanding and reasoning about physics is an important ability of intelligent agents. We develop the PHYRE benchmark for physical reasoning that contains a set of simple classical mechanics puzzles in a 2D physical environment.

By: Anton Bakhtin, Laurens van der Maaten, Justin Johnson, Laura Gustafson, Ross Girshick

December 10, 2019

Limiting Extrapolation in Linear Approximate Value Iteration

Neural Information Processing Systems (NeurIPS)

We study linear approximate value iteration (LAVI) with a generative model. While linear models may accurately represent the optimal value function using a few parameters, several empirical and theoretical studies show the combination of least-squares projection with the Bellman operator may be expansive, thus leading LAVI to amplify errors over iterations and eventually diverge. We introduce an algorithm that approximates value functions by combining Q-values estimated at a set of anchor states.

By: Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill
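The anti-extrapolation idea behind anchor states can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's algorithm: the anchor Q-values are made up, and inverse-distance weighting is just one simple choice of averager. The point is that a convex combination of anchor values can never leave their range, unlike a least-squares fit.

```python
import numpy as np

# Hypothetical 1-D state space with a few anchor states where Q-values
# are estimated directly (e.g., via the generative model).
anchors = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
q_at_anchors = np.array([1.0, 0.8, 0.5, 0.9, 1.2])   # made-up estimates

def q_interpolated(s):
    """Approximate Q(s) as a CONVEX combination of anchor Q-values.
    Non-negative weights summing to 1 keep the estimate inside the
    range of the anchor values -- no extrapolation is possible."""
    d = np.abs(anchors - s) + 1e-9        # distance to each anchor
    w = 1.0 / d                           # inverse-distance weights
    w /= w.sum()                          # normalize to a convex combination
    return w @ q_at_anchors

q = q_interpolated(0.6)
print(q)
```

Bounding the estimate this way is what prevents errors from being amplified across Bellman iterations, which is the divergence mode the abstract describes for plain least-squares projection.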

December 9, 2019

ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks

Neural Information Processing Systems (NeurIPS)

We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers.

By: Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee
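The co-attentional interaction described above can be sketched as a single cross-attention step. This is a bare-bones illustration, not ViLBERT itself: the learned projection matrices, multiple heads, and feed-forward sublayers are omitted, leaving only the defining exchange in which each stream uses its own queries but the other stream's keys and values.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16                                   # toy hidden size

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(vis, txt):
    """One co-attentional step: the visual stream attends over text
    tokens and the text stream attends over image regions, so
    information flows across modalities in both directions."""
    vis_out = softmax(vis @ txt.T / np.sqrt(D)) @ txt   # vision queries text
    txt_out = softmax(txt @ vis.T / np.sqrt(D)) @ vis   # text queries vision
    return vis_out, txt_out

vis = rng.standard_normal((5, D))        # 5 image-region features
txt = rng.standard_normal((7, D))        # 7 token features
vis_out, txt_out = co_attention(vis, txt)
print(vis_out.shape, txt_out.shape)
```

Stacking such layers between otherwise separate visual and textual transformer streams is what makes the model's joint representations task-agnostic.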