
Attention-Based WaveNet Autoencoder for Universal Voice Conversion

International Conference on Acoustics, Speech and Signal Processing (ICASSP)


Abstract

We present a method for converting any voice to a target voice. The method is based on a WaveNet autoencoder, with the addition of a novel attention component that supports modification of the timing between the input and the output samples. The attention is trained in an unsupervised way, by teaching the network to recover the original timing from an artificially modified one. By adding a generic robot voice, which we then convert to the target voice, we obtain a robust Text-to-Speech pipeline that can be trained without any transcript. Our experiments show that the proposed method recovers the timing of the speaker and that the proposed pipeline provides a competitive Text-to-Speech method.
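The unsupervised timing objective described above can be illustrated with a small sketch: artificially warp the timing of a frame sequence and keep, for each warped frame, the index of the original frame it came from, which serves as the target the attention component learns to recover. This is a minimal NumPy illustration under stated assumptions; the function name, the per-frame random rates, and the nearest-neighbour resampling are illustrative choices, not the authors' implementation.

```python
import numpy as np

def random_time_warp(frames, max_stretch=0.3, seed=0):
    """Artificially modify the timing of a frame sequence.

    Returns the warped frames and, for each warped frame, the index of
    the original frame it was drawn from. The index sequence is the
    timing signal an attention module could be trained to recover.
    (Hypothetical sketch; not the paper's exact procedure.)
    """
    rng = np.random.default_rng(seed)
    T = len(frames)
    # Per-frame local playback rates in [1 - max_stretch, 1 + max_stretch].
    rates = rng.uniform(1 - max_stretch, 1 + max_stretch, size=T)
    # Cumulative warped time axis, rescaled back to span [0, T - 1].
    warped_t = np.cumsum(rates)
    warped_t = (warped_t - warped_t[0]) / (warped_t[-1] - warped_t[0]) * (T - 1)
    # Nearest-neighbour resampling yields a monotonic source-index map.
    src_idx = np.round(warped_t).astype(int)
    return frames[src_idx], src_idx

# Toy "spectrogram": 8 frames of 4 features each.
frames = np.arange(32, dtype=float).reshape(8, 4)
warped, target = random_time_warp(frames)
# The autoencoder would consume `warped`, and the attention component
# would be trained to predict `target` (the original timing).
```

Because the warp is generated on the fly, every utterance yields fresh (warped input, original timing) pairs, so no transcript or manual alignment is needed.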

Related Publications


NeurIPS - December 6, 2020

High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization

Qing Feng, Benjamin Letham, Hongzi Mao, Eytan Bakshy

Innovative Technology at the Interface of Finance and Operations - March 31, 2021

Market Equilibrium Models in Large-Scale Internet Markets

Christian Kroer, Nicolas E. Stier-Moses

Human Interpretability Workshop at ICML - July 17, 2020

Investigating Effects of Saturation in Integrated Gradients

Vivek Miglani, Bilal Alsallakh, Narine Kokhlikyan, Orion Reblitz-Richardson

ICASSP - June 6, 2021

Multi-Channel Speech Enhancement Using Graph Neural Networks

Panagiotis Tzirakis, Anurag Kumar, Jacob Donley
