Complex-Valued Speech Generative Model

A. A. Nugraha, K. Sekiguchi, and K. Yoshii, "A deep generative model of speech complex spectrograms," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Brighton, UK, 2019, pp. 905–909, doi: 10.1109/ICASSP.2019.8682797.

Abstract

This paper proposes an approach to jointly modeling the short-time Fourier transform (STFT) magnitude and phase spectrograms with a deep generative model. We assume that the magnitude follows a Gaussian distribution and the phase follows a von Mises distribution. To improve the consistency of the phase values in the time-frequency domain, we also apply the von Mises distribution to the phase derivatives, i.e., the group delay (the derivative along frequency) and the instantaneous frequency (the derivative along time). Based on these assumptions, we explore and compare several combinations of loss functions for training our models. Built upon the variational autoencoder framework, our model consists of three convolutional neural networks acting as an encoder, a magnitude decoder, and a phase decoder. In addition to the latent variables, we propose to also condition the phase estimation on the estimated magnitude. Evaluated on a time-domain speech reconstruction task, our models can generate speech with high perceptual quality and high intelligibility.
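The abstract models phase angles with a von Mises distribution. As a rough illustration of what such a phase loss is built on (this is not the paper's code; the mean direction mu and concentration kappa are stand-ins for what a phase decoder would predict), the von Mises negative log-likelihood can be written in a few lines of NumPy/SciPy:

```python
import numpy as np
from scipy.special import i0e

def von_mises_nll(phase, mu, kappa):
    """Negative log-likelihood of angles under a von Mises distribution
    with mean direction mu and concentration kappa.

    The normalizer log I0(kappa) is computed via the exponentially
    scaled Bessel function for numerical stability:
        log I0(k) = log(i0e(k)) + k
    """
    log_norm = np.log(2.0 * np.pi) + np.log(i0e(kappa)) + kappa
    return log_norm - kappa * np.cos(phase - mu)
```

The same form would apply to the group-delay and instantaneous-frequency terms mentioned in the abstract, since both are themselves angles (wrapped phase differences along frequency and time).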



Audio Samples

 

Notes

 
  • All magnitude values are estimated by the respective models.
  • For the model (M), the phase values are randomly sampled from a uniform distribution.
  • For the models (J1)–(J7), the phase values are estimated by the models themselves.
  • The Griffin-Lim algorithm (GLA) is run for 100 iterations.
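The GLA post-processing referred to above can be sketched with SciPy's STFT utilities. This is a generic textbook Griffin-Lim loop, not the authors' implementation, and the frame parameters (512-sample windows with a 128-sample hop) are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_iter=100, fs=16000, nperseg=512, noverlap=384):
    """Reconstruct a time-domain signal from an STFT magnitude by
    iteratively re-estimating the phase (Griffin-Lim algorithm)."""
    rng = np.random.default_rng(0)
    # Start from a random phase, analogous to the (M) model's setup.
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))
    spec = magnitude * phase
    for _ in range(n_iter):
        # Project onto the set of consistent spectrograms...
        _, x = istft(spec, fs=fs, nperseg=nperseg, noverlap=noverlap)
        _, _, spec = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        # ...then restore the known magnitude, keeping the updated phase.
        spec = magnitude * np.exp(1j * np.angle(spec))
    _, x = istft(spec, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return x
```

Each iteration alternates between enforcing STFT consistency (via an inverse-then-forward transform) and enforcing the target magnitude; only the phase evolves across iterations.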
 

Utterance ID: F05_440C020I_PED

 
Model Without GLA With GLA
(M)
(J1)
(J2)
(J3)
(J4)
(J5)
(J6)
(J7)
True
 

Utterance ID: M05_443C020S_BUS

 
Model Without GLA With GLA
(M)
(J1)
(J2)
(J3)
(J4)
(J5)
(J6)
(J7)
True