
Page 1:

Unsupervised speech representation learning using WaveNet autoencoders

https://arxiv.org/abs/1901.08810

Jan Chorowski University of Wrocław

06.06.2019

Page 2:

Deep Model = Hierarchy of Concepts

Cat Dog … Moon Banana

M. Zeiler, "Visualizing and Understanding Convolutional Networks"

Page 3:

Deep Learning history: 2006

2006: Stacked RBMs

Hinton, Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks”

Page 4:

Deep Learning history: 2012

2012: AlexNet reaches SOTA on ImageNet

Fully supervised training

Page 5:

Deep Learning Recipe

1. Get a massive, labeled dataset 𝐷 = {(𝑥, 𝑦)}:

– Computer vision: ImageNet, 1M images

– Machine translation: Europarl data, CommonCrawl, several million sentence pairs

– Speech recognition: 1000 h (LibriSpeech), 12000 h (Google Voice Search)

– Question answering: SQuAD, 150k questions with human answers

– …

2. Train model to maximize log 𝑝(𝑦|𝑥)

Page 6:

Value of Labeled Data

• Labeled data is crucial for deep learning

• But labels carry little information:

– Example: an ImageNet model has 30M weights, but ImageNet is about 1M images from 1000 classes.
  Labels: 1M × 10 bits = 10 Mbits.
  Raw data (128 × 128 images): ca. 500 Gbits!

Page 7:

Value of Unlabeled Data

"The brain has about 10^14 synapses and we only live for about 10^9 seconds. So we have a lot more parameters than data. This motivates the idea that we must do a lot of unsupervised learning since the perceptual input (including proprioception) is the only place we can get 10^5 dimensions of constraint per second."

Geoff Hinton

https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/

Page 8:

Unsupervised learning recipe

1. Get a massive unlabeled dataset 𝐷 = {𝑥}. Easy: unlabeled data is nearly free.

2. Train model to…??? What is the task? What is the loss function?

Page 9:

Unsupervised learning by modeling data distribution

Train the model to minimize − log 𝑝(𝑥). E.g. in 2D:

• Let 𝐷 = {𝑥: 𝑥 ∈ ℝ²}

• Each point is a 2-dimensional vector

• We can draw a point cloud

• And fit some known distribution, e.g. a Gaussian

Page 10:

Learning high dimensional distributions is hard

• Assume we work with small (32x32) images

• Each data point is a real vector of size 32 × 32 × 3

• Data occupies only a tiny fraction of ℝ^(32×32×3)

• Difficult to learn!

Page 11:

Autoregressive Models

Decompose the probability of data points in ℝⁿ into 𝑛 conditional univariate probabilities:

𝑝(𝑥) = 𝑝(𝑥₁, 𝑥₂, …, 𝑥ₙ) = 𝑝(𝑥₁) 𝑝(𝑥₂|𝑥₁) ⋯ 𝑝(𝑥ₙ|𝑥₁, 𝑥₂, …, 𝑥ₙ₋₁) = ∏ᵢ 𝑝(𝑥ᵢ|𝑥<ᵢ)

Page 12:

Autoregressive Example: Language modeling

Let 𝑥 be a sequence of word ids.

𝑝(𝑥) = 𝑝(𝑥₁, 𝑥₂, …, 𝑥ₙ) = ∏ᵢ 𝑝(𝑥ᵢ|𝑥<ᵢ) ≈ ∏ᵢ 𝑝(𝑥ᵢ|𝑥ᵢ₋ₖ, 𝑥ᵢ₋ₖ₊₁, …, 𝑥ᵢ₋₁)

p(It's a nice day) = p(It) · p('s|It) · p(a|'s) · …

• Classical n-gram models: conditional probabilities estimated by counting

• Neural models: conditional probabilities estimated using neural nets
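As a toy illustration of the counting approach (the corpus, tokenization, and bigram order below are made-up choices):

```python
from collections import Counter
import math

# Tiny made-up corpus; real n-gram models are estimated on millions of sentences.
corpus = ["it s a nice day", "it s a rainy day", "a nice day it is"]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(words[:-1])           # count each word as a context
    bigrams.update(zip(words, words[1:]))

def log_prob(sentence):
    """log p(x) = sum_i log p(x_i | x_{i-1}), with probabilities from counts."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    return sum(math.log(bigrams[(prev, cur)] / unigrams[prev])
               for prev, cur in zip(words, words[1:]))

print(log_prob("it s a nice day"))
```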

Page 13:

WaveNet: Autoregressive modeling of speech

https://arxiv.org/abs/1609.03499

Treat speech as a sequence of samples!

Predict each sample based on the previous ones.
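A minimal sketch of WaveNet's core building block, the causal dilated convolution (simplified: the real model adds gated activations, residual and skip connections, and a μ-law output distribution):

```python
import torch
import torch.nn as nn

class CausalDilatedConv(nn.Module):
    """One layer: left-pad so the output at time t sees only inputs at times <= t."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = dilation  # (kernel_size - 1) * dilation with kernel_size = 2
        self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

    def forward(self, x):                            # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))      # pad only on the left (the past)
        return torch.tanh(self.conv(x))

# Stacking dilations 1, 2, 4, ... grows the receptive field exponentially.
net = nn.Sequential(*[CausalDilatedConv(16, d) for d in (1, 2, 4, 8)])
x = torch.randn(1, 16, 16000)  # one second of 16 kHz audio, 16 feature channels
print(net(x).shape)            # same length out: torch.Size([1, 16, 16000])
```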

Page 14:

PixelRNN: A “language model for images”

Pixels generated left-to-right, top-to-bottom.

Cond. probabilities estimated using recurrent or convolutional neural networks.

van den Oord, A., et al. “Pixel Recurrent Neural Networks.” ICML (2016).

Page 15:

PixelCNN samples

Salimans et al, “A PixelCNN Implementation with Discretized Logistic Mixture Likelihood and Other Modifications”

Page 16:

Autoregressive Models Summary

The good:

- Simple to define (pick an ordering).

- Often yield SOTA log-likelihood.

The bad:

- Training and generation require 𝑂(𝑛) ops.

- No compact intermediate data representation – not obvious how to use for downstream tasks.

Page 17:

Latent Variable Models

Intuition: to generate something complicated, do:

1. Sample something simple: 𝑧 ∼ 𝒩(0, 1)

2. Transform it: 𝑥 = 𝑧/10 + 𝑧/‖𝑧‖
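A quick numeric sketch of this intuition in Python (assuming the transform reconstructed above, 𝑥 = 𝑧/10 + 𝑧/‖𝑧‖, which maps a Gaussian blob onto a ring):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Sample something simple: z ~ N(0, I) in 2D.
z = rng.standard_normal((1000, 2))

# 2. Transform it: x = z/10 + z/||z||.
norm = np.linalg.norm(z, axis=1, keepdims=True)
x = z / 10 + z / norm

# The x points concentrate near the unit circle: a distribution that would be
# hard to fit directly, yet trivial to sample through the latent z.
print(np.linalg.norm(x, axis=1).mean())  # close to 1
```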

Page 18:

Variational autoencoder: A neural latent variable model

Assume a 2-stage data generation process:

𝑧 ∼ 𝒩(0, 1): the prior 𝑝(𝑧), assumed to be simple

𝑥 ∼ 𝑝(𝑥|𝑧): a complicated transformation implemented with a neural network

How to train this model?

log 𝑝(𝑥) = log ∫ 𝑝(𝑥|𝑧) 𝑝(𝑧) d𝑧

This is often intractable!

Page 19:

ELBO: A lower bound on log 𝑝(𝑥)

Let 𝑞(𝑧|𝑥) be any distribution. We can show that

log 𝑝(𝑥) =

= 𝐾𝐿(𝑞(𝑧|𝑥) ∥ 𝑝(𝑧|𝑥)) + 𝔼_{𝑧∼𝑞(𝑧|𝑥)}[ log ( 𝑝(𝑧|𝑥) 𝑝(𝑥) / 𝑞(𝑧|𝑥) ) ]

≥ 𝔼_{𝑧∼𝑞(𝑧|𝑥)}[ log ( 𝑝(𝑧|𝑥) 𝑝(𝑥) / 𝑞(𝑧|𝑥) ) ]    (since 𝐾𝐿 ≥ 0)

= 𝔼_{𝑧∼𝑞(𝑧|𝑥)}[ log 𝑝(𝑥|𝑧) ] − 𝐾𝐿(𝑞(𝑧|𝑥) ∥ 𝑝(𝑧))

The bound is tight for 𝑞(𝑧|𝑥) = 𝑝(𝑧|𝑥).

Page 20:

ELBO interpretation

The ELBO, or evidence lower bound:

log 𝑝(𝑥) ≥ 𝔼_{𝑧∼𝑞(𝑧|𝑥)}[ log 𝑝(𝑥|𝑧) ] − 𝐾𝐿(𝑞(𝑧|𝑥) ∥ 𝑝(𝑧))

where:

𝔼_{𝑧∼𝑞(𝑧|𝑥)}[ log 𝑝(𝑥|𝑧) ] is the reconstruction quality: how many nats we need to reconstruct 𝑥 when someone gives us 𝑞(𝑧|𝑥)

𝐾𝐿(𝑞(𝑧|𝑥) ∥ 𝑝(𝑧)) is the code transmission cost: how many nats we transmit about 𝑥 by using 𝑞(𝑧|𝑥) rather than 𝑝(𝑧)

Interpretation: do well at reconstructing 𝑥, while limiting the amount of information about 𝑥 encoded in 𝑧.
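A small numeric sketch of the two terms for a diagonal-Gaussian 𝑞(𝑧|𝑥) and a standard-normal prior; the closed-form KL is standard, while the decoder 𝑝(𝑥|𝑧) here is a made-up unit-variance Gaussian and the reconstruction term is a single-sample Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoder output for one x: a diagonal Gaussian q(z|x) = N(mu, sigma^2).
mu, sigma = np.array([0.5, -1.0]), np.array([0.8, 0.6])

# Code transmission cost in nats: KL(N(mu, sigma^2) || N(0, 1)), closed form:
kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - np.log(sigma**2))

# Reconstruction quality: one-sample estimate of E_{z~q}[log p(x|z)],
# with an illustrative decoder p(x|z) = N(x; 2z, I).
x = np.array([1.0, -2.0])
z = mu + sigma * rng.standard_normal(2)   # a sample from q(z|x)
recon = -0.5 * np.sum((x - 2 * z)**2 + np.log(2 * np.pi))

print(f"reconstruction {recon:.2f} nats, KL {kl:.2f} nats, ELBO ~ {recon - kl:.2f}")
```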

Page 21:

The Variational Autoencoder

[Diagram: the encoder network 𝑞 maps the input 𝑥 to a distribution 𝑞(𝑧|𝑥) over latent codes; the decoder network 𝑝 maps latent samples to reconstructions 𝑝(𝑥|𝑧); the prior 𝑝(𝑧) enters through the 𝐾𝐿(𝑞(𝑧|𝑥) ∥ 𝑝(𝑧)) term.]

An input 𝑥 is put through the 𝑞 network to obtain a distribution over latent codes 𝑧, 𝑞(𝑧|𝑥).

Samples 𝑧₁, …, 𝑧ₖ are drawn from 𝑞(𝑧|𝑥), and the 𝑘 reconstructions 𝑝(𝑥|𝑧ₖ) are computed using the network 𝑝, giving an estimate of 𝔼_{𝑧∼𝑞(𝑧|𝑥)}[ log 𝑝(𝑥|𝑧) ].

Page 22:

VAE is an Information Bottleneck

Each sample is represented as a Gaussian.

This discards information: the latent representation has low precision.

Page 23:

VQVAE – deterministic quantization

Limit the precision of the encoding by quantizing: round each vector to the nearest prototype.

The output can be treated:

- as a sequence of discrete prototype ids (tokens)

- as a distributed representation (the prototypes themselves)

Train using the straight-through estimator, with auxiliary losses (the codebook and commitment terms); see the sketch below.
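A minimal sketch of the quantization step with the straight-through estimator and the two auxiliary losses of the VQ-VAE paper (the codebook size and dimensions are illustrative):

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, beta=0.25):
    """z_e: (batch, time, dim) encoder outputs; codebook: (K, dim) prototypes."""
    # Round each vector to its nearest prototype.
    dists = torch.cdist(z_e, codebook.unsqueeze(0))   # (batch, time, K)
    ids = dists.argmin(dim=-1)                        # discrete token ids
    z_q = codebook[ids]                               # distributed representation

    # Auxiliary losses: move the codebook toward the encoder outputs,
    # and commit the encoder to its assigned prototypes.
    codebook_loss = F.mse_loss(z_q, z_e.detach())
    commitment_loss = beta * F.mse_loss(z_e, z_q.detach())

    # Straight-through: the forward pass uses z_q, gradients flow back into z_e.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, ids, codebook_loss + commitment_loss

codebook = torch.nn.Parameter(torch.randn(512, 64))   # K = 512 prototypes
z_e = torch.randn(8, 100, 64, requires_grad=True)
z_q, ids, aux_loss = vector_quantize(z_e, codebook)
print(z_q.shape, ids.shape, aux_loss.item())
```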

Page 24:

VAEs and sequential data

To encode a long sequence, we apply the VAE to chunks:

But neighboring chunks are similar!

We are encoding the same information in many 𝑧s!

We are wasting capacity!


Page 25:

WaveNet + VAE

The WaveNet uses information from:

1. The past recording

2. The latent vectors 𝑧

3. Other conditioning, e.g. about speaker

The encoder transmits in the 𝑧s only the information that is missing from the past recording. The whole system is a very low bitrate codec: roughly 0.7 kbit/s, while the raw waveform is 16 kHz × 8 bit = 128 kbit/s.

A WaveNet reconstructs the waveform using the information from the past

Latent representations are extracted at regular intervals.

van den Oord et al., "Neural Discrete Representation Learning"

Page 26:

VAE + autoregressive models: latent collapse danger

• Purely autoregressive models: SOTA log-likelihoods

• Conditioning on latents: information passed through the bottleneck lowers the reconstruction cross-entropy

• In a standard VAE, the model actively tries to:
  - reduce the information in the latents
  - maximally use the autoregressive information
  => collapse: the latents are not used!

• Solution: stop optimizing the KL term (free bits), or make it a hyperparameter (VQVAE); see the sketch below
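A sketch of the free-bits idea: clamp the per-dimension KL from below at a floor, so the optimizer has no incentive to squeeze the latents below it (the floor value and shapes are illustrative):

```python
import torch

def free_bits_kl(mu, logvar, free_nats=2.0):
    """KL(N(mu, sigma^2) || N(0, 1)) per dimension, clamped from below so the
    gradient vanishes once a dimension already costs less than `free_nats`."""
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    return torch.clamp(kl_per_dim, min=free_nats).sum(dim=-1).mean()

mu = torch.zeros(8, 32, requires_grad=True)      # batch of 8, 32-dim latents
logvar = torch.zeros(8, 32, requires_grad=True)  # q = N(0, 1), so the KL is 0
loss = free_bits_kl(mu, logvar)
loss.backward()
print(loss.item(), mu.grad.abs().sum().item())   # floor active: zero gradient
```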

Page 27:

Model description

WaveNet decoder conditioned on:

- latents extracted at 24–50 Hz

- speaker

3 bottlenecks evaluated:

- dimensionality reduction: max 32 bits/dim

- VAE: 𝐾𝐿(𝑞(𝑧|𝑥) ∥ 𝒩(0,1)) nats (bits)

- VQVAE with 𝐾 prototypes: log₂ 𝐾 bits

Input:

Waveforms, Mel Filterbanks, MFCCs

Hope: the speaker is separated from the content. Proof: https://arxiv.org/abs/1805.09458

[Diagram: the decoder is conditioned on the sequence of 𝑧 codes and, per frame, on the speaker (spkr) embedding.]

Page 28:

Representation probing points

We have inserted probing classifiers at 4 points in the network:


𝑝_enc: the high-dimensional representation coming out of the encoder

𝑝_proj: the low-dimensional representation that is input to the bottleneck layer

𝑝_bn: the latent codes

𝑝_cond: several 𝑧 codes mixed together using a convolution; the WaveNet uses it for conditioning

Page 29:

Experimental Questions

• What information is captured in the latent codes/probing points?

• What is the role of the bottleneck layer?

• Can we regularize the latent representation?

• How to promote a segmentation?

• How good is the representation on downstream tasks?

• What design choices affect it?

Chorowski et al. Unsupervised speech representation learning using WaveNet autoencoders

Page 30:

VQVAE Latent representation

Page 31:

What information is captured in the latent codes?

For each probing point, we have trained predictors for:

- Framewise phoneme prediction

- Speaker prediction

- Gender prediction

- Mel Filterbank reconstruction

Page 32:

Results

Page 33:

Phonemes vs Gender tradeoff

Page 34:

How to regularize the latent codes?

We want the codes to capture phonetic information.

Phones vary in duration – from about 30 ms to 1 s (long silences).

Thus we need to extract the latent codes frequently enough to capture the short phones, but when the phone doesn’t change, the latents should be stable too.

This is similar to slow feature analysis.

Page 35:

Problem with enforcing slowness

Enforcing slow features (small changes to the latents) has a trivial optimum: constant latents.

Then the WaveNet can just disregard the encoder, and the latent space collapses.

Page 36:

Randomized time jitter

Rather than putting a penalty on changes of the latent 𝑧 vectors, add time jitter to them. This forces the model to have a more stable representation over time.

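A sketch of one simple jitter variant, assuming each latent can be replaced by its left or right neighbor with a small probability during training:

```python
import torch

def time_jitter(z, replace_prob=0.12):
    """z: (batch, time, dim). Each step is kept, or swapped for a neighbor,
    so the decoder cannot rely on precise positions of the codes."""
    batch, time, _ = z.shape
    steps = torch.arange(time).expand(batch, time)
    r = torch.rand(batch, time)
    offset = torch.zeros(batch, time, dtype=torch.long)
    offset[r < replace_prob / 2] = -1            # copy the left neighbor
    offset[r > 1 - replace_prob / 2] = 1         # copy the right neighbor
    idx = (steps + offset).clamp(0, time - 1)    # stay inside the sequence
    return torch.gather(z, 1, idx.unsqueeze(-1).expand_as(z))

z = torch.randn(2, 10, 4)
print(time_jitter(z).shape)  # torch.Size([2, 10, 4])
```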

Page 37:

Randomized time jitter results

Page 38:

How to learn a segmentation?

The representation should be constant within a phoneme, then change abruptly.

Enforcing slowness leads to collapse; jitter prevents the model from using pairs of tokens as codepoints.

Idea: allow the model to emit a non-trivial representation only infrequently.

Page 39:

Non-max suppression – choosing where to emit

Latents are computed at 25 Hz, but only ¼ of them are allowed to be nonzero; see the sketch below.
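The exact emission rule is on the slide figure; below is a hypothetical windowed variant of the idea, keeping in every window of four frames only the latent with the largest norm and zeroing the rest:

```python
import numpy as np

def suppress_non_max(z, keep_every=4):
    """z: (time, dim). Keep at most 1/keep_every of the frames: in each window
    only the frame with the largest norm survives, the others are zeroed."""
    out = np.zeros_like(z)
    norms = np.linalg.norm(z, axis=1)
    for start in range(0, len(z), keep_every):
        window = slice(start, start + keep_every)
        best = start + int(np.argmax(norms[window]))
        out[best] = z[best]
    return out

z = np.random.default_rng(0).standard_normal((100, 8))  # 4 s of latents at 25 Hz
sparse = suppress_non_max(z)
print((np.linalg.norm(sparse, axis=1) > 0).mean())      # 0.25: a quarter nonzero
```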

Page 40:

Non-max suppression – choosing where to emit

Token 13 is emitted near occurrences of "S" and some "Z".

Page 41:

Non-max suppression – choosing where to emit

Token 17 is emitted near occurrences of some "L".

Page 42:

Performance on ZeroSpeech unit discovery

SOTA results in unsupervised phoneme discrimination on the French and English ZeroSpeech challenge tracks. Mandarin shows the limitations of the method:

- too little training data (only 2.4 h of unsupervised speech)

- tonal information is discarded

Page 43:

English: VQVAE bottleneck adds speaker invariance

[Table: English results, within- and across-speaker.]

The quantization discards speaker information, improving the across-speaker results. MFCCs are slightly better than FBanks.

Page 44:

Mandarin: VQVAE bottleneck discards phone information

[Table: Mandarin results, within- and across-speaker.]

The quantization discards too much (tone insensitivity?). MFCCs are worse than FBanks.

Page 45:

What impacts the representation?

Implicit time constant of the model:

• Input field of view of the encoder: optimum close to 0.3 s

• WaveNet field of view: needs at minimum 10 ms

Page 46:

Failed attempts

• I found no benefit from building a hierarchical representation (extracting latents at different timescales), even when the slower latents had no bottleneck

• Filterbank reconstruction works worse than waveform reconstruction:

– Too easy for the autoregressive model?

– Too little detail?

Page 47:

The future

We will explore similar ideas during the JSALT 2019 topic "Distant supervision for representation learning". The workshop will:

- work on speech and handwriting

- explore ways of integrating metadata and unlabeled data to control latent representations

- focus on downstream supervised OCR and ASR tasks under low-data conditions

Some approaches to try:

- contrastive predictive coding

- masked reconstruction

Page 48:

The future: CPC

• Contrastive coding learns representations that can tell a frame apart from other frames; see the sketch below

Oord et al., "Representation Learning with Contrastive Predictive Coding"
Schneider et al., "wav2vec: Unsupervised Pre-training for Speech Recognition"
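A sketch of a contrastive (InfoNCE-style) objective: each context vector has to pick out its own future frame among in-batch negatives; the bilinear scorer W and all shapes are assumptions, not the exact losses of the cited papers:

```python
import torch
import torch.nn.functional as F

def info_nce(context, future, W):
    """context, future: (batch, dim). Score all pairs; the true pair
    (the diagonal) must beat the in-batch negatives."""
    scores = context @ W @ future.t()          # (batch, batch) pair scores
    labels = torch.arange(context.size(0))     # positives lie on the diagonal
    return F.cross_entropy(scores, labels)

batch, dim = 16, 64
W = torch.randn(dim, dim, requires_grad=True)  # learned bilinear scorer
context = torch.randn(batch, dim)              # summary of past frames
future = torch.randn(batch, dim)               # encoding of a future frame
print(info_nce(context, future, W).item())
```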

Page 49:

The future: masked reconstruction

• BERT is a recent SOTA model for sentence representation learning

• Mask the inputs:
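A sketch of carrying the masking idea over to speech features: hide random time spans of a filterbank and train a network to fill them in (span length, count, and the squared-error loss are assumptions):

```python
import torch

def mask_spans(features, span=10, n_spans=2):
    """features: (batch, time, mels). Zero random time spans; return the
    corrupted input plus a mask marking what must be reconstructed."""
    masked = features.clone()
    mask = torch.zeros_like(features, dtype=torch.bool)
    for b in range(features.size(0)):
        for _ in range(n_spans):
            start = torch.randint(0, features.size(1) - span, (1,)).item()
            masked[b, start:start + span] = 0.0
            mask[b, start:start + span] = True
    return masked, mask

x = torch.randn(4, 100, 80)   # 1 s of 80-band filterbanks at 100 Hz
corrupted, mask = mask_spans(x)
# Training would minimize the loss only on the masked positions, e.g.:
# loss = ((model(corrupted) - x)[mask] ** 2).mean()
print(corrupted.shape, mask.float().mean().item())
```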

Page 50:

Thank you!

• Questions?

Page 51:

Backup

Page 52:

ELBO Derivation pt. 1

Page 53:

ELBO derivation pt. 2
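The derivations on these two backup slides are equation images that did not survive extraction; a standard version, consistent with the ELBO slide earlier, is:

```latex
\begin{align*}
\log p(x)
  &= \mathbb{E}_{z\sim q(z|x)}\left[\log p(x)\right]
   = \mathbb{E}_{z\sim q(z|x)}\left[\log \frac{p(x,z)}{p(z|x)}\right] \\
  &= \mathbb{E}_{z\sim q(z|x)}\left[\log \frac{p(x,z)}{q(z|x)}\right]
   + \underbrace{\mathrm{KL}\left(q(z|x)\,\|\,p(z|x)\right)}_{\ge 0} \\
  &\ge \mathbb{E}_{z\sim q(z|x)}\left[\log \frac{p(x|z)\,p(z)}{q(z|x)}\right]
   = \mathbb{E}_{z\sim q(z|x)}\left[\log p(x|z)\right]
   - \mathrm{KL}\left(q(z|x)\,\|\,p(z)\right).
\end{align*}
```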