
Page 1

New and emerging applications of speech synthesis

Junichi Yamagishi
The Centre for Speech Technology Research

University of Edinburgh

www.cstr.ed.ac.uk

Speech synthesis seminar series 9th February 2011


1

Page 2

The approach taken in this talk

• Instead of
  • focussing on techniques which improve the quality of synthetic speech
• we will structure the tutorial around
  • scientifically and commercially novel and interesting technologies and applications
• and in doing this, we will
  • describe the underlying techniques
  • try to place speech synthesis on a more scientific basis

• Of course, quality is not a solved problem - it’s just not the focus of this tutorial

2

Page 3

Current, new and emerging applications

• Currently, synthesis is an "optional extra" allowing text to be read out loud
  • Accessibility, screenreaders
  • Telephone services, dialogue systems
  • Basic (and rather boring) e-Book readers
  • In-car navigation
  • Basic voice communication aids for people with disabilities
• But none of these seem to be 'killer applications'
• New and emerging applications: focus on the voice, not the text
  • Voice cloning
  • Voice reconstruction for people with disordered speech
  • Personalised speech-to-speech translation
  • Noise-adaptive speech synthesis

3

Page 4

Text-to-Speech (TTS)

• Definition: a text-to-speech system must be
  • able to read any text
  • intelligible
  • natural sounding
• Methods available
  • Model-based
    • simplified model of speech production (e.g., the Klatt vocal tract model)
    • typically driven by rules
  • Concatenative
    • cut & paste recorded examples of real speech: unit selection
  • Vocoder + statistical parametric model
    • functional model of the speech signal
    • driven by a statistical parametric model (e.g., an HMM)

4

Page 5

History of TTS

[Timeline figure]
• Model-based (parametric) strand: 1970s, formant synthesis
• Unit-based (concatenative) strand: 1980s-90s, diphone synthesis; 1990s-2000s, unit selection (improved quality)
• 2000s: HMM-based synthesis (improved control)
(Also: articulatory synthesis)

5

Page 6

Scientific basis

• Current TTS uses very limited knowledge of human
  • speech production
    • explicit (physical) models of production are still inadequate
    • but there are other ways to make use of production knowledge
    • deeper models could enable better generalisation
  • speech perception
    • essential, but almost entirely absent from current systems
    • after all, the synthetic speech is intended to be heard
• Interplay with related fields: speech perception & production, animation, ...
  • articulatory-controllable statistical speech synthesis
  • TTS adaptive to listener & environment without requiring new data

6

Page 7

Contents

• Background
  • a brief introduction to HMM-based speech synthesis
  • basic principles of adaptation
• Applications
  1. Voice cloning
  2. Voice reconstruction
  3. Personalised speech-to-speech translation
  4. Articulatory-controllable speech synthesis

7

Page 8

Contents

• Background
  • a brief introduction to HMM-based speech synthesis
  • basic principles of adaptation
• Applications
  1. Voice cloning
  2. Voice reconstruction
  3. Personalised speech-to-speech translation
  4. Articulatory-controllable speech synthesis

8

Page 9

A brief introduction to HMM-based speech synthesis

9

Page 10

Text-to-speech synthesis

• Text to speech
  • input: text
  • output: a waveform that can be listened to
• Two main components
  • front end: analyses the text and converts it to a linguistic specification
  • waveform generation: converts the linguistic specification to speech

10

Page 11

From words to linguistic specification

Example: "the cat sat"
• syntactic structure: ((the cat) sat)
• part-of-speech tags: DET NN VB
• prosodic events: phrase initial, pitch accent, phrase final
• phone sequence: sil dh ax k ae t s ae t sil
• resulting specification: sil^dh-ax+k=ae, "phrase initial", "unstressed syllable", ...

11

Page 12

From linguistic specification to speech

[Figure: the phone sequence # dh ax k ae t s ae t # realised as a waveform]

Concatenate pre-recorded speech, or drive a vocoder with a statistical model.

12

Page 13

Flattening the linguistic specification: attaching all features to the segment level

[Figure: phrases, words and syllables, together with pitch accent and boundary tone events, all attached to the phone-level segments P P P ...]

13

Page 14

Flattened specification: context-dependent phones

pau^pau-pau+ao=th@x_x/A:0_0_0/B:x-x-x@x-x&x-x#x-x$.....pau^pau-ao+th=er@1_2/A:0_0_0/B:1–1–2@1–2&1–7#1–4$.....pau^ao-th+er=ah@2_1/A:0_0_0/B:1–1–2@1–2&1–7#1–4$.....ao^th-er+ah=v@1_1/A:1_1_2/B:0–0–1@2–1&2–6#1–4$.....th^er-ah+v=dh@1_2/A:0_0_1/B:1–0–2@1–1&3–5#1–3$.....er^ah-v+dh=ax@2_1/A:0_0_1/B:1–0–2@1–1&3–5#1–3$.....ah^v-dh+ax=d@1_2/A:1_0_2/B:0–0–2@1–1&4–4#2–3$.....v^dh-ax+d=ey@2_1/A:1_0_2/B:0–0–2@1–1&4–4#2–3$.....

“Author of the ...”
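To make the label format concrete, here is a minimal Python sketch (my illustration, not from the talk) that pulls the quinphone out of an HTS-style full-context label; the regular expression assumes the conventional ll^l-c+r=rr@ pattern visible above.

```python
import re

# Quinphone part of an HTS-style full-context label:
# left-left ^ left - centre + right = right-right @ ...
QUINPHONE = re.compile(r"^(.+?)\^(.+?)-(.+?)\+(.+?)=(.+?)@")

def quinphone(label: str):
    """Return (ll, l, c, r, rr) from a full-context label string."""
    m = QUINPHONE.match(label)
    if m is None:
        raise ValueError(f"not a full-context label: {label!r}")
    return m.groups()

# Example taken from the "Author of the ..." labels on this slide
print(quinphone("pau^ao-th+er=ah@2_1/A:0_0_0/B:1-1-2@1-2&1-7#1-4"))
# -> ('pau', 'ao', 'th', 'er', 'ah')
```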

14

Page 15

Too many model parameters

• The model is highly context dependent
  • as a consequence, many (in fact, most) model parameters will not have any corresponding training data
• The standard solution to this comes from ASR
  • share parameters across similar contexts
  • but how do we determine which contexts are similar?
• Decision-tree based context clustering (see the sketch below)
  • Cluster together groups of parameters which have the same values for some subset of the linguistic specification (i.e., their contexts are similar)
  • Allows us to create parameters for contexts never seen in the training data
  • Automatically scales the model complexity (number of free parameters) to suit the available data
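A toy Python sketch of the greedy splitting step (my own illustration under simplifying assumptions: diagonal Gaussians summarised by sufficient statistics; this is not the HTS implementation):

```python
import numpy as np

def node_loglik(occ, mean, var):
    # Log-likelihood of pooled data under one diagonal Gaussian, up to constants.
    return -0.5 * occ * np.sum(np.log(var))

def pool(states):
    # Pool per-state occupancy/mean/variance statistics into one Gaussian.
    occ = sum(s["occ"] for s in states)
    mean = sum(s["occ"] * s["mean"] for s in states) / occ
    ex2 = sum(s["occ"] * (s["var"] + s["mean"] ** 2) for s in states) / occ
    return occ, mean, ex2 - mean ** 2

def best_question(states, questions):
    # Pick the yes/no context question with the largest log-likelihood gain.
    base = node_loglik(*pool(states))
    best = None
    for q in questions:
        yes = [s for s in states if q["test"](s["context"])]
        no = [s for s in states if not q["test"](s["context"])]
        if not yes or not no:
            continue
        gain = node_loglik(*pool(yes)) + node_loglik(*pool(no)) - base
        if best is None or gain > best[0]:
            best = (gain, q["name"], yes, no)
    return best

# Toy usage: two contexts differing in the centre phone.
states = [
    {"occ": 10, "mean": np.array([1.0]), "var": np.array([0.5]),
     "context": "sil^dh-ax+k=ae"},
    {"occ": 12, "mean": np.array([3.0]), "var": np.array([0.4]),
     "context": "sil^dh-k+ae=t"},
]
questions = [{"name": "C-Vowel?", "test": lambda c: "-ax+" in c}]
print(best_question(states, questions)[:2])   # (gain, "C-Vowel?")
```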

15

Page 16

Decision tree context clustering

[Figure: decision tree with yes/no context questions such as "vowel", "L-unvoiced plosive", "R-silence", "nasal", "voiced plosive", "semivowel", "L-silence", "L-voiced plosive" and "L-semivowel", leading to tied leaf distributions]

16

Page 17

HMM-based speech synthesis

• The front end processes text and produces a linguistic specification
• The linguistic specification is flattened: a sequence of context-dependent phonemes
• HMMs are used to generate sequences of speech (in a parameterised form which we call 'speech features')
• From the parameterised form, we can generate a waveform using a vocoder
• The parameterised form contains sufficient information to generate speech:
  • spectral envelope
  • fundamental frequency (F0), sometimes called 'pitch'
  • aperiodic (noise-like) components (e.g. for sounds like 'sh' and 'f')

17

Page 18

HMMs are generative models

18

Page 19

Trajectory generation

[Figure: a speech parameter trajectory generated over time from the HMM]
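The generation step has a closed-form solution, the classic parameter-generation result (stated here from general knowledge of the method, not from the slide itself): stack the static features c into observations o = Wc via the window matrix W that appends delta features, and maximise the likelihood of the chosen state sequence with concatenated means μ and covariances Σ. This gives

\[ \hat{c} = \left( W^{\top} \Sigma^{-1} W \right)^{-1} W^{\top} \Sigma^{-1} \mu \]

It is the delta-feature constraints in W that make the generated trajectory smooth rather than a piecewise-constant readout of the state means.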

19

Page 20

Comparison with ASR

• Differences from automatic speech recognition include

• Synthesis uses a much richer model set, with a lot more context
  • For speech recognition: triphone models
  • For speech synthesis: "full context" models

• “Full context” = both phonetic and prosodic factors

• Observation vector for HMMs contains the necessary parameters to generate speech using a vocoder, such as spectral envelope, F0 & multi-band aperiodic energy amplitudes

20

Page 21

Comparison with unit-selection synthesis

• System construction
  • Training and optimising an HMM-based speech synthesiser is
    • almost entirely automatic
    • based on objective measures (maximum likelihood criterion, minimum cepstral distortion, etc.)
  • Optimising a unit selection system (e.g., choosing target cost weights) is usually
    • done by trial and error
    • based on subjective measures (listening)
• Data
  • Unit selection needs 5,000-10,000+ sentences of data from one speaker
  • Training a speaker-dependent HMM needs 1,000+ sentences
  • Adapting a speaker-independent HMM needs 1-100 sentences

21

Page 22

Latest Samples

English sample | Romanian sample | Spanish sample

• 48 kHz sampling frequency
• STRAIGHT Bark cepstrum, pitch in mel, Bark-critical-band-limited aperiodicity measures, mixed excitation plus group delay phase manipulation
• Context-dependent, state-tied MSD-HSMM with semi-tied covariance

22

Page 23

Text-to-speech: multidisciplinary research

• Signal processing: spectral analysis (STRAIGHT), spectral modelling, F0 extraction, source/filter separation, glottal modelling, vocoder, etc.
• Statistical modelling: acoustic modelling, clustering, adaptation, etc.
• Natural language processing: lexicon, chunking, POS tagging, phrase break prediction, syntax analysis, etc.

23

Page 24

Contents

• Background
  • a brief introduction to HMM-based speech synthesis
  • basic principles of adaptation
• Applications
  1. Voice cloning
  2. Voice reconstruction
  3. Personalised speech-to-speech translation
  4. Articulatory-controllable speech synthesis

24

Page 25

The basic principles of adaptation

Most common use: speaker adaptation

25

Page 26

Speaker adaptation

• One of the most important recent developments in speech recognition
• A linear transform is applied to every HMM parameter (Gaussian mean and variance) in order to adapt the model to new data
• Can be used to create new voices for speech synthesis:
  • Train an HMM on lots of data from multiple speakers
  • Adapt this HMM using a small amount of data from the target speaker
    • 100 sentences is usually enough
• This is a very exciting development in speech synthesis
  • Provided data are available, any other acoustic difference can be adapted
    • speaking style/emotion
    • dialect/accent
    • speaker identity across languages
    • and so on...
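In symbols, the linear transforms meant here can be written explicitly; these are the standard formulations from the adaptation literature (general knowledge, not copied from a slide). MLLR moves each Gaussian mean with a regression-class transform, while constrained MLLR (CMLLR) applies the same transform to mean and covariance, which is equivalent to transforming the features:

\[ \text{MLLR:}\quad \hat{\mu}_i = A_k \mu_i + b_k \]
\[ \text{CMLLR:}\quad \hat{\mu}_i = A_k \mu_i + b_k, \qquad \hat{\Sigma}_i = A_k \Sigma_i A_k^{\top} \]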

26

Page 27

[Figure: average-voice framework. Train: speech and labels from many speakers (awb, clb, rms, ...) produce an average voice model. Adapt: speech and labels from the target speaker (bdl) yield speaker transforms, with a Recognise step available when labels are not given. Synthesise: test-sentence labels plus the adapted model produce synthetic speech.]

27

Page 28

ML-based linear regression

[Figure: in acoustic space, a regression class tree (with an occupancy threshold) groups the Gaussians of the average voice model into classes 1-3; each class shares one transformation function mapping the average voice model to the target speaker model. MLLR transforms the mean vector of each Gaussian pdf; CMLLR applies a constrained transform to both the mean vector and the covariance matrix.]

28

Page 29

MAP-based linear regression

[Figure: as in the ML case, a regression class tree defines transformation functions from the average voice model to the target speaker model, but they are estimated with structural MAP criteria and priors propagated from parent to child nodes (prior propagation): SMAP, SMAPLR and CSMAPLR variants act on the mean vectors (and, for CSMAPLR, the covariance matrices) of the Gaussian pdfs.]

29

Page 30

Comparative study of linear transforms

[Figure: RMSE of log F0 (cent, about 258-274) and mel-cepstrum distance (dB, about 4.8-6) as functions of the amount of adaptation data (0-100 utterances), comparing MLLR and CMLLR (ML criterion) with SMAPLR and CSMAPLR (structural MAP criterion)]

30

Page 31

MLLR followed by MAP

[Figure: linear regression (MLLR/SMAPLR, CMLLR/CSMAPLR) from the average voice model to the target speaker's model, followed by MAP adaptation of individual Gaussians. Against the number of adaptation sentences (0-100), SMAPLR+MAP improves on SMAPLR alone in both mel-cepstral distance (dB) and RMSE of log F0 (cent), approaching the speaker-dependent (SD, 450 sentences) results of 4.79 dB and 270 cent; preference scores (%) compare SMAP:BIAS, SMAPLR and SMAPLR+MAP.]

31

Page 32

The VTLN prior for CSMAPLR

[Figure: a regression class tree over the average voice model (questions such as "voiced sound", "vowel", "devoiced vowel", with an occupancy threshold) defines transformation functions f_k to the target speaker's model, with hyperparameters propagated as priors. The prior at the root node is either the identity or a VTLN transform; child nodes are estimated from as little as 5, 2 or 1 sentences.]

For Gaussian pdf i, with mean vector \( \mu_i \) and covariance matrix \( \Sigma_i \), the transform families are:

\[ \text{SMAP:BIAS:}\quad \hat{\mu}_i = \mu_i + b_k \]
\[ \text{SMAPLR:}\quad \hat{\mu}_i = A_k \mu_i + b_k \]
\[ \text{CSMAPLR:}\quad \hat{\mu}_i = A_k \mu_i + b_k, \qquad \hat{\Sigma}_i = A_k \Sigma_i A_k^{\top} \]

VTLN prior:
• EM estimation of an optimal warping parameter for VTLN (all-pass filter)
• Represent the estimated VTLN function as a CMLLR feature-space matrix
• Use the VTLN-CMLLR matrix as the prior for the MAP estimation of CMLLR matrices at the root node
• Structural recursive MAP estimation of CMLLR matrices at lower child nodes
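For reference, the all-pass warping referred to above is usually the first-order bilinear transform; this formula is general knowledge about VTLN rather than something taken from the slide:

\[ \tilde{z}^{-1} = \frac{z^{-1} - \alpha}{1 - \alpha\, z^{-1}}, \qquad |\alpha| < 1 \]

Because this frequency warping acts linearly on cepstral coefficients, the estimated warp can be written exactly as a CMLLR-style matrix, which is what allows it to serve as the prior for the root-node CMLLR transform.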

32

Page 33

Speaker adaptation techniques for TTS

Differences between ASR adaptation and TTS adaptation

33

Page 34

Simultaneous adaptation of spectrum, fundamental frequency and duration

[Figure: comparison category rating (CCR) test of simultaneous adaptation of spectrum, F0 and duration, on a scale from 1 ("dissimilar") to 5 ("similar"). Conditions: SD, average voice, and spectrum adaptation with +F0 and +duration added (scores in the 1.5-3.6 range). Setup: 8 subjects, 5 test sentences, 100 adaptation sentences.]

34

Page 35

Duration adaptation

Application to hidden semi-Markov models (HSMMs) [S.E. Levinson '86]

[Figure: a 3-state HSMM, i.e. an HMM with explicit state-duration probabilities]

A state's duration pdf is adapted through a linear transform of duration:

\[ p_i(d) = |a|\, \mathcal{N}(ad + b;\ \mu_i, \sigma_i^2) \]

SAT for duration pdfs: (1) a linear transform of duration; (2) inverse estimation of the parameters of the duration pdfs.

35

Page 36

Comparison with a simple duration warping method

[Figure: RMSE of vowel duration (frames, about 4.6-6.2) against the number of adaptation sentences (0-450), comparing the average voice, MLLR duration adaptation and the simple duration warping method]

36

Page 37

The use of context decision trees as regression class trees

[Figure: part of the F0 decision tree (questions such as "voiced", "glottal", "vowel", "devoiced vowel", "nasal", "following phoneme is voiced", "last mora of a phrase") reused as the regression class tree, with generated F0 contours (about 50-200 Hz) of a Japanese utterance /I-RA-RE-NA-I/ in reading and joyful styles]

37

Page 38

A few examples of adaptation

• An American English female average voice adapted to a 7-year-old girl
  • Average voice
  • Synthetic speech generated from the adapted models
• An American English male speakers' average voice adapted to an Indian English speaker
  • Average voice
  • Synthetic speech generated from the adapted models
• Adaptation from neutral style to anger, with gradual linear interpolation

38

Page 39

Contents

• Background
  • a brief introduction to HMM-based speech synthesis
  • basic principles of adaptation
• Applications
  1. Voice cloning
  2. Voice reconstruction
  3. Personalised speech-to-speech translation
  4. Articulatory-controllable speech synthesis

39

Page 40

Voice cloning

• Celebrity voices
• A virtually unlimited number of voices
• Voice banking

40

Page 41

Voice cloning

• What do we mean by 'voice cloning'?
  • automatically creating synthetic voices from relatively small amounts of recordings
• How is it different from conventional voice building?
  • less data
  • lower-quality recordings
  • non-professional speakers
  • less (or no) manual intervention
• What's the point?
  • fully automatic voice creation
  • cheap mass production of voices
  • a huge variety of voices is possible using existing data

41

Page 42

Application 1 – Celebrity voices

Speech data can be acquired from broadcasts, podcasts, lectures and the telephone. Synthetic speech samples created in this scenario:

• George W Bush: synthetic speech generated from HMMs adapted using speech data found in his podcasts
  Sample | Real-time demo [web]
• Queen Elizabeth II: synthetic speech generated from HMMs adapted using speech data found in her podcasts
  Sample

42

Page 43

Technical problems behind this app

• The technical problems of speech synthesis using found audio (e.g. for celebrity voices) are conceptually similar to those of LVCSR
  • VAD
  • Noise reduction (spectral subtraction, noise gate, etc.)
  • Unsupervised speaker adaptation
  • etc.

43

Page 44

Unsupervised adaptation systems [for BL 2009]

• Multi-pass architecture for unsupervised adaptation using the AMI RT06 LVCSR system
• LVCSR
  • P1: VAD and speaker diarization, followed by initial decoding using a WFST decoder
  • P2: VTLN and calculation of posterior/tandem features
  • P3: CMLLR estimation, followed by decoding using the MPE/VTLN/SAT model
  • P4: lattice expansion (2-gram to 4-gram)
  • P5: CMLLR estimation using the rescored 1-best hypothesis
  • P6: confusion network built from word posteriors
• TTS
  • P7: pruning based on confidence scores calculated from the CN
  • P8: CMLLR/CSMAPLR estimation using the TTS ML/SAT model

44

Page 45

ASR WERs

45

Page 46

Pruning based on confidence scores calculated from confusion network

[Figure: histogram of sentence-level average confidence scores (0.5-1.0) for the Arctic100, Arctic and Herald data sets]

46

Page 47

Audio samples


47

Page 48

Subjective scores of synthetic speech

[Bar chart: MOS of synthetic speech built on WSJCAM0: supervised adaptation 2.7, unsupervised adaptation after P1 2.3, unsupervised adaptation after the final pass 2.5]

48

Page 49

Another solution to improve the quality: the use of larger average voice models

[Bar chart: MOS improves with the amount of average-voice training data: RM (5h) 2.2, WSJ0 (15h) 2.6, WSJCAM0 (22h) 2.7, WSJ0+WSJ1 (81h) 2.8]

49

Page 50

Application 2 – Virtually unlimited number of speakers

• TTS systems having unlimited number of speakers • eBook reader: select voices depending on the character, subject, style, ..

• ASR corpora typically contain a large number of speakers • Good domain in which to demonstrate the impact of this scenario

• ASR corpora we have used so far• English: Resource management (RM), Wall-street journal WSJ0, WSJ1, &

Cambridge version of WSJ0 (WSJCAM0)• Spanish: Globalphone• Mandarin: Speecon (Speech databases for consumer devices)• Finnish: Speecon• Japanese: JNAS (Japanese newspaper article sentences)

50

Page 51

Geographical view of many voices

51

Page 52

Online TTS with the largest number of voices in the world (we think...)

Total number of voices:
• English: 675
• Spanish: 106
• Mandarin: 500
• Finnish: 203
• Japanese: 106
• Total: 1,590

52

Page 53

Technical problems behind these apps

• Speaker adaptive training
• Selection of average voice models
• How to select good data (from low-quality sources)
• Vulnerability of speaker verification systems to voice cloning

53

Page 54

“Vocal Terror” [BBC 2007]

54

Scientists warn of 'vocal terror'
By Liz Seward, Science reporter, York

Computers could mimic human speech so perfectly that vocal terrorism could be a new threat in 10-15 years' time, scientists suggest.

In the future, it may be possible to mimic someone's voice exactly after recording just one sentence.

Such technologies would pose a danger if it were not possible to verify who was speaking, researchers believe.

Scientists were predicting the future at the British Association (BA) Festival of Science in York.

Dr David Howard from the University of York said: "The reason things are changing is because no longer are we using an acoustic model proposed in the 1950s."

"It's not scaremongering; it's trying to say to people, 'we have to think about these things'" - David Howard

New methods of creating computerised speech use models of a vocal tract to create a realistic sound, replacing the existing technique of copying sounds.

"We are beginning to simulate virtual vocal tracts in the computer," said Dr Howard.

"When we simulate this in the computer, which we are beginning to do, we begin to get sounds that musicians describe as organic or more natural.

"If we get to the point where we are synthesising the actual shape of somebody('s vocal tract) based on analysis of their speech, then the speech we are producing should sound and look like the actual person."

'Not scaremongering'

Worrying scenarios envisaged by the researchers included a phone call, apparently from your bank manager, requesting you to confirm details of your account.

If the call actually came from a computer able to mimic the bank manager's voice flawlessly, your account could then be emptied by the people operating the computer.

Fraudsters are already making this kind of call; but the new technology could make them much more convincing.

It might become easier to make prank calls as well.

The terrorism aspect would come in if the technology were used for more malicious purposes, such as someone taking over a communications network for a country and broadcasting a speech apparently from the country's leader.

"This gives rise to this notion of what I call vocal terrorism as a possible scenario in the future, which I'm suggesting one should be thinking about now and not thinking about when it happens.

"It's not scaremongering; it's trying to say to people, 'we have to think about these things'," he said.

Dr Howard made these predictions to challenge the view of young people on social responsibility.

Page 55

Speaker verification vs Text-to-speech

• Voice cloning could be used for "breaking into" text-prompted speaker verification systems
• Attack scenario assumed:
  • Speech data is acquired from broadcasts, podcasts, lectures, telephone
  • Using the acquired speech data, adapt HMM-based TTS systems in advance
  • Using the adapted models, synthesise speech for the verification prompts
• How easily can HTS voices break into a speaker verification system?

55

Page 56

GMM-UBM speaker verification system

- GMM-UBM
  - 1024 components
- Features
  - 15 MFCCs, 15 Δ-MFCCs, log-energy, Δ log-energy
  - Feature warping to improve robustness [J. Pelecanos and S. Sridharan]
- Adaptation
  - MAP adaptation (mean vectors only)
- Performance on the NIST 2002 corpus
  - 330 speakers
  - 12.10% EER
  - Comparable performance with [C. Longworth and M. Gales 2009]
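As a concrete (and much simplified) illustration of GMM-UBM scoring, here is a hedged Python sketch using scikit-learn: it trains a UBM, mean-only "adapts" it toward enrolment data with a toy relevance-MAP rule, and scores trials by average log-likelihood ratio. This is a generic textbook recipe on random stand-in features, not the system evaluated here (no MFCCs, no feature warping).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ubm_data = rng.normal(0.0, 1.0, size=(5000, 8))      # background "features"
spk_data = rng.normal(0.4, 1.0, size=(500, 8))       # enrolment speaker

# 1. Train the universal background model (UBM).
ubm = GaussianMixture(n_components=16, covariance_type="diag",
                      random_state=0).fit(ubm_data)

# 2. Relevance-MAP adaptation of the component means only (Reynolds-style).
def map_adapt_means(ubm, data, r=16.0):
    post = ubm.predict_proba(data)                   # frame-component posteriors
    n_k = post.sum(axis=0)                           # soft counts per component
    ex_k = post.T @ data / np.maximum(n_k[:, None], 1e-10)
    alpha = (n_k / (n_k + r))[:, None]               # data/prior interpolation
    return alpha * ex_k + (1.0 - alpha) * ubm.means_

spk = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
spk.weights_, spk.covariances_ = ubm.weights_, ubm.covariances_
spk.precisions_cholesky_ = ubm.precisions_cholesky_
spk.means_ = map_adapt_means(ubm, spk_data)

# 3. Score a trial: average per-frame log-likelihood ratio vs the UBM.
trial = rng.normal(0.4, 1.0, size=(300, 8))
llr = spk.score(trial) - ubm.score(trial)            # score() = mean log-lik
print("accept" if llr > 0.0 else "reject", llr)
```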

56

Page 57

Experiments – Data

- Our scenario is not building TTS systems on speaker verification databases, which are normally narrowband and noisy
- Wall Street Journal corpora (WSJ0 and WSJ1)
  - 283 speakers (included in the SI-284 set)
  - Divide the SI-284 speaker material into 3 sets: A, B and C
  - Set A: TTS training data
    - Training of average voice models
    - Speaker adaptation (CMLLR) to individual speakers
  - Set B: SV training data
    - Training of the universal background model
    - Speaker adaptation (MAP) to individual speakers
  - Set C: test data (30 sec/speaker)
    - Assumed to be speech reading the text prompts used for verification
- Samples of synthetic speech created

57

Page 58

Experiments – Performance of SV systems

• Detection error tradeoff (DET) curve for human speech
• The equal error rate is 0.4% (speaker verification of human speech on the WSJ corpus is relatively easy)

[Figure: DET curve, miss detection probability vs false alarm probability, both 0.1-0.7%]

58

Page 59

Experiments – Human speech vs. Synthetic speech

[Figure: score distributions (pdf vs log-likelihood ratio, about -4000 to 8000) for human speech (true claimant), human speech (imposter), synthesized speech (matched claimant) and synthesized speech (imposter)]

• Score distributions of human speech and synthetic speech
• In matched-claimant tests (synthesized voices claiming to be their human counterparts), about 90% of synthetic speech claims were accepted!

59

Page 60

How about SVM?

• Worse!! It accepts even more claims from synthetic speech.

                                          GMM-UBM     SVM using GMM supervectors
  EER (human speech)                      0.35%       0.35%
  min DCF (human speech)                  4.00E-03    2.40E-03
  Accepted claims from synthetic speech   91.5%       95.8%

60

Page 61

Can we block the attack?

• YES!
• The current major vocoders are minimum-phase vocoders
  • Human perception is less sensitive to phase differences
  • Phase was assumed not worth modelling
• Hint: phase, phase spectrum, etc.
  • Humans are perceptually less sensitive to phase, but the differences between the phase information of real and current synthetic speech are "visible"
• GMMs trained on an analysis method called "relative phase shift" (RPS)
• Results
  • Human speech correctly classified 95% of the time
  • Synthetic speech correctly classified 88% of the time
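A hedged sketch of such a countermeasure classifier: two-class GMM scoring over RPS features. Everything here is generic (random arrays stand in for a real RPS front end, which would do harmonic phase analysis); it is not the exact system reported above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two-class GMM detector: human vs synthetic, trained on relative-phase-shift
# (RPS) feature vectors. Random data stands in for a real RPS extractor here.
rng = np.random.default_rng(1)
human_rps = rng.normal(0.0, 1.0, size=(2000, 20))      # stand-in features
synth_rps = rng.normal(0.8, 0.7, size=(2000, 20))

gmm_human = GaussianMixture(n_components=32, covariance_type="diag",
                            random_state=0).fit(human_rps)
gmm_synth = GaussianMixture(n_components=32, covariance_type="diag",
                            random_state=0).fit(synth_rps)

def is_human(rps_frames):
    """Classify an utterance by average log-likelihood under each GMM."""
    return gmm_human.score(rps_frames) > gmm_synth.score(rps_frames)

test = rng.normal(0.8, 0.7, size=(300, 20))            # synthetic-like input
print("human" if is_human(test) else "synthetic")      # expected: synthetic
```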

61

Page 62

Application 3 – Voice banking

• If people record a small amount of their speech data now, they could have personalised voice communication aids in case of any future vocal problem such as a speech disorder
• CSTR members who have "voice banked"

62

Page 63

Roger Ebert voice by CereProc Ltd. [March 2010]

• Unit selection system built from audio (commentary) included with many of his films

63

Page 64

Clinical trial by the Sheffield group

Patients without vocal chords could have voice restored
Times Online, September 5, 2009

David Rose, Health Correspondent

Patients who have had vocal cords removed could soon get their own voice restored with a synthesised version.

Students and academics at the University of Sheffield have used recordings and sampling technology to reconstruct the voice of Bernadette Chapman, who had a laryngectomy operation to remove her vocal cords after developing cancer.

Voice synthesising technology is already used by some patients who have lost the ability to speak, such as Sir Stephen Hawking, the eminent physicist.

But users have complained that the results sound like a "dalek" and the latest research attempted to recreate speech patterns that come close to sounding like a patient's own voice.

Researchers took recordings of Mrs Chapman's voice prior to the operation and, in collaboration with Edinburgh University's Centre for Speech Technology Research, used a speech synthesis technique to adapt an "average voice model" to sound like the person concerned.

Mrs Chapman's new voice was "built" using about seven minutes of recorded speech, which amounted to 100 sentences. From those samples, the researchers say it is possible to synthesise any sentence by supplying the word sequence.

The technology is not available in a portable communication device, but the researchers say that it is a matter of time before patients with severe speech limitations or impediments may be able to type what they want to say by means of a portable keyboard, or have a computer "translate" for them.

Professor Phil Green, from Sheffield University's computer science department, said: "Your voice is part of your identity, and if this technique can help you to recover it and communicate in a natural way your quality of life could be much improved.

"The technique is still evolving and not yet ready to be installed on a handheld device but that is coming."

Mrs Chapman, from Lincoln, said: "For many years the Servox machine, or artificial larynx, has been the main means of communication for patients after laryngectomy or for those who have had severe speech impairment. The machine tends to sound very like a dalek and can be embarrassing to use, especially in public places."


Chapman's real speech | Chapman's synthetic speech

The HTS voice can be adapted from only 100 sentences (The Times, 2009)

64

Page 65

Contents

• Background
  • a brief introduction to HMM-based speech synthesis
  • basic principles of adaptation
• Applications
  1. Voice cloning
  2. Voice reconstruction
  3. Personalised speech-to-speech translation
  4. Articulatory-controllable speech synthesis

65

Page 66

Voice reconstruction – Personalised voice communication devices for people with vocal disabilities

Credits: Sheffield University; the Euan MacDonald Centre for Motor Neurone Disease Research; the Anne Rowling Regenerative Neurology Clinic

66

Page 67

Neurological conditions or diseases which can result in a vocal pathology

• Amyotrophic Lateral Sclerosis (ALS) / Motor Neurone Disease (MND)
• Autism
• Stroke aphasia
• Cerebral palsy
• Parkinson's disease
• Multiple sclerosis

• MND incidence (new cases per year) is about 2 per 100,000 people
• Across all neuro-degenerative diseases: about 20 million new cases per year

67

Page 68

An interview with a patient with MND

68

Page 69

What do these people use?

• Augmentative and Alternative Communication (AAC)
  • Voice Output Communication Aid (VOCA)
• Some diseases cause problems with the hands as well as the voice
  • so alternative forms of input are required
• Some diseases cause problems only with the voice
  • so 'type to talk' interfaces are possible
• The current VOCAs on the market provide a small and inappropriate range of voices
• We want to provide personalised speech synthesis voices (which sound like the individual users)

69

Page 70

Can we use speaker adaptation? – The ideal and the real

• For people with vocal problems, speech synthesis is not just an optional extra for reading out text, but a critical function for
  • social communication, and
  • personal identity
• We can use speaker adaptation for creating the personalised voices only
  • if their speech was voice-banked beforehand,
  • if they are diagnosed at a very early stage, or
  • if they recorded their speech in acceptably clean conditions by themselves
• However, patients tend to arrive at the clinic *only* after vocal problems are moderate to severe

70

Page 71

What happens if you use speaker adaptation on disordered speech?

• Original data:
  • 3 mins
  • a previously recorded interview made in an office environment
  • the subject already had MND at that point: the speech is already disordered
• Speaker adaptation
  • a voice clone of the disordered speech

71

Page 72

Voice reconstruction

• We should extend our adaptation technique so that it can be applied even if the speech had already become disordered at the time of recording
• Paradox
  • Recover speaker characteristics as much as possible
  • But we do not want to reproduce the symptoms of the vocal problems
• How?
  • Separating speaker characteristics from vocal problems? (a hard problem)
  • Perform 'surgery' on the adapted statistical models
    • fixing the statistical models so that they can generate natural-sounding speech while keeping speaker identity
• From voice cloning to voice reconstruction

72

Page 73

Voice reconstruction using average voice models [S. Creer et al. 2010]

• Some acoustic models are *relatively* less speaker-dependent
• Therefore, substitute the average voice model's
  • duration model
  • GV models (spectrum, log F0)
  • aperiodicity model
  for the corresponding adapted models (a sketch of this substitution follows below)
• Example
  • Substitution using a WSJCAM0 average voice model
• We can still hear problems with 1) the accent and 2) coarticulation
  • Reproduction of the Edinburgh accent is not perfect
  • Bad coarticulation of diphthongs and long vowels
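A minimal sketch of this kind of model surgery, treating a voice as a dictionary of component models (the keys are illustrative only, not a real HTS API):

```python
# Hedged sketch of "model surgery" for voice reconstruction: keep the
# identity-bearing spectral models from the patient-adapted voice, but take
# the components that are less speaker-dependent (and most damaged by the
# disorder) from a healthy average voice.
LESS_SPEAKER_DEPENDENT = ("duration", "gv_spectrum", "gv_logf0", "aperiodicity")

def reconstruct(adapted_voice: dict, average_voice: dict) -> dict:
    """Return a new model set mixing adapted and average-voice components."""
    fixed = dict(adapted_voice)              # start from the patient's clone
    for key in LESS_SPEAKER_DEPENDENT:
        fixed[key] = average_voice[key]      # swap in the healthy component
    return fixed

adapted = {"spectrum": "patient-adapted", "logf0": "patient-adapted",
           "duration": "disordered", "gv_spectrum": "disordered",
           "gv_logf0": "disordered", "aperiodicity": "disordered"}
average = {k: "average-voice" for k in adapted}
print(reconstruct(adapted, average))
```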

73

Page 74

Voice reconstruction using a voice donor

• For coarticulation, we need to "fix" the variances of the static spectral features and the delta features
• However, these are more speaker-specific features
  • Substitution by the average voice model may result in lower similarity
• So substitute (excluding the lower-order part of the static features) from models trained on speakers that match the target patient in terms of
  • age, accent, social class, similar vocal tract shape, etc.
  • a "voice donor"
• Example of a voice donor
  • a close relative

74

Page 75

Advantages of voice donor

• A voice donor can help people with vocal problems simply by speaking
• The donor's speech is also automatically voice banked
  • Some diseases are genetic
  • A good additional motivation for recording a close relative's speech
  • They too could have a personalised voice communication aid in case of any future vocal problems
• Better awareness of the importance of "voice banking" amongst the public

75

Page 76

Conceptual flow: Summary

• Original data:
  • 3 mins
  • a previously recorded interview made in an office environment
  • the subject already had MND at that point: the speech is already disordered
• Speaker adaptation
  • a voice clone of the disordered speech
• Voice reconstruction using a voice donor
  • fixing the statistical models so that they can generate natural-sounding speech while keeping speaker identity
• Voice banking of the voice donor

76

Page 77

A personalised voice output communication device with eye tracking

77

Page 78

Voice banking and voice reconstruction

• Voice banking and voice reconstruction will be carried out at
  • the Euan MacDonald Centre for MND Research
  • the Anne Rowling Regenerative Neurology Clinic (under construction)
  • Barnsley Hospital (MyVOCA project of the University of Sheffield)
• Voice reconstruction: 50 patients per year
• Voice banking and donors: over 50 people per year
• Future work
  • Speaker adaptation using hierarchical average voice models and/or automatic selection of donors
  • Automatic vocal function assessment, followed by automatic model substitution or interpolation

78

Page 79

Contents

• Background
  • a brief introduction to HMM-based speech synthesis
  • basic principles of adaptation
• Applications
  1. Voice cloning
  2. Voice reconstruction
  3. Personalised speech-to-speech translation
  4. Articulatory-controllable speech synthesis

79

Page 80

Personalised speech-to-speech translation using cross-lingual speaker adaptation

80

Page 81

Speech-to-speech translation

Speech-to-speech translation (S2ST) flow:

Input language (L1): "Good morning" → ASR → MT → TTS → Output language (L2): "おはよう"

In the EMIME system, the output-language (Japanese) TTS HMMs are linked to the input-language (English) HMMs by CLSA: cross-lingual speaker adaptation.

81

Page 82

Cross-lingual adaptation based on state-mapping

Clustering

Context Dependent HMMs

Yes No

NoYes

The current phoneme is voiced

Next phoneme is voiced

Clustering

Context Dependent HMMs

Yes No

NoYes

The current phoneme is voiced

Next phoneme is voiced

transform function 2transform function 1

transform function 3
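A hedged Python sketch of the state-mapping idea (a generic nearest-state search under a divergence; the real systems operate on the decision-tree leaf states of full HMM sets): for every state of the output-language model, find the closest input-language state and reuse that state's adaptation transform.

```python
import numpy as np

# Hedged sketch of KLD-based state mapping for cross-lingual adaptation.
# Each state is a diagonal Gaussian (mean, var).

def sym_kld_diag(m1, v1, m2, v2):
    """Symmetrised KL divergence between two diagonal Gaussians."""
    kl12 = 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
    kl21 = 0.5 * np.sum(np.log(v1 / v2) + (v2 + (m2 - m1) ** 2) / v1 - 1.0)
    return kl12 + kl21

def map_states(out_states, in_states):
    """For each output-language state, index of the nearest input-language state."""
    mapping = []
    for mo, vo in out_states:
        d = [sym_kld_diag(mo, vo, mi, vi) for mi, vi in in_states]
        mapping.append(int(np.argmin(d)))
    return mapping

# Toy example: 2 Japanese states, 3 English states.
jp = [(np.array([0.0, 1.0]), np.ones(2)), (np.array([2.0, 0.0]), np.ones(2))]
en = [(np.array([0.1, 0.9]), np.ones(2)), (np.array([1.9, 0.1]), np.ones(2)),
      (np.array([5.0, 5.0]), np.ones(2))]
print(map_states(jp, en))   # -> [0, 1]: each JP state borrows that EN transform
```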

82

Page 83

Details of the symmetrized-KLD-based state-similarity measure

Definition of the problem of KLD-based state mapping:
• an average voice model for the output language
• an average voice model for the input language

Define the symmetrized KLD between states i and j, where the μ are the mean vectors and the Σ the covariance matrices of the average voice models for the input and output languages.
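The formula itself did not survive extraction; for Gaussian states it takes the standard form (stated from general knowledge, consistent with the definitions above):

\[ D(i,j) = D_{\mathrm{KL}}\big(\mathcal{N}(\mu_i,\Sigma_i)\,\big\|\,\mathcal{N}(\mu_j,\Sigma_j)\big) + D_{\mathrm{KL}}\big(\mathcal{N}(\mu_j,\Sigma_j)\,\big\|\,\mathcal{N}(\mu_i,\Sigma_i)\big) \]

where, for d-dimensional Gaussians,

\[ D_{\mathrm{KL}}\big(\mathcal{N}(\mu_i,\Sigma_i)\,\big\|\,\mathcal{N}(\mu_j,\Sigma_j)\big) = \tfrac{1}{2}\Big[ \operatorname{tr}\big(\Sigma_j^{-1}\Sigma_i\big) + (\mu_j-\mu_i)^{\top}\Sigma_j^{-1}(\mu_j-\mu_i) - d + \ln\tfrac{|\Sigma_j|}{|\Sigma_i|} \Big] \]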

83

Page 84

English-to-Japanese adaptation: experimental conditions

English ASR
  Database: WSJ0 (15 hours, 84 speakers)
  Sampling rate: 16 kHz
  Analysis window: 25 ms Hamming window
  Frame shift: 10 ms
  Acoustic feature: 13th-order PLP + ...

English TTS
  Database: WSJ0 (15 hours, 84 speakers)
  Sampling rate: 16 kHz
  Analysis window: 25 ms Hamming window
  Frame shift: 5 ms
  Acoustic feature: 39th-order STRAIGHT mel-cepstrum + log F0 + 5 band-filtered aperiodicity measures + ...

Japanese TTS
  Database: JNAS (19 hours, 86 speakers)
  Sampling rate: 16 kHz
  Analysis window: 25 ms Hamming window
  Frame shift: 5 ms
  Acoustic feature: 39th-order STRAIGHT mel-cepstrum + log F0 + 5 band-filtered aperiodicity measures + ...

84

Page 85

Vowel comparisons

[Figure: two F1/F2 formant plots (F1 about 300-800 Hz, F2 about 500-2000 Hz): Japanese vowels (i, e, a, o) obtained by cross-lingual adaptation vs General American English phonemes]

85

Page 86

Demo 1: WSJ0 speaker 001

Samples: average voice; adapted with 5 sentences; 50 sentences; 2,000 sentences; target speaker

86

Page 87

Demo 2: BBC presenter James Cook

Original speech | English TTS adapted to James | Japanese TTS adapted to James (Sample 1, Sample 2)

• Ideal conditions
  • speech data recorded in a controlled recording studio
  • oracle ASR and MT
• But the quality of the Japanese voice is limited by the lack of speech databases recorded at higher sampling rates

87

Page 88

Contents

• Background
  • a brief introduction to HMM-based speech synthesis
  • basic principles of adaptation
• Applications
  1. Voice cloning
  2. Voice reconstruction
  3. Personalised speech-to-speech translation
  4. Articulatory-controllable speech synthesis

88

Page 89

Articulatory-controllable statistical speech synthesis

Credits: Zhenhua Ling (University of Science and Technology of China) & Korin Richmond (CSTR) ; LISTA project

89

Page 90

Motivation

• Speaker adaptation requires "too much" adaptation data
  • because it is a shallow method
  • adaptation takes place at a surface level (features or model parameters)
  • no 'deep model' underlies the adaptation process
  • just a non-linear transform of the whole feature (or model) space
• How about speech modification without requiring new speech data?
  • perhaps based on other information, such as articulation, listener characteristics, environment, ...
  • our starting point: knowledge of speech production, in the form of articulatory measurement data

90

Page 91

Articulatory data used

• Male native British English (RP accent) speaker
• 1,263 phonetically balanced utterances
• 7 articulatory points: UL, LL, LI, TT, TB, TD
• Carstens 3D electromagnetic articulograph
• Audio suitable for speech synthesis

91

Page 92

Measurement points (coil locations)

92

Page 93

Introducing articulation into the HMM

• Model the joint distribution of acoustic and articulatory parameters
• Acoustic distributions (for spectral parameters) are dependent on articulation
• Dependency = linear transform
• No loss of quality
• Note: the articulatory function can then be used to modify the speech

[Figure: the acoustic-only HMM vs the EMA + acoustic joint model, in which the acoustic observation y_t depends on the articulatory features]
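One way to write the dependency named in the bullets (my notation, following the general form of joint acoustic-articulatory HMMs rather than a formula on the slide): for state q, articulatory features x_t and spectral features y_t,

\[ p(y_t \mid x_t, q) = \mathcal{N}\big(y_t;\ A_q x_t + \mu_q,\ \Sigma_q\big) \]

so each state carries a linear map A_q from articulation to acoustics, and shifting a generated articulatory trajectory (e.g. raising the tongue) moves the generated spectrum through that map.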

93

Page 94

Articulatory modification of a vowel

94

Page 95

Change tongue height = change the vowel

[Figure: samples of "set", "peck" and "led" with the generated tongue height shifted in 0.5 cm steps from -1.5 cm to +1.5 cm around the default]

95

Page 96

Perceptual test results

• 20 listeners, lab conditions
• Results pooled across speakers and words
• Articulatory modification changes vowel quality as we expected!

96

Page 97

Sample: spectrogram

97

Page 98

Change tongue position = change the vowel! (Sample 2)

Samples of "him": default | tongue height -1 cm | tongue position +2 cm

98

Page 99

Creating stimuli for speech perception experiments

Credit: Tian-Yi Zhao (USTC)

99

Page 100

Acoustic-to-articulatory inversion (application: computer-assisted pronunciation learning)

Credit: Tian-Yi Zhao (USTC)

100

Page 101

Change the dynamics of tongue movement – hypo- and hyper-articulation!

• Dynamic range scaling of the generated articulatory trajectory mimics hyper-articulation (a sketch of this scaling follows the list below)
• This can make the synthetic speech more intelligible, especially in noisy conditions
• Intelligibility tests of synthetic speech in noise
  • babble noise recorded in a dining hall
  • 5 dB speech-to-noise ratio
  • 1.2x scaling of the z-axis reduced the WER of synthetic speech in noise from 63% to 48%
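A minimal sketch of the scaling itself (my own illustration, assuming scaling around the trajectory mean; the talk only states the scale factors):

```python
import numpy as np

def scale_dynamic_range(trajectory: np.ndarray, factor: float) -> np.ndarray:
    """Exaggerate (factor > 1) or reduce (factor < 1) articulator movement
    by scaling a generated trajectory around its own mean."""
    mean = trajectory.mean()
    return mean + factor * (trajectory - mean)

# Toy tongue-height (z-axis) trajectory in cm.
z = np.array([0.0, 0.3, 0.8, 1.0, 0.6, 0.1])
print(scale_dynamic_range(z, 1.2))   # hyper-articulation (wider excursions)
print(scale_dynamic_range(z, 0.8))   # hypo-articulation (flatter movement)
```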

Samples: normal (1.0) | hypo-articulation (0.8) | hyper-articulation (1.2)

101

Page 102

Noise-adaptive speech synthesis

• Usefulness of hyper-articulated speech
  • Car navigation systems
  • Dialogue systems in noisy places
  • Any TTS device in noise
• Examples
  • Car noises (Toshiba car noise recordings): Land Rover, highway, windy
  • Normal speech | hyper-articulated speech | hyper-articulated + spectral-tilt-modified speech

102

Page 103

Conclusions

103

Page 104

Conclusions

• Thanks to statistical and machine learning approaches, speech synthesis can do more than just read out text in a predefined voice
• New research areas and interesting applications are emerging
• Text-to-speech synthesis systems now have
  • the adaptability and flexibility to
    • clone users' voices
    • reconstruct patients' voices from disordered speech
  • and the controllability to
    • manipulate speech via articulatory parameters
    • enhance intelligibility in noise

104

Page 105

A "take-home" message

• TTS research so far: the wheel of history
• TTS now has quality, flexibility and controllability
• Future TTS research should improve all these aspects simultaneously, and should make the 'next step' rather than just a 'second turn' of the wheel

[Timeline figure, as on page 5: 1970s formant synthesis (model-based / parametric); 1980s-90s diphone synthesis; 1990s-2000s unit selection (unit-based / concatenative, improved quality); 2000s HMM-based synthesis (improved control)]

105

Page 106

Credits and acknowledgements

• Core HMM synthesis techniques
  • K. Tokuda (NIT), H. Zen (Toshiba), T. Toda (NAIST), K. Oura (NIT), Y. Wu (Microsoft), M. Takashi (Toshiba), T. Kobayashi (TIT), T. Nose (TIT), M. Tachibana (Yamaha)
• Latest HTS samples in English, Spanish and Romanian
  • O. Watts (CSTR), R. Barra Chicote (UPM), A. Stan (University of Cluj-Napoca)
• 1000s HTS voices
  • B. Usabaev, J. Tian, Y. Guan (Nokia), A. Suni, T. Raitio, M. Vainio, P. Alku, R. Karhila, M. Kurimo (TKK/AALTO), J. Dines (IDIAP)
• Speaker verification vs text-to-speech
  • P. De Leon (NMSU), I. Hernaez, I. Saratxaga (Univ. Basque Country), Michael Pucher (ftw)
• Celebrity voices
  • M. Aylett (Cereproc), Z. Ling (USTC)
• Child speech synthesis
  • O. Watts (CSTR), K. Berkling (Germany)
• Cross-lingual speaker adaptation
  • Y. Wu, K. Oura, K. Tokuda (NIT), J. Dines, H. Liang, L. Saheer (IDIAP), M. Gibson, W. Byrne (Cambridge), M. Wester (CSTR), and J. Cook (BBC)
• Voice banking and reconstruction
  • S. Creer, P. Green, S. Cunningham (University of Sheffield), E. MacDonald, S. Chandran (EMC)
• Articulatory modification of synthetic speech
  • Z. Ling (USTC), K. Richmond (CSTR), T. Zhao (USTC), C. Valentini (CSTR)

106

Page 107

Suggested follow-up reading

• 1000s HTS voices
  • J. Yamagishi, B. Usabaev, S. King, O. Watts, J. Dines, J. Tian, R. Hu, Y. Guan, K. Oura, K. Tokuda, R. Karhila, M. Kurimo, "Thousands of Voices for HMM-based Speech Synthesis -- Analysis and Application of TTS Systems Built on Various ASR Corpora," IEEE Transactions on Audio, Speech, & Language Processing, vol. 18, issue 5, pp. 984-1004, July 2010
• Speaker verification vs text-to-speech
  • P. De Leon, M. Pucher, J. Yamagishi, "Evaluation of the vulnerability of speaker verification to synthetic speech," Proc. Odyssey 2010 (the speaker and language recognition workshop), pp. 151-158, July 2010
• Robust speech synthesis and celebrity voices
  • J. Yamagishi, T. Nose, H. Zen, Z. Ling, T. Toda, K. Tokuda, S. King, S. Renals, "A Robust Speaker-Adaptive HMM-based Text-to-Speech Synthesis," IEEE Transactions on Audio, Speech, & Language Processing, vol. 17, no. 6, pp. 1208-1230, August 2009
• Child speech synthesis
  • O. Watts, J. Yamagishi, S. King, K. Berkling, "Synthesis of Child Speech with HMM Adaptation and Voice Conversion," IEEE Transactions on Audio, Speech, & Language Processing, vol. 18, issue 5, pp. 1005-1016, July 2010
• Unsupervised cross-lingual speaker adaptation
  • J. Dines, H. Liang, L. Saheer, M. Gibson, W. Byrne, K. Oura, K. Tokuda, J. Yamagishi, S. King, M. Wester, T. Hirsimaki, R. Karhila, M. Kurimo, "Personalising Speech-to-Speech Translation: Unsupervised Cross-lingual Speaker Adaptation for HMM-based Speech Synthesis," under review
• Voice reconstruction
  • S. Creer, P. Green, S. Cunningham, J. Yamagishi, "Building personalised synthesised voices for individuals with dysarthria using the HTS toolkit," in Computer Synthesized Speech Technologies: Tools for Aiding Impairment, John W. Mullennix and Steven E. Stern (eds), IGI Global Press, Jan. 2010. ISBN: 978-1-61520-725-1
• Articulatory modification of synthetic speech
  • Z.-H. Ling, K. Richmond, J. Yamagishi, R.-H. Wang, "Integrating Articulatory Features into HMM-based Parametric Speech Synthesis," IEEE Transactions on Audio, Speech, & Language Processing, vol. 17, no. 6, pp. 1171-1185, August 2009 (IEEE Signal Processing Society Young Author Best Paper Award 2010)

107