
Page 1: In Silico Vox: Towards Speech Recognition in Silicon (rutenbar/pdf/rutenbar-hotchips06.pdf)

© R.A. Rutenbar 2006

In Silico Vox: Towards Speech Recognition in Silicon
Edward C. Lin, Kai Yu, Rob A. Rutenbar, Tsuhan Chen
Electrical & Computer Engineering
{eclin, kaiy, rutenbar, tsuhan}@ece.cmu.edu

© Rob A. Rutenbar 2006 Slide 2

CarnegieMellon

Speech Recognition Today

Quality = OK Vocab = large

Quality = poor Vocab = small

Commonality: all software apps


© Rob A. Rutenbar 2006 Slide 3

Today’s Best Software Speech Recognizers

Best-quality recognition is computationally hard
For speaker-independent, large-vocabulary, continuous speech

The 1-10-100-1000 rule
For ~1X real-time recognition rate
For ~10% word error rate (90% accuracy)
Need ~100 MB memory footprint
Need ~100 W power
Need ~1000 MHz CPU

This proves to be very limiting …

© Rob A. Rutenbar 2006 Slide 4

The Carnegie Mellon In Silico Vox Project

The thesis: It’s time to liberate speech recognition from the unreasonable limitations of software

The solution: Speech recognition in silicon

Why…? Tomorrow's compelling apps need 100X – 1000X performance improvements, and that is not going to happen in software.

We have some successful historical examples of this migration


© Rob A. Rutenbar 2006 Slide 5

History: Graphics Engines

Nobody paints pixels in software anymore!
Too limiting in max performance. Too inefficient in power.

http://www.nvidia.com

http://www.mtekvision.com

True on the desktop (& laptop) …and on your cellphone too

© Rob A. Rutenbar 2006 Slide 6

Next-Gen Compelling Applications

Audio-mining
Very fast recognizers – much faster than realtime
App: search large media streams (DVD) quickly

Hands-free appliances
Very portable recognizers – high-quality result on << 1 watt
App: interfaces to small devices, cellphone dictation

Terminator 2

FIND: “Hasta la vista, baby!”

“send email to arnold –let’s do lunch…”


© Rob A. Rutenbar 2006 Slide 7

Our Focus: How to Get to Fast…

(This slide repeats the two applications above – audio-mining and hands-free appliances – with the focus on the first: getting to very fast, much-faster-than-realtime recognition.)

© Rob A. Rutenbar 2006 Slide 8

About This Talk

The $2 tour: how speech recognition works
What happens in a recognizer

An ASIC architecture
Stripping away all the CPU stuff we don't need, focusing on the essentials

Results
ASIC version: cycle-simulator results
FPGA version: live, running hardware-based recognizer


© Rob A. Rutenbar 2006 Slide 9

How Speech Recognition Works

[Block diagram: the recognition pipeline. Sampling (ADC) feeds a DSP feature-extraction stage (a bank of filters producing a feature vector x1…xn per frame). Features are scored against acoustic units (e.g. “ao”), then passed through HMM/Viterbi search over word pronunciations (e.g. Rob = R AO B, Bob = B AO B) and a language-model search. Adaptation to the environment/speaker feeds back into the acoustic models.]

Acoustic units → Words → Language

Acoustic Frontend → Feature Scoring → Backend Search
(Backend Search = HMM / Viterbi Search + Language Model Search)

© Rob A. Rutenbar 2006 Slide 10

(1) Acoustic Frontend

[Spectrogram figure: color is “how much energy” in the transformed spectra (green = low, red = high), across a few seconds of speech, divided into frames along the time axis.]

The frontend is all DSP. A discrete Fourier transform (DFT) gives us the spectra. We combine and logarithmically transform the spectra in ways motivated by the physiology of the human ear, then combine these with estimates of the 1st and 2nd time derivatives to form the feature vector.
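
The frontend stages above can be sketched in a few lines of Python. This is a toy stand-in, not the talk's actual DSP: the naive DFT, the 8 evenly spaced bands, and the 3-point derivative estimates are all simplifications of a real perceptually spaced (mel) frontend.

```python
import math

def dft_power(frame):
    """Power spectrum of one frame via a naive DFT (O(N^2), fine for a sketch)."""
    n = len(frame)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(frame[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
        spec.append(re * re + im * im)
    return spec

def log_filterbank(power, num_filters=8):
    """Pool adjacent DFT bins into bands and take logs: a stand-in for the
    perceptually motivated filterbank the slides allude to."""
    step = max(1, len(power) // num_filters)
    return [math.log(sum(power[f * step:(f + 1) * step]) + 1e-10)  # floor avoids log(0)
            for f in range(num_filters)]

def add_deltas(frames):
    """Append 1st and 2nd time-derivative estimates to each frame's features."""
    out = []
    for i, f in enumerate(frames):
        prev = frames[max(i - 1, 0)]
        nxt = frames[min(i + 1, len(frames) - 1)]
        delta = [(n - p) / 2.0 for n, p in zip(nxt, prev)]
        ddelta = [n - 2 * c + p for n, c, p in zip(nxt, f, prev)]
        out.append(f + delta + ddelta)
    return out

# One feature vector per frame; 8 bands become 24 dims once deltas are appended.
samples = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(160)]
frames = [log_filterbank(dft_power(samples[i:i + 64])) for i in range(0, 128, 64)]
feats = add_deltas(frames)
```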


© Rob A. Rutenbar 2006 Slide 11

(2) Scoring Stage

Each feature is a point in a high-dimensional space
Each “atomic sound” is a region of this space
Score each atomic sound with Probability(sound matches feature)

Note: (sounds) × (dimensions) × (Gaussians) = BIG

Feature vector X = (x1, x2, …, xn); each sound is a Gaussian mixture, approximated as a set of high-dimensional Gaussian densities (8 Gaussians per sound, 39 feature dimensions):

SCORE_s(X) = Σ_{i=0}^{7} w_{s,i} · (2π)^{-39/2} |Λ_{s,i}|^{-1/2} · exp( −Σ_{j=1}^{39} (x_j − μ_{s,i,j})² / (2σ²_{s,i,j}) )

where Λ_{s,i} is the (diagonal) covariance of mixture component i of sound s.
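
The scoring equation maps directly to code. Below is a minimal Python sketch, computed in the log domain (as recognizers typically do, to avoid underflow); the 2-component, 3-dimensional mixture here is illustrative, where the slides use 8 components over 39 dimensions.

```python
import math

def gmm_log_score(x, weights, means, variances):
    """Log of SCORE_s(X): a weighted sum of diagonal-covariance Gaussians,
    combined with log-sum-exp for numerical stability."""
    log_terms = []
    for w, mu, var in zip(weights, means, variances):
        # log of the Gaussian normalizer: -(1/2) * sum_j log(2*pi*var_j)
        log_norm = -0.5 * sum(math.log(2 * math.pi * v) for v in var)
        # exponent: -sum_j (x_j - mu_j)^2 / (2 var_j)
        expo = -sum((xi - m) ** 2 / (2 * v) for xi, m, v in zip(x, mu, var))
        log_terms.append(math.log(w) + log_norm + expo)
    hi = max(log_terms)
    return hi + math.log(sum(math.exp(t - hi) for t in log_terms))

# Tiny made-up example: 2 mixture components over 3 dimensions.
x = [0.1, -0.2, 0.3]
weights = [0.6, 0.4]
means = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
variances = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
score = gmm_log_score(x, weights, means, variances)
```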

© Rob A. Rutenbar 2006 Slide 12

(3) Search: Speech Models are Layered Models

YES = /Y/ /EH/ /S/; NO = /N/ /OW/

[Figure: waveform, pitch, power, and spectrum panels over 0.0–0.9 s of speech; spectrum axis 0–6 kHz, power 0–80 dB, one 15 ms analysis window shown.]

Layered search: Language × Words × Acoustic
words → “acoustic units” → “sub-acoustic units” → 1 frame of sampled sound
Classical methods (HMMs, Viterbi) and idiosyncrasies
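
The classical HMM/Viterbi step at the bottom of this layered search can be sketched as follows. The 3-state left-to-right model and all probabilities here are toy values, not from any real acoustic model.

```python
import math

NEG_INF = float("-inf")

def viterbi(obs_logp, trans_logp):
    """Viterbi decode over a left-to-right HMM like the slides' 3-state models.
    obs_logp[t][s] = log b_s(x_t); trans_logp[p][s] = log a_{p,s}."""
    n = len(obs_logp[0])
    vit = [obs_logp[0][s] if s == 0 else NEG_INF for s in range(n)]  # start in state 0
    back = []
    for t in range(1, len(obs_logp)):
        new_vit, choices = [], []
        for s in range(n):
            p = max(range(n), key=lambda q: vit[q] + trans_logp[q][s])
            new_vit.append(vit[p] + trans_logp[p][s] + obs_logp[t][s])
            choices.append(p)
        vit = new_vit
        back.append(choices)
    # Trace the best final state back to the start.
    path = [max(range(n), key=lambda s: vit[s])]
    for choices in reversed(back):
        path.append(choices[path[-1]])
    return max(vit), path[::-1]

# 3 states with self-loops a_ss and forward arcs a_s,s+1 (log domain); 4 frames.
lp = math.log
trans = [[lp(0.5), lp(0.5), NEG_INF],
         [NEG_INF, lp(0.5), lp(0.5)],
         [NEG_INF, NEG_INF, lp(1.0)]]
obs = [[lp(0.9), lp(0.05), lp(0.05)],
       [lp(0.1), lp(0.8), lp(0.1)],
       [lp(0.1), lp(0.8), lp(0.1)],
       [lp(0.05), lp(0.05), lp(0.9)]]
score, path = viterbi(obs, trans)
```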


© Rob A. Rutenbar 2006 Slide 13

Context Matters: At Bottom -- Triphones

English has ~50 atomic sounds (phones), but we recognize ~50×50×50 context-dependent triphones, because the “AY” sound in “five” is different from the “AY” in “nine”.

Five: F(-,AY) cross-word · AY(F,V) word-internal · V(AY,-) cross-word
Nine: N(-,AY) cross-word · AY(N,N) word-internal · N(AY,-) cross-word

“AY” in “five” != “AY” in “nine”
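
The triphone expansion is mechanical once a word's phone string is known. A small sketch using the slide's PHONE(left,right) notation:

```python
def triphones(phones):
    """Expand a word's phone string into context-dependent triphones,
    written PHONE(left,right) as on the slide; '-' marks a word boundary."""
    out = []
    for i, p in enumerate(phones):
        left = phones[i - 1] if i > 0 else "-"
        right = phones[i + 1] if i < len(phones) - 1 else "-"
        out.append(f"{p}({left},{right})")
    return out

# "five" = F AY V and "nine" = N AY N share the phone AY but not the triphone.
five = triphones(["F", "AY", "V"])
nine = triphones(["N", "AY", "N"])
```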

© Rob A. Rutenbar 2006 Slide 14

Similar Idea at Top: N-gram Language Model

Suppose we have vocabulary { W1, W2, W3, W4, … }

[Figure: a tree of word histories. The root branches to unigrams W1…W4 (“W1”), each unigram to bigrams W11…W14 (“W1 W2”), each bigram to trigrams W111…W114 (“W1 W2 W3”), with termination nodes.]

This lets us calculate the likelihood of word W3 after W2 after W1.
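
An n-gram lookup with standard back-off can be sketched as below. The tables and log-probabilities are made-up ARPA-style values for illustration, not from the Broadcast News model.

```python
# Toy n-gram tables: log probabilities plus backoff weights, in the usual
# ARPA-style arrangement. All values are invented for this example.
unigram = {"W1": -1.0, "W2": -1.2, "W3": -1.5}
bigram = {("W1", "W2"): -0.3}
trigram = {("W1", "W2", "W3"): -0.2}
backoff_uni = {"W1": -0.4, "W2": -0.5, "W3": -0.6}
backoff_bi = {("W1", "W2"): -0.1}

def trigram_logp(w1, w2, w3):
    """log P(w3 | w1 w2) with standard back-off: use the trigram if we have
    one, else back off to the bigram, else to the unigram."""
    if (w1, w2, w3) in trigram:
        return trigram[(w1, w2, w3)]
    if (w2, w3) in bigram:
        return backoff_bi.get((w1, w2), 0.0) + bigram[(w2, w3)]
    return backoff_uni.get(w2, 0.0) + unigram[w3]

hit = trigram_logp("W1", "W2", "W3")      # exact trigram hit
miss = trigram_logp("W3", "W1", "W2")     # backs off to the (W1, W2) bigram
```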


© Rob A. Rutenbar 2006 Slide 15

Good Speech Models are BIG

This is the ~64K-word “Broadcast News” task. Unfortunately, there are many idiosyncratic details in how the layers of the model are traversed.

[Same waveform/pitch/power/spectrum figure as before.]

For each 10 ms time frame:
5,156 Gaussian scores
111,593 triphone HMMs
64,001 unigrams
9,382,014 bigrams
13,459,879 trigrams

© Rob A. Rutenbar 2006 Slide 16

Where Does Software Spend its Time? CPU time for CMU Sphinx 3.0

Prior studies targeted less capable versions (v1, v2)
SimpleScalar & Intel VTune
64K-word “Broadcast News” benchmark

[Pie chart over the pipeline diagram: the acoustic frontend takes ~0% of the time; the remaining scoring and search stages split the CPU time 33% / 27% / 19% / 15% / 6%.]

So: it’s all backend


© Rob A. Rutenbar 2006 Slide 17

Memory Usage? SPHINX 3.0 vs SPEC CPU2000

Cache sizes
IL1: 64 KB, direct mapped
DL1: 64 KB, direct mapped
UL2: 512 KB, 4-way set assoc

So…
Terrible locality (no surprise: graph search + huge datasets)
Load dominated (no surprise: reads a lot, computes a little)
Not an insignificant footprint

                            SPHINX 3.0   Gcc     Gzip     Equake
Cycles                      53 B         55 B    15 B     23 B
IPC                         0.69         0.29    1.05     0.7
Instruction mix: Loads      0.27         0.25    0.2      0.27
Instruction mix: Stores     0.05         0.15    0.09     0.08
Instruction mix: Branches   0.14         0.2     0.17     0.12
Branch misprediction rate   0.025        0.07    0.08     0.02
Cache miss rate: DL1        0.04         0.02    0.02     0.03
Cache miss rate: L2         0.48         0.06    0.03     0.30
Memory footprint            64 MB        24 MB   186 MB   42 MB

© Rob A. Rutenbar 2006 Slide 18

A Silicon Architecture: Big Picture

Acoustic Frontend → Gaussian Scoring → Backend Search

Frontend: KB of SRAM (constants). Scoring: KB of SRAM plus 10–100 MB DRAM (constants). Backend: MBs of SRAM (active recognition, communication) plus 10–100 MB DRAM (models & active recognition).

                     Frontend   Scoring        Backend
Computations (Ops)   Low        High           Medium
SRAM (size)          Small      Small          Large
DRAM (size)          --         Medium/Large   Large
DRAM (bandwidth)     --         High           High


© Rob A. Rutenbar 2006 Slide 19

Essential Implementation Ideas

Custom precision, everywhere
Every bit counts: no extras, no floating point – all fixed point

(Almost) no caching
Like graphics chips: fetch from SDRAM, do careful data placement
(A little caching for bandwidth filtering on the big language models)

Aggressive pipelining
If we can possibly overlap computations, we try to do so

Algorithm transformation
Some software computations are just bad news for hardware
Substitute some “deep computation” with hardware-friendly versions
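
The "custom precision, all fixed point" idea can be illustrated with a simple quantizer. The Q5.10-style 16-bit format below is an assumption for illustration, not the bit width the design actually uses.

```python
def to_fixed(x, frac_bits=10, total_bits=16):
    """Quantize a real-valued (log-domain) score to signed fixed point;
    illustrative bit widths, not the design's actual custom precision."""
    scaled = int(round(x * (1 << frac_bits)))
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, scaled))  # saturate instead of wrapping

def from_fixed(q, frac_bits=10):
    """Recover the approximate real value from the fixed-point code."""
    return q / (1 << frac_bits)

# A 16-bit value with 10 fraction bits keeps a log score to within 2^-10.
score = -3.14159
q = to_fixed(score)
roundtrip = from_fixed(q)
saturated = to_fixed(100.0)  # out of range: clamps to the max code
```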

© Rob A. Rutenbar 2006 Slide 20

Example: Aggressive Pipelining

[Pipeline timing diagram: Fetch-Word, Fetch-HMM/Viterbi, and Transition/Prune/Writeback stages overlapped across HMM #1–#3 of Word #1 and Word #2, with the Language Model stage overlapped as well.]

Pipelined Get-HMM/Viterbi and Transition stages
Pipelined Get-Word and Get-HMM stages
Pipelined non-LanguageModel and LanguageModel stages


© Rob A. Rutenbar 2006 Slide 21

Example: Algorithmic Changes
Acoustic-level pruning threshold

Software: use the best score of the current frame (after Viterbi on active HMMs)
Silicon: use the best score of the previous frame (removes a big temporal bottleneck)

Tradeoffs: less memory bandwidth and we can pipeline, at the cost of slightly pessimistic scores

[Flowcharts, Sphinx 3.0 vs. silicon: Sphinx 3.0 runs each frame serially – Initialize Frame → Fetch Active Word → Fetch Active HMM → Viterbi → Transition/Prune/Writeback → Language Model, looping until all active HMMs are done. The silicon version splits Viterbi and Writeback-HMM so that Transition/Prune/Writeback and the Language Model stage proceed in parallel with the fetch stages.]
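
The pruning change can be sketched as follows. The beam width and scores are illustrative, and `prune` is a hypothetical helper, not the design's actual datapath.

```python
def prune(active_scores, prev_best, beam=200.0):
    """Beam pruning with the silicon-friendly twist from the slide: the
    threshold comes from the PREVIOUS frame's best score, so pruning can be
    pipelined with Viterbi instead of waiting for this frame to finish."""
    threshold = prev_best - beam
    return {hmm: s for hmm, s in active_scores.items() if s >= threshold}

# Software would use max() over the current frame's scores; here prev_best is
# slightly stale, so the threshold is a little pessimistic but known up front.
scores = {"hmm_a": -50.0, "hmm_b": -180.0, "hmm_c": -400.0}
survivors = prune(scores, prev_best=-60.0)
```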

© Rob A. Rutenbar 2006 Slide 22

Predicted Performance: C++ Cycle Simulator
Benchmark: 5K-word “Wall Street Journal” task

Results:
No accuracy loss; not quite 2X real time at a 125 MHz ASIC clock
Backend search needs ~1.5 MB SRAM, ~30 MB DRAM

Recognizer Engine                       Clock       Word Error Rate   Speedup Over Real Time (bigger is better)
Hardware: our proposed recognizer       0.125 GHz   6.725%            1.67X
Software: Sphinx 3.0 (single CPU)       2.8 GHz     6.707%            0.59X
Software: Sphinx 3.3 (fast decoder)     1 GHz       7.32%             1.05X
Software: Sphinx 4 (dual CPU)           1 GHz       6.97%             0.82X
Software: Sphinx 4 (single CPU)         1 GHz       6.97%             0.74X


© Rob A. Rutenbar 2006 Slide 23

Aside: Bit-Level Verification Hurts (A Lot)

We have newfound sympathy for others doing silicon designs that handle large media streams

Generating these sorts of tradeoff curves takes CPU-days to CPU-weeks.

[Plot: speedup vs. SPHINX 3.0 software over 330 WSJ utterances, as a function of time (10 ms frames). Curves shown: All SRAM, 1.5 MB SRAM + 30 MB DRAM, all DRAM, and the SPHINX 3.0 software baseline.]

© Rob A. Rutenbar 2006 Slide 24

A Complete Live Recognizer: FPGA Demo
In any “system design” research, you reach a point where you just want to see it work – for real

Goal: full recognizer = 1 FPGA + 1 DRAM

A benchmark that fits on chip
1000-word “Resource Mgt” task
Slightly simplified: no tri-grams
Slower: not real time, ~2.3X slower
Resource limited: slices, memory bandwidth

[Same pipeline block diagram as before: ADC sampling → DSP feature extraction → feature scoring → HMM/Viterbi and language-model search, with adaptation.]

Xilinx XC2VP30 FPGA [13,696 slices / 2,448 Kb Block RAM]
Utilization: 99% of slices, 45% of Block RAM; ~3 MB of DDR DRAM
50 MHz clock; ~200 Mb/s I/O


© Rob A. Rutenbar 2006 Slide 25

FPGA Experimental Results
Aside: as far as we know, this is the most complex recognizer architecture ever fully mapped into a hardware-only form

© Rob A. Rutenbar 2006 Slide 26

Summary

Software is too constraining for speech recognition

Evolution of graphics chips suggests an alternative: do it in silicon
Compelling performance and power reasons for silicon speech recognition

Several “in silico vox” architectures in design
ASIC version: ~1.6X realtime for the 5K-word task; a 10X version in progress
FPGA version: tiny design successfully running the 1000-word benchmark

Directions
Exploit the Berkeley BEE2 emulation engine: ~25X more FPGA resources
Detailed architecture/performance/power tradeoffs for mobile apps


© Rob A. Rutenbar 2006 Slide 27

Acknowledgements

Work supported by
National Science Foundation
Semiconductor Research Corporation
MARCO/DARPA Center for Circuit & System Solutions (C2S2)

We are grateful for the advice and speech recognition expertise shared with us by

Richard M. Stern, CMU
Arthur Chan, CMU