Optimal Delayed Decisions in Decoding of Predictively Encoded Sources
Vinay Melkote and Kenneth Rose
Signal Compression Lab
Department of Electrical and Computer Engineering
University of California, Santa Barbara
Introduction
• Decoders simply reconstruct data; there are no parameter choices to make
• Can decoder delay, and thus accrued future coded data, improve the current reconstruction?
• Feasible if adequate correlation exists between coded data units
• Predictive coding systems provide the right setting: they assume an underlying correlation model for the source
Introduction
• Predictive coding is widely employed in signal compression standards:
  – Motion-compensated video coding (H.264)
  – Speech coding via adaptive differential pulse code modulation (G.726, G.722)
  – Continuously variable slope delta modulation (Bluetooth hands-free profile)
• Attractive for low-delay/low-complexity applications
Introduction
• Assume a scalar first-order autoregressive (AR) source: a sequence of zero-mean random variables $\ldots, x_{n-1}, x_n, x_{n+1}, \ldots$ that evolve as
$$x_n = \rho x_{n-1} + z_n$$
  where the innovations $z_n$ are i.i.d. with zero-mean pdf $p_Z(z)$, yielding correlated source samples with inter-sample correlation coefficient $\rho$
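As a sanity check, the AR(1) evolution above can be simulated directly. This minimal sketch (not from the slides) assumes Gaussian innovations and verifies that the lag-one sample correlation approaches $\rho$:

```python
import numpy as np

# Simulate x_n = rho * x_{n-1} + z_n with i.i.d. zero-mean innovations z_n.
# Gaussian innovations are an assumption made here for illustration.
rng = np.random.default_rng(0)
rho = 0.95
N = 200_000
z = rng.standard_normal(N)   # innovations drawn from p_Z = N(0, 1)
x = np.empty(N)
x[0] = z[0]
for n in range(1, N):
    x[n] = rho * x[n - 1] + z[n]

# The inter-sample (lag-one) correlation coefficient should be close to rho.
corr = np.corrcoef(x[:-1], x[1:])[0, 1]
```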
Introduction
• Consider coding with a differential pulse code modulation (DPCM) scheme
• The prediction here is $\tilde{x}_n = a \hat{x}_{n-1}$
• Generally $a = \rho$, i.e., the predictor is matched to the source

[Figure: DPCM encoder and decoder. The encoder forms the prediction error $e_n = x_n - \tilde{x}_n$, quantizes it with Q to obtain the index $i_n$ and reconstructed error $\hat{e}_n$, and updates $\hat{x}_n = \tilde{x}_n + \hat{e}_n$ through the predictor $a z^{-1}$; the decoder mirrors the same loop.]
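A runnable sketch of the loop in the diagram, with a uniform mid-rise quantizer standing in for Q and the step size chosen arbitrarily (both are assumptions for illustration):

```python
import numpy as np

# Minimal DPCM sketch: matched predictor a = rho, uniform mid-rise quantizer.
def dpcm_encode(x, rho=0.95, step=0.5):
    x_hat_prev = 0.0
    indices = []
    for xn in x:
        x_tilde = rho * x_hat_prev       # prediction
        e = xn - x_tilde                 # prediction error
        i = int(np.floor(e / step))      # quantizer index
        e_hat = (i + 0.5) * step         # reconstructed error
        x_hat_prev = x_tilde + e_hat     # local (zero-delay) reconstruction
        indices.append(i)
    return indices

def dpcm_decode(indices, rho=0.95, step=0.5):
    x_hat_prev = 0.0
    recon = []
    for i in indices:
        e_hat = (i + 0.5) * step
        x_hat_prev = rho * x_hat_prev + e_hat
        recon.append(x_hat_prev)
    return recon
```

Since the decoder replicates the encoder's prediction loop exactly, the reconstruction error equals the quantization error of $e_n$, bounded by step/2.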
Introduction
• DPCM encoder and decoder operate at zero delay
• At asymptotically high bit-rates:
  – $\hat{x}_{n-1} \approx x_{n-1}$, hence $\tilde{x}_n = \rho \hat{x}_{n-1} \approx \rho x_{n-1}$
  – Matched predictor is optimal
  – $\therefore e_n = x_n - \tilde{x}_n \approx x_n - \rho x_{n-1} = z_n$
  – Hence the indices $\ldots, i_{n-1}, i_n, i_{n+1}, \ldots$ are approximately i.i.d.
• Future indices $i_{n+1}, i_{n+2}, \ldots$ provide no information on $x_n$
• Zero-delay decoder is optimal for the given encoder
Introduction
• At low bit-rates, prediction errors are correlated, and so are the indices $\ldots, i_{n-1}, i_n, i_{n+1}, \ldots$
• Future indices $i_{n+1}, i_{n+2}, \ldots$ contain information on $x_n$
• Can this be exploited, by appropriate decoding delay, to improve the reconstruction of $x_n$?
Prior work
• Interpolative DPCM (IDPCM) [Sethia & Anderson, ’78] and Smoothed DPCM (SDPCM) [Chang & Gibson, ’91]
• Apply a non-causal post-filter to smooth the zero-delay reconstructions; non-causality is implemented by delay

[Figure: the regular zero-delay DPCM reconstructions $\hat{x}_{n-2}, \ldots, \hat{x}_n, \ldots, \hat{x}_{n+L}$ are weighted by filter taps $b_{n-2}, \ldots, b_n, \ldots, b_{n+L}$ and summed to give the delayed reconstruction after filtering, $\hat{x}^{\,idpcm}_n$ or $\hat{x}^{\,sdpcm}_n$.]
Prior work
• IDPCM and SDPCM differ in the design of the non-causal filter
• The IDPCM design:
  – Filter taps determined by minimizing an expected mean squared error that involves statistics of unquantized samples
  – Process autocorrelation determines the filter taps
  – Ignores bit-rate and innovation densities
  – No gains from increasing look-ahead beyond the process order
Prior work
• The SDPCM design:
  – Employs a Kalman fixed-lag smoother
  – The AR process provides the ‘plant’ model, with source samples viewed as the ‘plant state’
  – Quantizer operation provides the ‘observation’ model, with quantized source samples ($\hat{x}_n$) perceived as ‘observations’
  – The model assumes that the quantization noise is white and uncorrelated with the source
  – Kalman filter is optimal for a linear Gaussian model: ignores the true innovation pdf
Sub-optimalities
• Decoder has more information, unused by mere averaging of the zero-delay reconstructions
• For instance, the decoder has the information $(\tilde{x}_n, i_n, Q)$: the index $i_n$ tells it that $e_n$ lies in the quantizer cell $[a(i_n), b(i_n))$, so $x_n = \tilde{x}_n + e_n$ lies in the interval
$$I_n = [\tilde{x}_n + a(i_n), \, \tilde{x}_n + b(i_n))$$
  which is known to the decoder
• Smoothed reconstructions need not lie in $I_n$
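The interval computation can be made concrete. The uniform quantizer, step size, and numbers below are hypothetical, chosen only to illustrate how the decoder pins $x_n$ to $I_n$:

```python
# The information (x_tilde, i, Q) confines x_n to an interval I_n.
# Uniform quantizer with an assumed step; cell bounds a(i), b(i).
step = 0.5

def cell(i):
    # Prediction error e_n lies in [a(i), b(i)) for transmitted index i.
    return i * step, (i + 1) * step

x_tilde = 1.3                       # prediction available at the decoder
e = 0.37                            # true prediction error (unknown to decoder)
i = int(e // step)                  # transmitted index
a, b = cell(i)
I_n = (x_tilde + a, x_tilde + b)    # x_n = x_tilde + e_n must lie here
assert I_n[0] <= x_tilde + e < I_n[1]
```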
Proposed method
• Estimation-theoretic approach that optimally combines the information $\ldots, i_{n-1}, i_n, \ldots, i_{n+L}$ to obtain the $L$-sample delayed reconstruction of $x_n$
• Recursively calculates the pdf of $x_n$ conditioned on all available information

[Figure: the index stream $i_n$ feeds a regular DPCM decoder (output $\hat{x}_n$), an IDPCM or SDPCM decoder (output $\hat{x}^{\,idpcm}_n$ or $\hat{x}^{\,sdpcm}_n$), and the proposed optimal delayed decoder (output $\hat{x}^*_n$).]
Optimal Delayed Decoder
• Distortion criterion: mean squared error (MSE)
• The optimal estimate of $x_n$ at the decoder, with delay $L$:
$$\hat{x}^*_n = E[x_n \mid \ldots, i_{n-1}, i_n, \ldots, i_{n+L}] = E[x_n \mid \ldots, I_{n-1}, I_n, \ldots, I_{n+L}]$$
• The intervals $I_n = [\tilde{x}_n + a(i_n), \, \tilde{x}_n + b(i_n))$ are an equivalent representation of the information available to the decoder [Gibson & Fischer, ’82]
• Expectation over the conditional pdf $p(x_n \mid \ldots, I_{n-1}, I_n, \ldots, I_{n+L})$
Optimal Delayed Decoder
• By application of Bayes’ rule and the Markov property of the process:
$$p(x_n \mid \{I_k\}_{k \le n+L}) = \frac{p(x_n \mid \{I_k\}_{k \le n}) \, p(\{I_k\}_{n < k \le n+L} \mid x_n)}{\int p(x_n \mid \{I_k\}_{k \le n}) \, p(\{I_k\}_{n < k \le n+L} \mid x_n) \, dx_n}$$
• $p(x_n \mid \{I_k\}_{k \le n})$ is the zero-delay pdf; it combines all information up to time $n$
• $p(\{I_k\}_{n < k \le n+L} \mid x_n)$ weighs the zero-delay pdf to incorporate future information
Forward recursion
• Recursion for the zero-delay pdf: update from time $n-1$ to $n$
• Say the zero-delay pdf at time $n-1$, $p(x_{n-1} \mid \{I_k\}_{k \le n-1})$, is known
• Prediction step, through the source model $p_Z(x_n - \rho x_{n-1})$:
$$p(x_n \mid \{I_k\}_{k \le n-1}) = \int p(x_{n-1} \mid \{I_k\}_{k \le n-1}) \, p_Z(x_n - \rho x_{n-1}) \, dx_{n-1}$$
• Update step, restricting to the new interval $I_n$:
$$p(x_n \mid \{I_k\}_{k \le n}) = \begin{cases} \dfrac{p(x_n \mid \{I_k\}_{k \le n-1})}{\int_{I_n} p(x_n \mid \{I_k\}_{k \le n-1}) \, dx_n}, & x_n \in I_n \\ 0, & \text{otherwise} \end{cases}$$
• The result is the zero-delay pdf at time $n$
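The two steps of the recursion can be sketched numerically on a grid; the Gaussian innovation pdf, the previous pdf, and the interval bounds below are all assumed for illustration:

```python
import numpy as np

# One forward-recursion step on an assumed discretization.
x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
rho = 0.95
p_Z = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)

# Known zero-delay pdf at time n-1 (stand-in: a normalized Gaussian on the grid).
p_prev = p_Z(x - 0.5)
p_prev /= np.sum(p_prev) * dx

# Prediction: p(x_n | {I_k}, k<=n-1) = integral of p_prev(x_{n-1}) p_Z(x_n - rho x_{n-1}).
p_pred = np.array([np.sum(p_prev * p_Z(xn - rho * x)) * dx for xn in x])

# Update: restrict to the decoded interval I_n = [lo, hi) and renormalize.
lo, hi = 0.0, 1.2
mask = (x >= lo) & (x < hi)
p_post = np.where(mask, p_pred, 0.0)
p_post /= np.sum(p_post) * dx
```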
Backward recursion
• Recursion for the probability of future outcomes: step back from time $n+L$ to $n$
• Initialization at time $n+L$:
$$p(I_{n+L} \mid x_{n+L-1}) = \int_{I_{n+L}} p_Z(x_{n+L} - \rho x_{n+L-1}) \, dx_{n+L}$$
• One step back, incorporating the interval $I_{n+L-1}$:
$$p(I_{n+L}, I_{n+L-1} \mid x_{n+L-2}) = \int_{I_{n+L-1}} p(I_{n+L} \mid x_{n+L-1}) \, p_Z(x_{n+L-1} - \rho x_{n+L-2}) \, dx_{n+L-1}$$
• Repeating the step down to time $n$ yields $p(\{I_k\}_{n < k \le n+L} \mid x_n)$
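The backward recursion can likewise be sketched on a grid; Gaussian innovations, the value of L, and the interval list are assumptions for illustration:

```python
import numpy as np

# Grid-based sketch of the backward recursion for p({I_k}, n<k<=n+L | x_n).
x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
rho = 0.95
p_Z = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)

# Decoded intervals I_{n+1}, ..., I_{n+L} (here L = 3, arbitrary example bounds).
intervals = [(0.0, 0.8), (0.3, 1.1), (-0.2, 0.6)]

# Start from the constant function 1 at time n+L, then fold in each interval,
# latest first: each pass maps a function of x_k to a function of x_{k-1}.
lik = np.ones_like(x)
for lo, hi in reversed(intervals):   # I_{n+L}, I_{n+L-1}, ..., I_{n+1}
    mask = (x >= lo) & (x < hi)
    # integral over I_k of lik(x_k) p_Z(x_k - rho x_{k-1}) dx_k, for each x_{k-1}
    lik = np.array([np.sum(lik[mask] * p_Z(x[mask] - rho * xp)) * dx for xp in x])

# 'lik' now approximates p(I_{n+1}, ..., I_{n+L} | x_n) on the grid.
```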
Summary
• At time $n+L$, once the interval $I_{n+L}$ is received:
  – The forward recursion updates the zero-delay pdf $p(x_{n-1} \mid \{I_l\}_{l \le n-1})$ to $p(x_n \mid \{I_l\}_{l \le n})$
  – The backward recursion computes $p(\{I_l\}_{n < l \le n+L} \mid x_n)$ from $I_{n+1}, \ldots, I_{n+L}$
  – Combining the two yields $p(x_n \mid \{I_l\}_{l \le n+L})$, and hence the delayed estimate $\hat{x}^*_n$
Special case: matched predictor
• With the matched predictor $\tilde{x}_n = \rho \hat{x}_{n-1}$, the $L$-step recursion for future probabilities can be simplified
• There exists a function $\Lambda_{i_1, \ldots, i_L}(x)$ such that
$$p(\{I_k\}_{n < k \le n+L} \mid x_n) = \Lambda_{i_{n+1}, \ldots, i_{n+L}}(x_n - \hat{x}_n)$$
• A codebook of the functions $\Lambda_{i_1, \ldots, i_L}(x)$ can be constructed
• The recursion can then be replaced by a codebook access with $i_{n+1}, \ldots, i_{n+L}$, and a translation of the function by $\hat{x}_n$
Codebook-based Delayed Decoder
• Henceforth, we exclusively consider the matched predictor $\tilde{x}_n = \rho \hat{x}_{n-1}$
• Optimal delayed estimate: $\hat{x}^*_n = E[x_n \mid \ldots, I_{n-1}, I_n, \ldots, I_{n+L}]$
• The future-information factor $p(\{I_k\}_{n < k \le n+L} \mid x_n) = \Lambda_{i_{n+1}, \ldots, i_{n+L}}(x_n - \hat{x}_n)$: table look-up via $i_{n+1}, \ldots, i_{n+L}$
• The zero-delay pdf $p(x_n \mid \{I_k\}_{k \le n})$: table look-up via $\ldots, i_{n-2}, i_{n-1}, i_n$? The growing history of indices precludes an optimal look-up table for the zero-delay pdf
• A good approximation is still feasible: a codebook-based approximation for the optimal delayed estimate
Codebook-based Delayed Decoder
• Approximation for the zero-delay pdf:
  – Let $p_E(e)$ denote the stationary marginal prediction error pdf, a fixed (time-invariant) pdf [Farvardin & Modestino, ’85]
  – Since $e_n = x_n - \rho \hat{x}_{n-1}$, the pdf of $x_n$ conditioned on past indices is approximated as
$$p(x_n \mid i_{n-1}, i_{n-2}, \ldots) \approx p(x_n \mid \hat{x}_{n-1}) = p_E(x_n - \rho \hat{x}_{n-1})$$
  – Thus the zero-delay pdf is just
$$p(x_n \mid \{I_k\}_{k \le n}) \approx \begin{cases} \dfrac{p_E(x_n - \rho \hat{x}_{n-1})}{\int_{I_n} p_E(x_n - \rho \hat{x}_{n-1}) \, dx_n}, & x_n \in I_n \\ 0, & \text{otherwise} \end{cases}$$
Codebook-based Delayed Decoder
• Approximate delayed estimate:
$$\hat{x}^*_n = E[x_n \mid \ldots, I_{n-1}, I_n, \ldots, I_{n+L}] = \frac{\int x_n \, p(x_n \mid \{I_k\}_{k \le n}) \, p(\{I_k\}_{n < k \le n+L} \mid x_n) \, dx_n}{\int p(x_n \mid \{I_k\}_{k \le n}) \, p(\{I_k\}_{n < k \le n+L} \mid x_n) \, dx_n}$$
• Substituting the approximate zero-delay pdf (built from $p_E$) and the codebook function $\Lambda_{i_{n+1}, \ldots, i_{n+L}}(x_n - \hat{x}_n)$, with $I_n = [\rho \hat{x}_{n-1} + a(i_n), \, \rho \hat{x}_{n-1} + b(i_n))$ and $\hat{x}_n = \rho \hat{x}_{n-1} + \hat{e}(i_n)$, yields
$$\hat{x}^*_n \approx \rho \hat{x}_{n-1} + c(i_n, \ldots, i_{n+L})$$
  where $c(i_n, \ldots, i_{n+L})$ is read from a look-up table/codebook
Codebook design
• Numerical evaluation via $p_E(e)$ and $\Lambda_{i_{n+1}, \ldots, i_{n+L}}(x)$
• Alternative: a training-set based design:
  – $\hat{x}^*_n \approx \rho \hat{x}_{n-1} + c(i_n, \ldots, i_{n+L})$ implies that $c(i_n, \ldots, i_{n+L})$ is the estimate of the prediction error at time $n$ given the window of indices $i_n, \ldots, i_{n+L}$
  – The encoder is fixed: run it on a long enough training set of the source, and obtain a prediction error training set and indices
  – Train the delayed decoding codebook
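The training-set based design can be sketched as follows. The Gaussian source, uniform quantizer, step size, and window length are assumptions, and $c$ is trained as the average prediction error observed for each index window:

```python
import numpy as np
from collections import defaultdict

# Sketch of training-set codebook design: c(i_n, ..., i_{n+L}) is the mean
# prediction error per index window, computed over a training run.
rng = np.random.default_rng(0)
rho, step, L = 0.95, 1.0, 1

# Generate a training set from the AR(1) source.
x = np.empty(100_000)
x[0] = rng.standard_normal()
for n in range(1, len(x)):
    x[n] = rho * x[n - 1] + rng.standard_normal()

# Run the fixed DPCM encoder, recording prediction errors and indices.
errors, indices = [], []
x_hat = 0.0
for xn in x:
    e = xn - rho * x_hat
    i = int(np.floor(e / step))
    errors.append(e)
    indices.append(i)
    x_hat = rho * x_hat + (i + 0.5) * step

# Train c(window) = average prediction error observed for that index window.
sums = defaultdict(float)
counts = defaultdict(int)
for n in range(len(x) - L):
    w = tuple(indices[n:n + L + 1])   # (i_n, ..., i_{n+L})
    sums[w] += errors[n]
    counts[w] += 1
codebook = {w: sums[w] / counts[w] for w in sums}
```

Decoding with delay L then reads the codebook, $\hat{x}^*_n \approx \rho \hat{x}_{n-1} + \text{codebook}[(i_n, \ldots, i_{n+L})]$, falling back to the zero-delay reconstruction for windows unseen in training.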
Results
• Source is first-order AR
• DPCM Encoder:
  – Rate: first-order entropy of output indices
  – Employs a uniform threshold quantizer, scaled suitably to achieve different rates
  – Thresholds fixed by scale-factor; reconstructions optimized iteratively, similar to [Farvardin & Modestino, ’85]
  – Iterative optimization also provides $p_E(e)$ for the codebook approach
  – Predictor matched to source
Results
Performance comparison of competing delayed decoders for a Gaussian source with $\rho = 0.95$

[Figure: SNR gain over regular DPCM (dB) versus rate (bits/sample), 0.25 to 2 bits/sample. Curves: IDPCM, SDPCM, the proposed optimal decoder, and the proposed codebook decoder, each at delays L = 1 and L = 3.]

Annotations on the plot:
• Performance of zero-delay DPCM at different bit-rates is the 0 dB reference
• SDPCM shown with a lag of 3 samples; it is worse at lower delays
• IDPCM delay automatically limited to 1 sample
• Codebook-based approach shown using 1 and 3 future indices; the performance curves for the optimal delayed decoder are hidden beneath the plots for the codebook approach

[Figure: zoomed view, 0.35 to 0.85 bits/sample and 1.35 to 1.5 dB, revealing the small performance gap between the optimal and codebook approaches.]

Observations:
• Performance of SDPCM and IDPCM is not guaranteed to be better than zero-delay DPCM
• The proposed approaches at 1-sample delay outperform SDPCM at a higher delay (3): the indices contain a lot of information
• At low bit-rates, increasing delay provides more gains
Results
Performance comparison of competing delayed decoders for a Gaussian source with $\rho = 0.8$

[Figure: SNR gain over regular DPCM (dB) versus rate (bits/sample), 0.25 to 2 bits/sample. Curves: IDPCM, SDPCM, and the proposed optimal decoder at delays L = 1 and L = 3.]

• Lower correlation naturally implies there is less to be gained from looking into the future
Results
Performance comparison of competing delayed decoders for a source with Laplacian innovations and $\rho = 0.95$

[Figure: SNR gain over regular DPCM (dB) versus rate (bits/sample), 0.25 to 2 bits/sample. Curves: IDPCM, SDPCM, and the proposed optimal decoder at delays L = 1 and L = 3.]
Other contributions
• Codebook approach trades computational complexity for memory
• Proposed an approach for codebook-size reduction via an index-mapping technique, with very minimal performance loss
• Optimal and codebook approaches readily extend to higher-order sources (equivalence via an appropriate first-order vector AR process)
• The index window employed in the codebook can be extended to include a few past indices: useful in the case of higher-order sources
• Training-set based design, and codebook-based operation, are particularly attractive for higher-order sources (due to the higher dimensionality involved)
Summary
• Proposed an estimation-theoretic approach for optimal delayed decoding in predictive coding systems
• Combines all information known at the decoder in a recursively calculated conditional pdf
• Motivates a codebook-based delayed decoder that is nearly optimal even for modest dimensions
• Substantial performance gains compared to prior smoothing/filtering techniques
Future directions
• Encoder optimization based on the proposed delayed decoder
• Employ delayed reconstructions for prediction via the local decoder
• Delayed decoding in adaptive predictive coding scenarios
• Application to speech/audio coding in Bluetooth systems
• Delayed decoding codebook adaptation techniques