
Dynamic Quantization for Maximum Likelihood Sequence Detection of PAM Signaling





Marc P. C. Fossorier, Member, IEEE

Abstract- Maximum likelihood sequence detection (MLSD) provides optimum detection for intersymbol interference (ISI) channels subject to additive white Gaussian noise (AWGN) [1]. This decoding process dynamically searches for the most likely path through the ISI channel trellis, using the Viterbi Algorithm (VA). In this paper, we exploit the structure of this trellis and, for an M-point pulse amplitude modulation (PAM) constellation, present a new scheme based on dynamic quantization. This scheme becomes extremely efficient for the single memory unit channel 1 + f_1 D, as it achieves optimum MLSD with O(M) computations per decoding step, instead of O(M²) for the VA. Generalization to any finite length channel is also possible and conserves good computational efficiency.

I. INTRODUCTION

FORNEY showed that maximum likelihood sequence detection (MLSD) applied to the information lossless cascade of a matched filter followed by a sampler is the optimum detection scheme for finite impulse response channels subject to additive white Gaussian noise (AWGN) [1]. MLSD using the Viterbi Algorithm (VA) may require a substantial amount of complexity to achieve this ideal performance. In the conventional VA implementation, for an M-ary input alphabet, M path metric computations are performed for a given trellis state, followed by M - 1 path metric comparisons to determine the survivor sequence associated with this state. For a single-memory channel, for example, an M-state trellis diagram is employed and the VA leads to a computation complexity of O(M²).

In this paper, we propose a new method to select the survivor sequence associated with each trellis state for pulse amplitude modulation (PAM) signaling. The method is based on a partitioning of the complete trellis into subtrellises. A threshold-decision device, or quantizer, is defined for each subtrellis. At each decoding step, the decision thresholds are computed and used to determine the survivor sequence of each state, prior to the updating of the survivor metric. For the single-memory channel, M - 1 threshold values are needed; since one survivor-metric computation is performed per trellis state, the new algorithm leads to a computational complexity of O(M).

The paper is organized as follows. Section II describes the new decision rule for a PAM constellation. Channels with one unit of memory are considered specifically in Section III, which generalizes the algorithm developed in [2] for M = 2. Some results of [3] derived for the 1 - D channel are also extended to any single-memory channel. A method for constructing the quantizer recursively is then presented. Finally, Section IV discusses the extension of the algorithm to ISI channels of finite memory.

Paper approved by D. Divsalar, the Editor for Coding Theory and Applications of the IEEE Communications Society. Manuscript received August 27, 1994; revised February 21, 1996. This work was supported by the National Science Foundation under Grant NCR-91-15400.

The author is with the Department of Electrical Engineering, University of Hawaii, Honolulu, HI 96822 USA.

II. MLSD FOR A PAM CONSTELLATION

We consider the monic (f_0 = 1) discrete-time channel of L units of memory, f(D) = Σ_{i=0}^{L} f_i D^i, and restrict the coefficients f_i to real values. The input alphabet consists of M equiprobable, equally spaced values symmetric about the origin. We define Δ as the difference between two adjacent values of the ordered input set. The input symbols x_k are independent and identically distributed. For an input sequence x̄, the corresponding output is y_k = Σ_{i=0}^{L} f_i x_{k-i} + n_k, where the n_k are independent and identically distributed (i.i.d.) zero mean Gaussian random variables with variance N_0/2.

The optimum decision rule for minimizing the probability of a block error is based on determining the sequence of x_k that minimizes Σ_k (y_k - Σ_{i=0}^{L} f_i x_{k-i})² [1] or equivalently, after discarding the terms y_k² and scaling by 1/4,

    Σ_k (1/4) [ ( Σ_{i=0}^{L} f_i x_{k-i} )² - 2 y_k Σ_{i=0}^{L} f_i x_{k-i} ].    (1)

This computation can be realized recursively for each time instant k by the VA [1], [4]. The corresponding trellis diagram contains M^L states X = [x_{k-1}, ..., x_{k-L}], consisting of any possible L consecutive inputs prior to x_k. Each state has M diverging branches and M remerging branches. By considering the M states with identical previous L - 1 input symbols, denoted by A = [x_{k-1}, ..., x_{k-L+1}], we can construct M^{L-1} fully connected subtrellises of M states each, as depicted in Fig. 1. In each subtrellis indexed by A, we represent the starting states by [x^i_{k-L}], for i ∈ [1, M], and the destination states by [x_k]. No index is used to represent any of the M destination states, as they appear separately in the following development. Also, in the remainder of this study, we consider the decoding at step k and further simplify the notation x^i_{k-L} = x^i, since the time index is implicitly fixed. The metric M_{k-1}(x^i) is associated with each state [x^i].



Fig. 1. Trellis decomposition and labeling for PAM, subtrellis A (f_L > 0): starting states [A; M - 1], [A; M - 3], ..., [A; -(M - 1)] on the left and destination states [M - 1; A], [M - 3; A], ..., [-(M - 1); A] on the right.

M_{k-1}(x^i) represents the minimum cost over all costs associated with trellis paths terminating in state [x^i] at step k - 1. The new metric at state [x_k] is the minimum value among M metric candidates, where each metric candidate consists of the summation of M_{k-1}(x^i) and the cost associated with the trellis branch from state [x^i] to state [x_k], for i ∈ [1, M].

For each subtrellis, we define D_k(x_k, x^i, x^j) as the difference between the two metric candidates from the states [x^i] and [x^j], i ≠ j, to the state [x_k]. The difference in metrics is given by

    D_k(x_k, x^i, x^j) = M_{k-1}(x^i) + (1/4)[y_k - x_k - S(A) - f_L x^i]²
                         - {M_{k-1}(x^j) + (1/4)[y_k - x_k - S(A) - f_L x^j]²}    (2)
                       = M_{k-1}(x^i) - M_{k-1}(x^j) - a^{ij}[y_k - x_k - S(A) - p^{ij}]    (3)

where a^{ij} = f_L(x^i - x^j)/2, p^{ij} = f_L(x^i + x^j)/2, and S(A) = Σ_{u=1}^{L-1} f_u x_{k-u} is a characteristic of the subtrellis A considered. The difference in metrics D_k(x_k, x^i, x^j) indicates which of the previous candidate states leading to state [x_k] is more likely, either [x^i] or [x^j]. The branch coming from [x^i] is selected if D_k(x_k, x^i, x^j) < 0 and the branch coming from [x^j] is selected if D_k(x_k, x^i, x^j) > 0, with ties being decided arbitrarily. We define the threshold associated with the two states [x^i] and [x^j] as

    T_k(x^i, x^j) = y_k - S(A) - p^{ij} - [M_{k-1}(x^i) - M_{k-1}(x^j)] / a^{ij}.    (4)

The difference in metrics can be expressed in terms of this threshold by

    D_k(x_k, x^i, x^j) = a^{ij} [x_k - T_k(x^i, x^j)].    (5)

To ensure a^{ij} > 0 for i < j, without loss of generality, we label the states in the trellis such that f_L x^i > f_L x^j for i < j. Therefore, for a PAM constellation with adjacent values separated by Δ, x^{i+1} = x^i - s_L Δ, where s_L = 1 if f_L > 0 and s_L = -1 if f_L < 0.

Immediate properties follow from the definitions of D_k( ) and T_k( ). First, we observe the symmetries T_k(a, b) = T_k(b, a) and D_k(x_k, a, b) = -D_k(x_k, b, a). We also readily show that, for any a, b, and c,

    a^{ac} T_k(a, c) = a^{ab} T_k(a, b) + a^{bc} T_k(b, c).    (6)

Thus, for a < b < c, the threshold associated with a and c lies between the threshold associated with a and b and the threshold associated with b and c. In particular, for three consecutive labeled signals of the PAM constellation a, b = a + Δ, and c = a + 2Δ,

    T_k(a, a + 2Δ) = (1/2)[T_k(a, a + Δ) + T_k(a + Δ, a + 2Δ)].    (7)

Any threshold can be computed from the M - 1 values T_k(a, a + Δ) using (6).
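Properties (6) and (7) are easy to sanity-check numerically against the threshold expression (4) as reconstructed above. The following short Python sketch (an illustration written for this rewrite, not part of the paper; the metric values, f_L, and y_k are arbitrary placeholders) does so for a single-memory channel, for which S(A) = 0.

```python
import random

# Minimal numerical check of (6) and (7) for thresholds computed as in (4),
# here for a single-memory channel (L = 1, so S(A) = 0). All values are
# arbitrary placeholders, not taken from the paper.
f_L, Delta = 0.7, 2.0
y_k = random.uniform(-5.0, 5.0)
a, b, c = 3.0, 1.0, -1.0                              # three equally spaced signals
M_prev = {v: random.random() for v in (a, b, c)}      # survivor metrics M_{k-1}

def T(u, v):
    """Threshold T_k(u, v) of (4) with S(A) = 0."""
    a_uv = f_L * (u - v) / 2.0
    p_uv = f_L * (u + v) / 2.0
    return y_k - p_uv - (M_prev[u] - M_prev[v]) / a_uv

def a_coef(u, v):
    return f_L * (u - v) / 2.0

# Property (6): a^{ac} T(a,c) = a^{ab} T(a,b) + a^{bc} T(b,c)
assert abs(a_coef(a, c) * T(a, c)
           - (a_coef(a, b) * T(a, b) + a_coef(b, c) * T(b, c))) < 1e-9
# Property (7): for three equally spaced signals, T(a, c) is the midpoint
assert abs(T(a, c) - 0.5 * (T(a, b) + T(b, c))) < 1e-9
```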

Equation (5) shows that comparing the difference in metrics D_k(x_k, x^i, x^j) to zero is equivalent to comparing the threshold T_k(x^i, x^j) associated with the pair of corresponding states to the input symbol x_k.


Fig. 2. Labeling for the 1 + f_1 D channel.

Since x_k belongs to the signaling set, this suggests a quantizer with thresholds T_k(x^i, x^j) applied to the input constellation. For the particular 1 ± D channel with bipolar inputs, this quantizer reduces to a single threshold, as described in [2]. Note that this is a time-variant quantizer with a fixed input, not the usual time-invariant quantizer with a variable input. At each destination state [x_k] of the considered subtrellis, we have to select one of the M merging branches. For i < j, the decision rule is obtained from (5) as: "if x_k < T_k(x^i, x^j) (or equivalently, D_k(x_k, x^i, x^j) < 0), then select the branch from state [x^i]; otherwise, select the branch from state [x^j]." For each subtrellis A, the basic algorithm becomes the following.

1) Compute the thresholds T_k(x^i, x^j) according to (4).
2) Compare each input value (i.e., each possible x_k) with the thresholds T_k(x^i, x^j).
3) For each state [x_k], select the surviving branch x^i → x_k, then compute and store the new metric M_k = M_{k-1}(x^i) + (1/4)(y_k - Σ_{j=0}^{L} f_j x_{k-j})².

For each subtrellis, the associated quantizer can be designed independently of the others. The advantage of the new method comes from the fact that, per subtrellis, the computation of path metrics that are not needed is avoided. This is achieved by first discarding, with threshold comparisons, M - 1 survivor-sequence candidates per trellis state, and then computing the path metric for the surviving path only. Furthermore, one set of thresholds is used for all the states in a subtrellis. Since it is not known a priori which state metrics will be pairwise compared, it may appear that C(M,2) = M(M-1)/2 thresholds are needed. However, we show in the following section that M - 1 thresholds are sufficient to completely determine the branch survivors for the 1 + f_1 D channel.
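To make the branch-selection procedure concrete, here is a minimal Python sketch of one decoding step for the single-memory channel (L = 1, so S(A) = 0 and there is a single subtrellis of M states). It is an illustration written for this rewrite, not the author's implementation; the function name dq_step and the data layout are hypothetical, the threshold expression is (4) as reconstructed above, and the branch cost uses the (1/4)(·)² form used in the worked example of Section III-D. It assumes the M - 1 adjacent thresholds are nondecreasing, which Section III shows to hold for the 1 + f_1 D channel.

```python
def dq_step(y_k, prev_metrics, symbols, f1):
    """One dynamic-quantization decoding step for the 1 + f1*D channel (L = 1).

    symbols      : PAM alphabet ordered so that f1*x decreases with the index,
                   i.e. x^{i+1} = x^i - s1*Delta (e.g. [3, 1, -1, -3] for f1 > 0).
    prev_metrics : survivor metrics M_{k-1}(x^i), same ordering as `symbols`.
    Returns (survivor index, updated metric) for every destination state [x_k].
    """
    M = len(symbols)
    # Step 1: the M - 1 adjacent thresholds T_k(x^i, x^{i+1}) from (4), S(A) = 0.
    thresholds = []
    for i in range(M - 1):
        a_ij = f1 * (symbols[i] - symbols[i + 1]) / 2.0     # a^{i,i+1} > 0
        p_ij = f1 * (symbols[i] + symbols[i + 1]) / 2.0     # p^{i,i+1}
        thresholds.append(y_k - p_ij
                          - (prev_metrics[i] - prev_metrics[i + 1]) / a_ij)

    survivors, new_metrics = [], []
    for x_k in symbols:                                     # each destination state [x_k]
        # Step 2: quantize x_k against the (nondecreasing) thresholds.
        i = sum(1 for t in thresholds if x_k >= t)          # index of the surviving branch
        # Step 3: one branch-metric computation, for the survivor only.
        branch_cost = 0.25 * (y_k - x_k - f1 * symbols[i]) ** 2
        survivors.append(i)
        new_metrics.append(prev_metrics[i] + branch_cost)
    return survivors, new_metrics
```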

III. PAM CONSTELLATION ON THE 1 + f_1 D CHANNEL

A. Dynamic Quantization versus Conventional VA

We present a theorem which shows that not all the metrics computed by the VA need to be considered for selecting the survivors. As depicted in Fig. 2, we assume that the last branch constituting the minimum path leading to state [p], with associated metric M_k(p), starts at state [b], with associated metric M_{k-1}(b). Based on the knowledge of M_k(p), we consider the metric computation at state [q].

Theorem 1: Given that the last branch of the most likely path to state [p] starts at state [b], a necessary condition to obtain m(i) ≤ m(b) at state [q] is f_1(i - b)(q - p) ≤ 0, where m(i) represents the metric candidate from state [i] at state [q].

Proof: M_k(p) being the metric at state [p] with the surviving branch coming from state [b] implies

    4M_{k-1}(b) + (y_k - p - f_1 b)² ≤ 4M_{k-1}(i) + (y_k - p - f_1 i)²

or equivalently

    4M_{k-1}(b) ≤ 4M_{k-1}(i) + f_1²(i² - b²) - 2f_1(i - b)(y_k - p).    (8)

Similarly, m(i) ≤ m(b) implies

    -4M_{k-1}(b) ≤ -4M_{k-1}(i) - f_1²(i² - b²) + 2f_1(i - b)(y_k - q).    (9)

The proof is achieved by summing (8) and (9), which maintains the inequality. □

Theorem 1 shows that, from the knowledge of one metric at step k, only certain states have to be considered for the remaining metric computations of step k. This fact is ignored by the VA, which considers each trellis state independently, resulting in many unnecessary computations. On the other hand, our new decoding approach implicitly exploits this fact, since a quantizer can discard several states based on one comparison. Surprisingly, we notice that Theorem 1 does not depend on the noisy received sequence y_k and provides a time-invariant condition. Finally, this theorem also shows that for f_1 > 0, there must always be crossings in the extensions of the survivor sequences (unless, trivially, two survivor sequences branch out from the same previous state). Fig. 3, constructed for f_1 = 0.5, is a good example of this property. Similarly, for f_1 < 0, there cannot be crossings in the extensions of the survivor sequences. This property was also observed in [3] for the 1 - D channel.

At decoding step k, we define the nth antecedent of a state [b] as the state extended at decoding step k - n and belonging to the most likely path terminating in [b]. Then, by applying a chain argument to Theorem 1, an immediate corollary follows.

Corollary 1: The nth antecedents [b_{k-n}] and [c_{k-n}] of the surviving paths to the states [b] and [c] satisfy (-f_1)ⁿ(b_{k-n} - c_{k-n})(b - c) ≥ 0.

Corollary 1 shows that the paths terminating in [b] and [c] either run without crossing for f_1 < 0, or cross for all n for f_1 > 0.

B. Quantizer Description

The general decision rule as presented in Section II requires the computation of the C(M,2) = M(M-1)/2 thresholds T_k(x^i, x^j) to entirely determine the quantizer associated with each subtrellis. In this section, we show that the quantizer composed of the M - 1 thresholds T_k(x^i, x^{i+1}), i ∈ [1, M - 1], is sufficient for the branch selection of a single unit memory channel (L = 1). Thus, as discussed in Section III-A, dynamic quantization exploits Theorem 1 and requires fewer computations than the VA, which computes M² metric candidates. This result is important as it not only minimizes the number of threshold computations per step, but also guarantees the same quantizer structure for each step.

Fig. 3. Decoding example for the 1 + 0.5D response channel (received values y = 2.54, 2.47, 4.53, -1.24).

We first illustrate this result by a simple example. Let us consider the transmission of three signal points a < b < c over f(D) = 1 + f_1 D (f_1 < 0). If we assume T_k(a, b) < T_k(b, c) for all k, then (6) guarantees T_k(a, b) ≤ T_k(a, c) ≤ T_k(b, c). If x_k < T_k(a, b), then x_k < T_k(a, c) and the branch from state [a] is selected. Symmetrically, if x_k ≥ T_k(b, c), then also x_k ≥ T_k(a, c) and the branch from state [c] is selected. Finally, if x_k ≥ T_k(a, b) and x_k < T_k(b, c), then the branch from state [b] is selected. The position of x_k with respect to T_k(a, c) in the interval [T_k(a, b), T_k(b, c)] is irrelevant to the decision process. For this particular example, the computation of T_k(a, b) and T_k(b, c) is therefore sufficient, provided T_k(a, b) < T_k(b, c) is satisfied.

Generalization of this example to any M-point constellation follows in a similar fashion. Using (6), the necessary condition to compute only M - 1 thresholds becomes T_k(x^i, x^{i+1}) ≤ T_k(x^{i+1}, x^{i+2}) for all i ∈ [1, M - 1] or equivalently, for an M-point PAM constellation with signal separation Δ,

    M_{k-1}(x^i) + M_{k-1}(x^{i+2}) - 2M_{k-1}(x^{i+1}) ≥ - f_1²Δ²/2.    (10)

The main result that the quantizer for any single memory channel has M - 1 thresholds is shown by verifying (10). First, a theorem and an immediate corollary are necessary to develop the relationship between the metrics of the most likely paths to three consecutive states, i.e., M_{k-1}(x^i), M_{k-1}(x^{i+1}), and M_{k-1}(x^{i+2}).

Theorem 2: In the trellis of a single memory channel, the most likely paths starting from a common state [b_0] and ending at the states [b_k] and [c_k], with c_k = b_k ± Δ, satisfy |b_l - c_l| ≤ Δ for 0 ≤ l < k. In addition, the most likely paths to [b_k] and [c_k] are unique for single memory channels with |f_1| ≠ 1, since the catastrophic channel with |f_1| = 1 may have other paths with equal minimum cost.

The proof of this theorem is given in Appendix A. Theorem 2 shows that each pair of symbols (b_l, c_l) belonging to the most likely paths terminating in [b_k] and [b_k ± Δ] differ at most by Δ. In [3], it is shown that for the 1 - D channel, if the most likely and second most likely paths are adjacent, then they remain adjacent for the remaining operations. As a result, Theorem 2 can be viewed as a generalization of this result to any pair of adjacent states and to any 1 + f_1 D channel.

Corollary 2: Two consecutive states [b_k] and [c_k] = [b_k ± Δ] in the trellis have two antecedent states whose difference verifies either |b_{k-1} - c_{k-1}| = 0 or |b_{k-1} - c_{k-1}| = Δ.

Corollary 2, which is simply the application of Theorem 2 at step k - 1, will be used in the proof of the next theorem. This corollary is also important when the metric computations are realized in serial with the algorithm of [1]. The choice of the metric for the first state of the trellis requires M metric computations and M - 1 comparisons, but the remaining M - 1 other states just require two metric computations and one comparison.

Theorem 3 (Main Result): At each decoding step, the quantizer for an M-point PAM constellation is composed of the M - 1 thresholds T_k(x^i, x^{i+1}), for i ∈ [1, M - 1].

The proof is presented in Appendix B. Theorem 3 guarantees that at each decoding step, the quantizer consists of the same M - 1 time variant thresholds. Since the quantizer structure is time invariant, an adaptive updating of the thresholds is possible, as shown in the following section.

C. Quantizer Update

At each step k, the M - 1 thresholds T_k(x^i, x^{i+1}) can be directly obtained from the metrics M_{k-1}(x^i) and M_{k-1}(x^{i+1}) using (4). These two metrics have been computed after the branch selection of the previous step k - 1, as described in Section II. We now present a method which directly evaluates the quantizer at step k from the thresholds computed at step k - 1.

At step k - 1, the two adjacent trellis states [a_{k-1}] and [b_{k-1}] = [a_{k-1} - s_1Δ] have associated metrics M_{k-1}(a) and M_{k-1}(b). Their corresponding antecedent states are respectively [a_{k-2}], with associated metric M_{k-2}(a), and [b_{k-2}] = [a_{k-2} + K_1Δ], with associated metric M_{k-2}(b), where K_1 has been defined in Appendix A. Then (4) expresses T_k(a_{k-1}, b_{k-1}) in terms of M_{k-1}(a) and M_{k-1}(b), so that, after expressing the last contribution to M_{k-1}(a) and M_{k-1}(b) by means of the threshold of the previous step, we obtain the recursion (11), with T_k(i, i) = 0 implicitly assumed. Corollary 2 implies s_1K_1 ∈ {0, 1}, which guarantees that T_{k-1}(a_{k-2}, b_{k-2}) has already been computed. Finally, the initial computation satisfies T_1(a_0, a_0 - s_1Δ) = -f_1 a_0 + f_1 s_1Δ/2 + y_1. Particular attention should be brought to the steps following the initial computation. It is not guaranteed that the initial branch selections satisfy s_1K_1 ∈ {0, 1}, since the proof of Theorem 2 assumes that any two minimum paths to consecutive states diverge from a common past state, which may not exist yet. Consequently, we process the initial steps using (4) and test each value s_1K_1. As soon as each s_1K_1 ∈ {0, 1}, (11) becomes valid. The next example illustrates this point in detail. The computation of each new threshold requires only four adders and one constant multiplier for y_{k-1}, as shown in (11). The two values -(f_1 + 1/f_1 - s_1K_1)a_{k-1} + (f_1 + 1/f_1 - 2s_1K_1)s_1Δ/2, evaluated for s_1K_1 = 0 and s_1K_1 = 1, can be stored for each threshold. This method requires only M - 1 threshold computations per step to determine the whole quantizer and, unlike the cumulative metric computations, does not suffer from overflow problems.

D. Example

We consider the 4-PAM constellation {x^1 = 3, x^2 = 1, x^3 = -1, x^4 = -3}, the ISI channel with response 1 + 0.5D, whose trellis is given in Fig. 3, and the received sequence ȳ = [2.54, 2.47, 4.53, -1.24, ...]. We provide in detail the first four steps processed by the adaptive quantizer, and give in Fig. 3 the equivalent decoding based on the VA and the cumulative metric computations.

At Step 1, we compute the initial thresholds T^{12} = 1.54, T^{23} = 2.54, and T^{34} = 3.54, where T^{ij} represents the threshold associated with states x^i and x^j. Since x^4 < x^3 < x^2 < T^{12} and T^{23} < x^1 < T^{34}, we select the branch from x^1 for the destination states x^4, x^3, and x^2, and the branch from x^3 for the destination state x^1. As discussed at the end of the previous section, we notice that for the destination states x^1 and x^2, s_1K_1 = 2 for the next decoding step. We need, therefore, to compute the four metrics corresponding to our branch selection. If M^i denotes the metric at the destination state x^i, we obtain M^1 = M^2 = 0.0004, M^3 = 1.0404, and M^4 = 4.0804.

At Step 2, by using (4), we compute T^{12} = 0 - 1 + 2.47 = 1.47, T^{23} = -2·(-1.04) - 0 + 2.47 = 4.55, and T^{34} = -2·(-3.04) + 1 + 2.47 = 9.55, which implies x^4 < x^3 < x^2 < T^{12} and T^{12} < x^1 < T^{23}. We select the branch from x^1 for the destination states x^4, x^3, and x^2, and the branch from x^2 for the destination state x^1. Note that s_1K_1 = 1 corresponds to a change of decision region between two consecutive destination states, while for s_1K_1 = 0 we remain in the same decision region and therefore select the same branch. Based on this fact, the values s_1K_1 to be used in the next step are easily updated during the branch selection process. Since all s_1K_1 ∈ {0, 1}, we can now use the recursive implementation.

Before starting Step 3 and the inductive process, we evaluate for each threshold T^{ij} the two corresponding constants C^{ij}(s_1K_1) = -(f_1 + 1/f_1 - s_1K_1)x^i + (f_1 + 1/f_1 - 2s_1K_1)s_1Δ/2. We obtain C^{12}(0) = -5, C^{12}(1) = -4, C^{23}(0) = 0, C^{23}(1) = -1, C^{34}(0) = 5, and C^{34}(1) = 2. From (11), we then compute T^{12} = -1.47 - 1 + C^{12}(1) + 2·2.47 + 4.53 = 3.0, T^{23} = 0 - 3 + C^{23}(0) + 2·2.47 + 4.53 = 6.47, and T^{34} = 0 - 3 + C^{34}(0) + 2·2.47 + 4.53 = 11.47. Therefore, since x^4 < x^3 < x^2 < T^{12}, we select the branch from x^1 for the destination states x^4, x^3, and x^2. For the destination state x^1, since x^1 = T^{12}, we can choose either the branch from x^1 or the branch from x^2. We verify in Fig. 3 that both branches provide the same cumulative metric at state x^1.

Finally, at Step 4, we compute T^{12} = 0 - 3 + C^{12}(0) + 2·4.53 - 1.24 = -0.18 (= -3 - 1 + C^{12}(1) + 2·4.53 - 1.24), T^{23} = 0 - 3 + C^{23}(0) + 2·4.53 - 1.24 = 4.82, and T^{34} = 0 - 3 + C^{34}(0) + 2·4.53 - 1.24 = 9.82. We obtain x^4 < x^3 < T^{12} < x^2 < x^1 < T^{23}, so that we select the branch from x^1 for the destination states x^4 and x^3, and the branch from x^2 for the destination states x^2 and x^1. The decoding continues in the same way without computing any cumulative metric. When the number of steps equals the chosen window size, the decoder releases its first decision as for the VA. However, since the metrics are no longer available, no information is provided about the most likely path. We need to ensure that the window size chosen is large enough to have a unique surviving path remaining when a decision is made. For this channel, we observe from simulations that a window size of ten is sufficient, even at low SNR's.
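As a rough check of the numbers above, the dq_step sketch introduced after the basic algorithm of Section II reproduces Steps 1 and 2 of this example. The starting metrics are taken equal to zero, which is an assumption of this illustration rather than a statement from the paper.

```python
symbols = [3, 1, -1, -3]           # 4-PAM, channel 1 + 0.5D
metrics = [0.0, 0.0, 0.0, 0.0]     # equal metrics before the first step (assumption)
for y in (2.54, 2.47):
    survivors, metrics = dq_step(y, metrics, symbols, f1=0.5)
    print(survivors, [round(m, 6) for m in metrics])
# Step 1 prints survivors [2, 0, 0, 0] (branch from x^3 into x^1, from x^1 elsewhere)
# and metrics [0.0004, 0.0004, 1.0404, 4.0804], matching M^1, ..., M^4 in the text.
# Step 2 prints survivors [1, 0, 0, 0], i.e. the branch from x^2 into x^1,
# with metrics [0.265625, 0.000625, 0.970625, 3.940625].
```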

IV. PAM CONSTELLATION ON FINITE LENGTH ISI CHANNEL

A. Trellis Structure

We first show that Theorem 1 can be generalized to the subtrellis structure described in Fig. 1. Therefore, information not considered by the VA is also available for the branch selection process.

Theorem 1 (Generalized): For each subtrellis A, given that the last branch of the most likely path to state [p_k] starts at state [b_{k-1}], a necessary condition to obtain m(i) ≤ m(b) at state [q_k] is f_L(i_{k-1} - b_{k-1})(q_k - p_k) ≤ 0, where m(i) represents the metric candidate value from state [i_{k-1}] at state [q_k].

Proof: The proof is a straightforward generalization of the proof of Theorem 1. □

B. Generalized Dynamic Quantization for Finite Length ISI Channels

In Appendix B, we showed that (10) is equivalent to the general definition of a convex-∪ function f(x̄) [6, p. 84]. Also, if the minimum of this function is a valid signaling sequence, then (10) is satisfied.


However, since the discrete signaling constellation is not a convex set, this condition is not guaranteed to be satisfied. For the 1 + f_1 D channels, a sufficient condition was always satisfied, as shown in the second part of Appendix B. The generalization of this sufficient condition to finite length ISI channels is given in Appendix C. Unfortunately, this generalized condition is no longer satisfied for all ISI channels with L ≥ 2.

Based on Appendix C, it is possible to define a class of finite length ISI channels for which the results of Appendix B are generalized in a straightforward way. These channels are characterized by Σ_{j=0}^{L} (Σ_{i=0}^{L-j} f_i f_{i+j}) δ_j ≥ 0, for all possible combinations of integers δ_j such that δ_0 = 1 and |δ_j| ≤ 2, j ≠ 0. For example, any 1 + f_1 D channel as well as any 1 + f_1 D + f_2 D² channel with f_2 < 0 belongs to this class. However, as L increases, many ISI channels, and in particular partial-response channels of the type (1 + D)(1 - D)^p, p ≥ 2, no longer satisfy the corresponding sufficient condition.
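The membership test for this class is easy to automate. The following sketch (mine, not from the paper) enumerates the integer combinations δ_j and checks the sufficient condition in the reconstructed form given above; the three test channels are the ones mentioned in the text.

```python
from itertools import product

def in_dq_class(f, delta_max=2):
    """Check the sufficient condition of Appendix C (as reconstructed above):
    sum_j r_j * delta_j >= 0 for every integer choice with delta_0 = 1 and
    |delta_j| <= delta_max, where r_j = sum_i f_i * f_{i+j}."""
    L = len(f) - 1
    r = [sum(f[i] * f[i + j] for i in range(L + 1 - j)) for j in range(L + 1)]
    for deltas in product(range(-delta_max, delta_max + 1), repeat=L):
        if r[0] + sum(d * rj for d, rj in zip(deltas, r[1:])) < 0:
            return False
    return True

print(in_dq_class([1, 0.5]))           # 1 + 0.5D           -> True
print(in_dq_class([1, 0.5, -0.2]))     # 1 + 0.5D - 0.2D^2  -> True (f2 < 0)
print(in_dq_class([1, -1, -1, 1]))     # (1 + D)(1 - D)^2   -> False
```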

Indeed, if (10) happens to fail, dynamic quantization becomes suboptimum. Nevertheless, it should still perform close to MLSD, since the most likely signaling sequence should be in the neighborhood of the unconstrained optimum sequence defined in Appendix A. The results of [5] strengthen this conjecture. Also, for all ISI channels we simulated, both dynamic quantization and conventional VA decoding provided MLSD. However, for some cases the sufficient condition of Appendix C was no longer satisfied. We leave this as an open problem.

Open Problem: Prove (or disprove) that dynamic quantization provides MLSD for any finite length ISI channel with PAM signaling.

C. Quantizer Update

In this section, we present an inductive method which updates each subtrellis quantizer without computing the state metrics. This method is not as simple as the one developed in Section III-C, since the destination states of a subtrellis at step k belong to different subtrellises at step k + 1. In Appendix D, based on Fig. 8, we show that, for the subtrellis A = [a_1, ..., a_{L-1}], each threshold of the quantizer can be updated according to (12) from quantities of the form R_{k-1}^{[a_1, ..., a_L]}(j); the values R_k^{[a_1, ..., a_L]}(1) are obtained from (13), and, for j ∈ {1, ..., L - 1}, R_k^{[a_1, ..., a_L]}(j + 1) is recursively computed from R_{k-1}^{[a_2, ..., a_{j+1}, ..., a_L, a_{L+1}]}(j) through (14).

Equation (12) expresses that two adders are required to obtain each threshold value, since the last part of the equation is constant for the state considered and can be preprocessed and stored.

After the branch selection from the quantizer, if we choose to update the metric at each state, M^L metric computations are realized. In Appendix D, the number of R_k^{[a_1, ..., a_L]}(j), j ∈ [1, L], is evaluated. We show that an inductive method requires L(M - 1)M^{L-1} computations of the R_k^{[a_1, ..., a_L]}(j)'s and therefore becomes more costly than the metric updates by approximately a factor of L. The inductive quantizer construction based on these values is however very fast, as only two adders are required.

In Appendix D, L(M - 1)M^{L-1} - M^L + 1 independent equations relating the R_k^{[a_1, ..., a_L]}(j)'s with few adders are determined, such that only (M^L - 1) of the R_k^{[a_1, ..., a_L]}(j)'s remain to be computed using (14). The choice of these (M^L - 1) values and the order in which the remaining R_k^{[a_1, ..., a_L]}(j)'s are evaluated using the dependency equations of Appendix D have to be fixed initially. As for the single memory channel case, we finally mention that the recursive update of the quantizer requires at least L initial steps for the trellis to reach its general structure. During these steps, we need to evaluate the metrics after the branch selection.

D. Example

We consider the 4-PAM constellation {x^1 = 3, x^2 = 1, x^3 = -1, x^4 = -3} and the ISI channel with response 1 + 0.5D + 0.2D², whose trellis is given in Fig. 4. We decompose this trellis into four subtrellises A, with associated characteristic S(A) = 0.5 x_{k-1} ∈ {±1.5, ±0.5}. At step k, we receive y_k = 4.73, and Table I provides the 24 updates of the R_{k-1}'s, corresponding to the initial state metrics of Fig. 5.

For the subtrellis A = 3, we compute, using (12), T_k^{12} = T_k(3, 1) = -R^{[3,3]}(2) + 4.73 - 1.5 - 3·0.2 + 0.2 = 2.165, T_k^{23} = T_k(1, -1) = R^{[3,1]}(2) + 4.73 - 1.5 - 1·0.2 + 0.2 = 4.185, and T_k^{34} = T_k(-1, -3) = R^{[3,-1]}(2) + 4.73 - 1.5 + 1·0.2 + 0.2 = 15.505. We compare x^4 ≤ x^3 ≤ x^2 ≤ T_k^{12} ≤ x^1, which implies the selection of the branches from x^1 to x^4, x^3, x^2 and from x^2 to x^1, as depicted in Fig. 5. For the three other subtrellises, we compute T_k^{12} > x^1, so that all branches are selected from state x^1 of the corresponding subtrellises.

Fig. 4. Trellis labeling for the 1 + 0.5D + 0.2D² response channel with 4-PAM.

TABLE I
R_{k-1}(p_i), i ∈ [1, 2], computed at the previous step

              3                 1                 -1                -3
          p_1     p_2       p_1     p_2       p_1      p_2       p_1     p_2
        -2.73   -0.665    -1.93    0.955     1.85    11.875     6.85     -
         4.01    0.135     6.85    4.735    11.85    16.875    17.15     -
         7.395     -      14.735     -      26.875      -         -      -
        12.43    2.975    16.85    9.735    21.85    21.875    26.85     -

These transitions are represented in Fig. 5. At that stage, we can either choose to update the 16 state metrics corresponding to these transitions, or update the 24 R_k's. We now describe the updating of these values. Using (13), we compute R_k^{[3,3]}(1) = -T_k(1, 3) + 1 + (4.73 - 3 - 1.5 - 0.2·1)/0.2 + 1/0.2 = 3.985 and R_k^{[3,1]}(1) = 0 + (4.73 - 3 - 0.5 - 0.2·3)/0.2 + 1/0.2 = 8.15. The ten remaining R_k(1) values follow the same computations as R_k^{[3,1]}(1), since the initial states are equal. We observe that the R_k(1)'s are easily computed, as the quantization process provides K_{L+1} and a_{L+1} without any additional operation. Similarly, (14) provides R_k^{[3,3]}(2) = R_{k-1}^{[3,3]}(1) - T_k(1, 3) + 3 + 0.5·(4.73 - 3 - 1.5 - 0.2·1)/0.2 + 0.5²/0.2 = -1.57. For the remaining eleven R_k(2), K_{L+1} = 0, and we obtain the form R_k^{[3,1]}(2) = R_{k-1}^{[3,1]}(1) + 0.5·(4.73 - 3 - 0.5 - 0.2·3)/0.2 + 0.5²/0.2 = 6.835. However, after the three computations of R_k^{[x,3]}(2), x ∈ {3, 1, -1}, we can use the dependency relations depicted in Fig. 6 for the nine remaining R_k(2), so that, for instance, R_k^{[1,3]}(2) = -R_k^{[3,3]}(1) + R_k^{[3,3]}(2) + R_k^{[3,1]}(1) = 2.595.

Fig. 5. Decoding example for the 1 + 0.5D + 0.2D² response channel.

Fig. 6. R_k dependency for L = 2 and M = 4.

V. CONCLUSION

A new method for selecting the branch remerging at each trellis state for MLSD of a PAM constellation has been presented. This method is based on a partitioning of the complete trellis into subtrellises. For each subtrellis, a quantizer working on the signaling constellation is dynamically updated at each step. Each quantizer can be computed independently, allowing a parallel implementation of this new scheme.

We first considered single unit memory ISI channels and showed that the quantizer is indeed time variant, but that its structure remains time invariant. This allows a recursive construction of the quantizer and removes the overflow problem due to the cumulative metric computations. For each subtrellis, this new method requires M - 1 threshold evaluations, while M² metric candidates are computed by the VA. While preserving optimality, dynamic quantization provides a substantial gain in computations. We then defined a subclass of ISI channels for which the quantizer composed of M - 1 thresholds provides MLSD. However, to our disappointment, we left as an open problem the generalization of this result to all finite length ISI channels. We also presented a general method to recursively update the M - 1 thresholds composing each subtrellis quantizer. This algorithm conserves O(M) computations per subtrellis. A generalization of these results to vector quantization and multi-dimensional signal constellations is discussed in [7].

Fig. 7. Trellis representation from step 0 to step k for f_1 > 0.

APPENDIX A PROOF OF THEOREM 2

We refer to Fig. 7 for the trellis configuration used in this proof and represent each discrete sequence of x_i by its associated vector x̄. At the decoding step k, we consider the most likely paths b̄ and c̄ terminating in states [b_k] and [c_k], respectively. We choose l = 0 at the diverging state between the two paths, such that c_{k-l} = b_{k-l} for l ≥ k and c_{k-l} ≠ b_{k-l} for 0 ≤ l < k. For 0 ≤ l < k, we represent c_{k-l} = b_{k-l} ± K_{k-l}Δ for some integer K_{k-l} > 0, where the sign ± is determined from Theorem 1, and define K_0 = 0. Although the notation ± may appear ambiguous, it allows both cases f_1 < 0 and f_1 > 0 to be described simultaneously and is irrelevant to the final result. The length k of the most likely paths is at least two, as k = 1 trivially verifies Theorem 2. As depicted in Fig. 7, we define the following.

1) M_k(b) and M_k(c) as the survivor metrics at states [b_k] and [c_k].

2) m_k(b) as the cost of the path diverging from the most likely path b̄ at state [b_0], going through each state [b_{k-l} ± (K_l - K_k)Δ], and remerging with the most likely path b̄ at state [b_k].

3) m_k(c) as the cost of the path diverging from the most likely path c̄ at state [b_0], going through each state [b_{k-l} ± K_kΔ], and remerging with the most likely path c̄ at state [b_k ± K_kΔ].

The signs ± in the two constructed sequences are chosen the same as in c_{k-l} = b_{k-l} ± K_lΔ, as depicted in Fig. 7 for f_1 > 0. For f_1 < 0, we observe no alternation of sign in these constructed sequences, which is justified by the last comment following Theorem 1. Note that these two last paths are constructed such that each state is at the same distance |b_k - c_k| = K_kΔ from b̄ and c̄, respectively. These definitions implicitly assume that the states [b_{k-l} ± (K_l - K_k)Δ] and [b_{k-l} ± K_kΔ] exist. After developing these four metric values, we compute the combination M_k(b) + M_k(c) - m_k(b) - m_k(c) given in (15).

For K_k = 1, we readily verify that all the states in the pair of constructed sequences are valid input symbols. Since M_k(b) and M_k(c) are the costs of the most likely paths to states [b_k] and [c_k], respectively, we have M_k(b) + M_k(c) - m_k(b) - m_k(c) ≤ 0. For K_k = 1, (15) then implies that the only valid choice K_l > 0 is K_l = K_k = 1, except for |f_1| = 1 and l ≥ 2, which corresponds to the catastrophic behavior of the Viterbi decoder. This particular case only occurs when m_k(b) = M_k(b) and m_k(c) = M_k(c). The choice of the most likely paths then becomes arbitrary, such that we can keep the solution verifying Theorem 2.

APPENDIX B PROOF OF THEOREM 3

Substituting d = x^i, c = x^{i+1}, and b = x^{i+2} in Fig. 2, we compute

    [m(b) - M_{k-1}(b)] + [m(d) - M_{k-1}(d)] - 2[m(c) - M_{k-1}(c)] = f_1²Δ²/2    (16)

so that (10) gives m(x^i) + m(x^{i+2}) - 2m(x^{i+1}) ≥ 0. Defining d̄ as the input sequence providing m(x^i), with d_{k-1} = d = x^i and d_k = q, we obtain m(x^i) = d²(ȳ, Fd̄)/4 = f(d̄), where F represents the channel linear mapping matrix [5]. With equivalent definitions for c̄ and b̄, we need to show

    (1/2) f(b̄) + (1/2) f(d̄) ≥ f(c̄).    (17)

Since f(x̄) is convex-∪ over the real numbers, we have (1/2)[f(b̄) + f(d̄)] ≥ f(x̄) for x̄ = (b̄ + d̄)/2. Despite terminating in state [q] through state [c], x̄ does not correspond to a valid transmitted sequence in general, and a nearest neighbor solution is not sufficient. We easily show, for 0 < λ < 1 and two input sequences ū and v̄,

    f(λū + (1 - λ)v̄) = λ f(ū) + (1 - λ) f(v̄) - λ(1 - λ) d²(Fū, Fv̄)/4    (18)

which implies

    f((b̄ + d̄)/2) = (1/2) f(b̄) + (1/2) f(d̄) - (1/16) d²(Fb̄, Fd̄).    (19)

We define p̄ = (b̄ + d̄)/2 + ε̄ and τ̄ = (b̄ + d̄)/2 - ε̄. With a proper choice of ε̄, p̄ and τ̄ represent two valid candidate sequences for c̄, since each symbol of the sequence (b̄ + d̄)/2 is either a valid signal point or halfway between two adjacent signal points. Since x̄ = p̄/2 + τ̄/2, (18) implies

    f(x̄) = (1/2) f(p̄) + (1/2) f(τ̄) - (1/4) ||Fε̄||².    (20)

As c̄ minimizes f(x̄) over all input sequences x̄ terminating in state [q] through state [c], (1/2)[f(p̄) + f(τ̄)] ≥ f(c̄). After regrouping this result with (19) and (20), we obtain the sufficient condition for (17) to be satisfied

    ||Fε̄||² ≤ (1/4) d²(Fb̄, Fd̄).    (21)

We can choose ε_i = ±(b_i - d_i)/2 for i < k - 1 and we need ε_{k-1} = 0, where k is the length of the two paths b̄ and d̄ when the origin is chosen at their diverging state. Since |b_{k-1} - c_{k-1}| = |c_{k-1} - d_{k-1}| = Δ, we obtain

    (1/4) d²(Fb̄, Fd̄) - ||Fε̄||² = (1 + f_1²)Δ² ± 2 f_1 Δ (b_{k-2} - d_{k-2})/2
                                ≥ (1 + f_1²)Δ² - 2 |f_1| Δ |b_{k-2} - d_{k-2}|/2.    (22)

Corollary 2 implies |b_{k-2} - d_{k-2}|/2 ≤ Δ, such that (21) is satisfied. Consequently, (17), or equivalently (10), is satisfied, which completes the proof.
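Identity (16) can be spot-checked numerically; the following sketch (an illustration with arbitrary values, not part of the original appendix, and taking s_1 = 1 for the state labeling) confirms that the combination of branch costs is independent of y_k, q, and f_1 apart from the stated constant.

```python
import random

# Spot check of (16): with branch costs (y_k - q - f1*x)^2 / 4 from the states
# d = x^i, c = x^i - Delta, b = x^i - 2*Delta into a common destination [q],
# the combination below equals f1^2 * Delta^2 / 2 for any y_k, q, and f1.
f1 = random.uniform(-2.0, 2.0)
Delta = 2.0
y_k, q = random.uniform(-5.0, 5.0), random.choice([3, 1, -1, -3])
d = 3.0
c, b = d - Delta, d - 2 * Delta
cost = lambda x: 0.25 * (y_k - q - f1 * x) ** 2
assert abs((cost(b) + cost(d) - 2 * cost(c)) - f1 ** 2 * Delta ** 2 / 2) < 1e-9
```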

APPENDIX C GENERALIZATION OF APPENDIX B

As in Appendix B, we verify whether (21) is satisfied. We again choose ε_i = ±(b_i - d_i)/2 for i < k - L and ε_{k-i} = 0, i ∈ {1, ..., L}, where k is the length of the extension of the two paths b̄ and d̄ merging at state [x_k; A] when the time origin is chosen at their diverging state. For |b_{k-1} - c_{k-1}| = |c_{k-1} - d_{k-1}| = Δ, (22) generalizes to an expression (23) involving the channel autocorrelation coefficients Σ_i f_i f_{i+j}, and this expression is not always nonnegative. Consequently, (21) is not always satisfied.

APPENDIX D INDUCTIVE QUANTIZER UPDATE

At step k, we consider the pair of states S_k^i(g) and S_k^{i+1}(g), with respective metrics M_k^i and M_k^{i+1}. S_k^i(g) and S_k^{i+1}(g) contain the same labels a_z, z ∈ {1, ..., L}, except for z = g, where S_k^i(g) has label a_g while S_k^{i+1}(g) has label a_g - s_LΔ. We define, for 1 ≤ j ≤ L, the metric difference

    R_k^{[a_1, ..., a_L]}(j) = f([a_1, ..., a_j, ..., a_L]) - f([a_1, ..., a_j - s_LΔ, ..., a_L])    (24)

where f( ) denotes the (scaled) state metric, as made precise below.

As depicted in Fig. 8 for 1 ≤ j < L, we denote by A_{k-1}^i = [a_2, ..., a_{j+1}, ..., a_L, a_{L+1}] and A_{k-1}^{i+1} = [a_2, ..., a_{j+1} - s_LΔ, ..., a_L, a_{L+1} + K_{L+1}Δ] the respective antecedents of the states S_k^i(j + 1) and S_k^{i+1}(j + 1), with associated metrics M_{k-1}^i and M_{k-1}^{i+1}. We finally consider the state Ã_{k-1}^{i+1} = [a_2, ..., a_{j+1} - s_LΔ, ..., a_L, a_{L+1}], with associated metric M̃_{k-1}^{i+1}. Note that A_{k-1}^i and Ã_{k-1}^{i+1} satisfy the state structure required by (24) and define R_{k-1}^{[a_2, ..., a_{j+1}, ..., a_L, a_{L+1}]}(j), while Ã_{k-1}^{i+1} and A_{k-1}^{i+1} satisfy the state structure required by (4) and define T_{k-1}^{[a_2, ..., a_{j+1} - s_LΔ, ..., a_L]}(a_{L+1}, a_{L+1} + K_{L+1}Δ). Separating these two contributions, we compute R_k^{[a_1, a_{j+1}, ..., a_L]}(j + 1), for 1 ≤ j < L, as in (25), in terms of R_{k-1}^{[a_2, ..., a_{j+1}, ..., a_L, a_{L+1}]}(j), of s_LK_{L+1}T_{k-1}^{[a_2, ..., a_{j+1} - s_LΔ, ..., a_L]}(a_{L+1}, a_{L+1} + K_{L+1}Δ), and of terms preprocessed from the channel coefficients f_j and f_L.

Fig. 8. State structure for the R_k(j) computations, for 1 < j ≤ L.

Also, (6) provides (26), for which the |K_{L+1}| terms have already been computed. For j = 0, Fig. 8 has to be modified, since the antecedent states are independent of a_{j+1} = a_1. Applying the same method, we compute R_k^{[a_1, ..., a_L]}(1) as in (27). Note that in (27) the state references for R_k and T_k are the same, while they differ in (25). Equations (25) and (27) provide a recursive way to compute the R_k's, and (12) follows from (4).

We represent each trellis state by a vector x̄ of dimension L. At state x̄, any scaled version of the computed metric is the result f(x̄) of a function f( ) from the signaling set to the set of real numbers. We introduce the scaled elementary basis ē_i = [0, 0, ..., 0, -s_LΔ, 0, ..., 0], composed of L - 1 zeros and -s_LΔ at position i, and simplify the general notation by defining R_j(x̄) = R_k^{[a_1, ..., a_L]}(j) for x̄ = [a_1, ..., a_L]. The position considered, previously in argument, becomes the subscript, since no time reference is required. From (24), we express R_j(x̄) = f(x̄) - f(x̄ + ē_j). Thus ē_j affects only the jth component of the argument of R_j(x̄) and prohibits the smallest value of the signaling alphabet from occupying that position. The number of R_j(x̄)'s is therefore L(M - 1)M^{L-1}, as j can take L positions with M - 1 values allowed, while the remaining L - 1 positions can take any of the M input values.

For i ≠ j, we easily derive R_i(x̄) - R_i(x̄ + ē_j) = R_j(x̄) - R_j(x̄ + ē_i). We refer to any equation of this form as "type-E2." The total number of possible "type-E2" equations is C(L,2)(M - 1)²M^{L-2}, as i and j can take C(L,2) pairs of positions with M - 1 values allowed each, while the remaining L - 2 positions have M possible choices. For L = 2, "type-E2" equations are independent of each other, since at least one different value appears in each of them. From three known R_j(x̄)'s, a fourth one can be directly evaluated, as depicted in Fig. 6 for L = 2 and M = 4. In this figure, (a, j) represents the two-dimensional vector x̄ considered, while p_a determines the position affected by ē_a. For L = 3, some "type-E2" equations become dependent. For i ≠ j ≠ k, we define "type-E3" equations R_i(x̄) - R_i(x̄ + ē_j) - R_j(x̄ + ē_k) = R_k(x̄) - R_k(x̄ + ē_j) - R_j(x̄ + ē_i), which are all independent of each other. We notice that most "type-E2" equations are included in "type-E3" equations. Since "type-E2" equations consider only two positions i and j, the only remaining independent "type-E2" equations are those whose third position is occupied by the smallest value of the signaling alphabet (forbidden in "type-E3" equations for L = 3). We count C(3,2)(M - 1) = 3(M - 1) of them. Repeating the same argument for i ∈ [2, L], the number of "type-Ei" equations is C(L,i)(i - 1)(M - 1)^i, since for the (i - 1) "type-Ei" equations, we can choose i out of L positions with M - 1 possible choices each, the remaining L - i positions being fixed to the minimum input value. The number of independent equations obtained is Σ_{i=2}^{L} C(L,i)(i - 1)(M - 1)^i = L(M - 1)M^{L-1} - M^L + 1. By using the "type-Ei" equations, we efficiently compute the R_j(x̄)'s, as only additions are required.
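The "type-E2" dependency is purely algebraic and holds for any state cost function; the short sketch below (an illustrative check, not from the paper) verifies it over a 4-PAM, L = 2 state space with a random cost standing in for the actual scaled metric f(x̄).

```python
import random

# Check of the "type-E2" dependency R_i(x) - R_i(x + e_j) = R_j(x) - R_j(x + e_i)
# over a 4-PAM, L = 2 state space; any cost function f works, so a random one is used.
Delta, symbols = 2, (3, 1, -1, -3)                 # s_L = 1 assumed
states = [(u, v) for u in symbols for v in symbols]
f = {s: random.random() for s in states}           # stand-in for the scaled state metric

def shift(x, j):                                   # x + e_j: subtract s_L*Delta at position j
    y = list(x)
    y[j] -= Delta
    return tuple(y)

def R(x, j):                                       # R_j(x) = f(x) - f(x + e_j), as in (24)
    return f[x] - f[shift(x, j)]

for x in states:
    if x[0] > -3 and x[1] > -3:                    # both shifted neighbors must exist
        lhs = R(x, 0) - R(shift(x, 1), 0)
        rhs = R(x, 1) - R(shift(x, 0), 1)
        assert abs(lhs - rhs) < 1e-12
```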

ACKNOWLEDGMENT

The author wishes to thank Dr. S. Ölçer for his detailed and helpful comments on an early version of this paper, as well as Dr. N. T. Gaarder and Dr. S. Lin for many inspiring discussions.

REFERENCES

[1] G. D. Forney, Jr., "Maximum likelihood sequence estimation of digital sequences in the presence of intersymbol interference," IEEE Trans. Inform. Theory, vol. IT-18, pp. 363-378, May 1972.
[2] M. J. Ferguson, "Optimal reception for binary partial response channels," Bell Syst. Tech. J., vol. 51, pp. 493-505, Feb. 1972.
[3] S. Ölçer and G. Ungerboeck, "Difference-metric Viterbi decoding of multilevel class-IV partial-response signals," IEEE Trans. Commun., vol. 42, pp. 1558-1570, Apr. 1994.
[4] J. K. Omura, "On the Viterbi decoding algorithm," IEEE Trans. Inform. Theory, vol. IT-15, pp. 177-179, Jan. 1969.
[5] L. C. Barbosa, "Maximum likelihood sequence estimators: A geometric view," IEEE Trans. Inform. Theory, vol. 35, pp. 419-427, Mar. 1989.
[6] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[7] M. P. C. Fossorier, "Maximum likelihood sequence detection viewed as generalized vector quantization," in preparation.

Marc P. C. Fossorier (S'90-M'95), for a photograph and biography, see p. 1106 of the September issue of this transactions.