
Constrained quantization




Signal Processing 83 (2003) 1839–1858

www.elsevier.com/locate/sigpro

Thomas Kämpke

Forschungsinstitut für anwendungsorientierte Wissensverarbeitung FAW, Helmholtzstr. 16, D-89081 Ulm, Germany

Received 22 October 2002; received in revised form 26 March 2003

Abstract

Discrete functions over a continuous domain are approximated by discrete functions with fewer levels. These quantizations are endowed with different types of constraints such as monotonicity and variational constraints. Quantization functions exactly or approximately minimize the squared error to a given discrete function. Exact algorithms are derived from dynamic programming with finite horizon. All algorithms have polynomial run time.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Constrained optimization; Dynamic programming; Labeling algorithm; Minimum square errors

1. Introduction

Quantization is the common issue of many problems from signal processing such as compression, image segmentation, adaptation, classification and even learning. In all cases considered, a discrete, real-valued function is to be approximated or "smoothed" by fitting it to a discrete function with fewer levels. The number of levels of the approximating function is specified externally and the approximation is then either unconstrained or constrained by further conditions.

Allowing constraints to quantization is the focus of this work. Starting from results on unconstrained quantization, the intention here is to develop the approach further rather than to detail one application. The particular constraints given below should by no means be understood to be exhaustive, but they illustrate the scope of the approach.

The methodology for computing optimal quantizations is inherently discrete rather than a discretization of continuous concepts. The computation of optimal quantizations will be related to computations of shortest paths in the so-called quantization graph under various conditions. These conditions depend on the quantization constraints. The strength of the analogy between optimal quantization and shortest paths varies with the type of constraints. Whenever possible, dynamic programs will be given which are similar to the one for shortest path computations in acyclic graphs. Methods from continuous constrained optimization will be used for arguments and for computational schemes when dynamic programming does not apply.

E-mail address: [email protected] (T. Kämpke).

0165-1684/03/$ - see front matter © 2003 Elsevier B.V. All rights reserved.
doi:10.1016/S0165-1684(03)00104-X


The remainder of this work is organized as follows. Sections 2 and 3 formally introduce the quantization problem in its pure version and in constrained versions, respectively. The constraints include bounds on values and monotonicity requirements as well as variational bounds and value spreading. Finally, the joint quantization of two functions instead of one is considered. This problem is related to superresolution representation of multiple signals. Structural properties and solution algorithms are proposed in Section 4 for the pure problem and in Section 5 for constrained problem versions. The algorithms are either exact or approximate. In all cases run time is bounded polynomially in the problem size. Section 6 gives a brief summary and outlook.

:= and :⇔ denote a definition. For any function $f$, the set of values where it is defined is called its domain and the set of values that it actually attains is called its range, denoted by $\operatorname{ran}(f)$.

2. Quantization

A discrete function $f: D \to \mathbb{R}$ is given over a compact domain $D = [a,b]$ of reals where $a < b$. Function $f$ attains a finite number of values only and will be written as $f(x) = \sum_{i=1}^{N-1} v_i 1_{(w_i, w_{i+1})}(x)$, $x \in D$, where $1_A(\cdot)$ denotes the indicator function of some set $A$; $1_A(x) = 1$ for $x \in A$ and $1_A(x) = 0$ for $x \notin A$. The break points $w_1 < \cdots < w_N$ form a partition of $D$, i.e. $w_1 = a$ and $w_N = b$. Break points may be spaced evenly or unevenly, and the two limiting break points at the domain's boundary will often be skipped for notational convenience. To avoid trivial complications, values of all discrete functions at the very break points will be ignored and, apart from few exceptions, $f \in S_N$ is assumed to have $N-1$ proper segments, meaning that adjacent levels are different, i.e. $v_i \neq v_{i+1}$, $i = 1, \ldots, N-2$.

The set $S_n$ contains all step functions over $D$ which attain at most $n-1$ values. The distance between two integrable functions $g_1, g_2$ over $D$ will usually be inferred by the standard 2-norm $\|g_1 - g_2\|_2 = \|g_1 - g_2\| = \sqrt{\int_D (g_1(x) - g_2(x))^2\,dx}$.

2.1. Pure quantization

An optimal quantization of $f \in S_N$ with respect to $S_n$, $2 \le n \le N$, is given by a function $s_n^0 \in S_n$ such that

$$s_n^0 = \operatorname{argmin}_{s_n \in S_n} \|f - s_n\|.$$

As this minimization problem will give rise to several variations that all have additional constraints, it is called the pure quantization problem. A discrete function and a quantization function which is not optimal are shown in Fig. 1. The quantization being not optimal will become obvious in Section 4.

The value $\|f - s_n\|$ is called the quantization error. A quantization function $s_n$ will usually be denoted by $s_n(x) = \sum_{i=1}^{n-1} y_i 1_{(x_i, x_{i+1})}(x)$, $x \in D$, where $x_1 = a$ and $x_n = b$. Lower indices such as in $s_{n,0} = \sum_{i=1}^{n-1} y_{i,0}\, 1_{(x_i, x_{i+1})}$ will denote locally optimal or heuristic choices while upper indices will denote globally optimal choices.

Fig. 1. Discrete function $f$ (bold lines) from $S_9$ and a quantization $s_5$ (thin lines).


2.2. Related work

The present type of quantization differs from the celebrated area of vector quantization, comp. for example [5], with respect to the optimization objective. The objective of (many instances of) vector quantization can be written as $\|f - s_n\|_{\text{quant}} = \sum_{i=0}^{n-1} \int_{x_i}^{x_{i+1}} (x - y_i)^2 f(x)\,dx$, comp. [2]. A consequence of the objectives being different is that vector quantization assigns break points where the values (the "energy levels") of the function are large, while the present form of quantization assigns break points where the change of the signal is large. Irrespective of the differences, vector quantization has similar aims such as compression, codebook generation, classification via clustering, etc. Extensions and applications of vector quantization include learning methods, see for example [7], and speech transmission, see for example [3].

Quantization for analysing, especially smoothing, images is related to total variation minimization. There, typically, a functional is used that explicitly balances between the smoothing and the approximation issues, comp. [13,14]. These methods are of considerable conceptual difficulty, which often prevents their implementation and application. Furthermore, the methods often settle for local minimization only.

Quantization in the present sense, which could be called functional quantization or signal quantization, has been investigated for the case of the original functions being continuous. The objective is multiextremal in that case and globally convergent optimization procedures based on fixed point methods and global Lipschitz optimization have been devised, see [8]. The discrete case in unconstrained form has been related to finite optimization problems and graph algorithms, see [9].

Discrete quantization has been applied to non-parametric image segmentation by quantizing intensity histograms of images. The procedure can be used as part of a discrete scale-space approach, see [11], and as part of a recursion scheme, see [12].

3. Constrained quantization

3.1. Clamped quantization

The quantization function is chosen to be fixed over one or more intervals and only the remaining sections are allowed to vary. The clamped quantization problem is given by

$$\min_{s_n \in S_n} \|f - s_n\| \quad \text{such that } s_n(x) = v^{\text{fix}}_{ij} \text{ for } x \in (w_i, w_j),$$

where $(i,j) \in J \subset \{1, \ldots, N-1\}^2$ with $j \ge i+1$ and $|J| < n-1$. Distinct open intervals over which the quantization is clamped are assumed to have a void intersection, i.e. they may meet at boundaries but they must not overlap.

3.2. Bounded quantization

Bounded quantization involves two extra functions $f_l \in S_N$ and $f_u \in S_N$ which serve as lower and upper bounds for the quantizing function. The bounds satisfy $f_l(x) \le f(x) \le f_u(x)$ over the domain. The bounded quantization problem is then defined by

$$\min_{s_n \in S_n} \|f - s_n\| \quad \text{such that } f_l(x) \le s_n(x) \le f_u(x).$$


A special case of bounded quantization is tube quantization. Lower and upper bound have a constant width by which they sandwich the original function. The tube quantization problem is formally given by

$$\min_{s_n \in S_n} \|f - s_n\| \quad \text{such that } f(x) - c \le s_n(x) \le f(x) + c.$$

The value $c > 0$ is fixed. Tube quantization as well as bounded quantization in general need not have feasible solutions if bounds are too tight or if the number of admitted quantization levels is too small.

3.3. Monotone quantization

Monotone quantization involves monotonicity requirements on the quantizing function. These monotonicity requirements are independent from monotonicity features of the original function. The increasing quantization problem is specified by

$$\min_{s_n \in S_n} \|f - s_n\| \quad \text{such that } s_n \uparrow,$$

while the decreasing quantization problem is specified by

$$\min_{s_n \in S_n} \|f - s_n\| \quad \text{such that } s_n \downarrow.$$

The quantizing function being increasing, i.e. $y_1 \le y_2 \le \cdots \le y_{n-1}$, is denoted by $s_n \uparrow$ and the quantizing function being decreasing, i.e. $y_1 \ge y_2 \ge \cdots \ge y_{n-1}$, is denoted by $s_n \downarrow$.

3.4. Variational quantization

Variational quantization is understood as a soft version of bounded quantization. It amounts to pure quantization under the additional constraint that the total variation of the quantizing function is externally bounded:

$$\min_{s_n \in S_n} \|f - s_n\| \quad \text{such that } \operatorname{Var}(s_n) \le c.$$

The value $c \ge 0$ is assumed to be given and the total variation of a discrete function $s_n(x) = \sum_{i=1}^{n-1} y_i\, 1_{(x_i, x_{i+1})}(x)$ is defined by $\operatorname{Var}(s_n) := \sum_{i=1}^{n-2} |y_{i+1} - y_i|$. The motivation of variational quantization stems from ideas of controlled damping.

In the trivial case of the external constant being zero, all feasible quantizing functions are constant and the variational quantization problem has the solution $s_n(x) = \frac{1}{b-a} \int_a^b f(u)\,du$. Variational quantization is reasonably posed even if the quantization function is admitted to have the same number of levels as the original function, i.e. even in the case $n = N$.

3.5. Exponential quantization

Pure optimal quantization functions follow their original functions in monotonicity. This means that whenever $f$ is increasing (decreasing) over its complete domain, the optimal quantization is also increasing


(decreasing) over that domain. Monotone original functions can be assumed to be increasing without loss of generality. The quantizing function may then be required to have increasing level differences:

$$\min_{s_n \in S_n} \|f - s_n\| \quad \text{such that } y_{i+1} = c \cdot y_i \text{ for } i = 1, \ldots, n-2$$

with some prescribed value $c > 1$. The level constraints apply independently of the width of segments. Exponential quantizations, in particular, are monotone quantizations. The exponential quantization problem makes sense even if the quantizing function is required to have the same number of levels as the original function. While the foregoing versions of quantization aim at smoothing, exponential quantizations typically have a larger value spread than their original functions.

Exponential quantization can be motivated from features of the human hearing and vision systems. Increasing the energy of a sequence of stimuli by a constant amount does not lead to hearing or seeing the stimuli at constant increases. In order to perceive constant increases, the energy of the stimuli must be increased by a constant multiple. Exponential quantization can help building corresponding transfer functions from given transfer functions.

3.6. Simultaneous quantization

Instead of quantizing one function, two functions can be quantized simultaneously. Let therefore two original functions $f_1, f_2$ be given. When the functions have the same domain, simultaneous quantization results in the formal approximation problem

$$s_n^0 = \operatorname{argmin}_{s_n \in S_n} \|f_1 - s_n\|^2 + \|f_2 - s_n\|^2.$$

This problem is equivalent to quantizing the average of the original functions, which is the following pure quantization problem:

$$s_n^0 = \operatorname{argmin}_{s_n \in S_n} \left\| \frac{f_1 + f_2}{2} - s_n \right\|.$$

The two problems being equivalent can be seen as follows. For any segment over which the quantizing function is constant, the error of simultaneous quantization and that of pure quantization of the average are minimized by the same level.

Simultaneous quantization becomes interesting when domains of the original functions may overlap and a constraint is added. This is motivated from subpixel approaches or high-resolution techniques in image processing, comp. [10]. There, in the simplest case, two so-called line cameras are supposed to result in overlaying images. Each level of either original function amounts to the intensity sensed by one pixel of the sensor line so that each discrete function represents a linear image, comp. Fig. 2. From these, a single function with finer than original pixel resolution is sought, suggesting to call its segments subpixels.

The aim is to find a discrete function over the union of the individual domains which simultaneously minimizes quantization errors such that averages of adjacent levels equal the enclosed level of one of the original functions, comp. Fig. 3. This is motivated from the integration feature of intensity measurements by image sensors. The functions $f_1$ and $f_2$ are assumed to have the same number of break points. Their segments are interleaved with an offset of half the individual break point distance. This is supposed to result from an external alignment process for the two images. The formal objective of subpixel quantization is the following minimization:

$$\min_{s_n \in S_n} \|f_1 - s_n\|^2 + \|f_2 - s_n\|^2 \quad \text{such that } \frac{y_i + y_{i+1}}{2} = v_i \text{ for } i = 1, \ldots, n-2.$$


Fig. 2. Two original functions (bold lines and double lines) from $S_5$ and quantization function (thin lines) from $S_{10}$.

Fig. 3. Two levels of the quantization function (thin lines) and enclosed level of one of the original functions (bold lines). The enclosed level equals the average of the two other levels.

The levels of the two original functions are denoted by $v_i$ when sorted by increasing left end points of their segments. All constraints form a system of $n-2$ linear equations in $n-1$ variables. The feasibility set of simultaneous quantization is a coset of a nullspace of dimension one. The average function $(f_1 + f_2)/2$ typically does not belong to the feasibility set.

4. Pure quantization algorithms

Several structural properties of the pure quantization problem, including algorithms for solving it to optimality, have been established earlier, see [9]. These are briefly repeated and extended.

An optimal quantization function has break points only where the original function has. Moreover, the direction of monotonicity at each common break point is the same as that of the original function. The latter means that whenever the original function increases (decreases) at some break point that it has in common with an optimal quantization, the optimal quantizing function is also increasing (decreasing) there. Considering the functions of Fig. 1 at the common break point $x_4$ shows that the original function decreases there while the quantizing function increases there. Thus, the particular function $s_5$ is not an optimal quantization of $f$.

The optimal pure quantization always exists and it is mass preserving, which means that $\int_a^b s_n^0(x)\,dx = \int_a^b f(x)\,dx$. The normalized integral of the original function is denoted by $E = \frac{1}{b-a} \int_a^b f(x)\,dx$. The constant function at level $E$ is called the center line. Quantization is not coarsable in the sense that generally

$$\min_{s_n \in S_n} \|f - s_n\| \neq \min_{s_n \in S_n} \|s_m^0 - s_n\|,$$

where $s_m^0 = \operatorname{argmin}_{s_m \in S_m} \|f - s_m\|$ for $n < m < N$.

Whenever the quantizing function is constant over an interval $(x_i, x_{i+1})$, the optimal level is uniquely given by averaging, or forming the local center line:

$$y_i = y_i(x_i, x_{i+1}) = \frac{1}{x_{i+1} - x_i} \int_{x_i}^{x_{i+1}} f(x)\,dx.$$

Thus, the best pure quantization function with given break points is unique and computing the optimal pure quantization function essentially reduces to finding its partition. The optimal levels are independent from each other. This independence, and the candidate set for break points being finite, allows transforming the pure quantization problem to a shortest path problem in a certain graph.
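As a small illustration of the averaging rule, the local center line over $(w_i, w_j)$ is the width-weighted mean of the levels of $f$ enclosed by that interval; a minimal sketch under the 0-indexed representation used above (names illustrative):

```python
def local_level(w, v, i, j):
    """Local center line y(w_i, w_j): width-weighted mean of the
    levels of f on (w_i, w_j); w are break points, v levels, i < j."""
    mass = sum(v[k] * (w[k + 1] - w[k]) for k in range(i, j))
    return mass / (w[j] - w[i])
```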


Fig. 4. Partial sketch of the quantization graph $G_f$ (thin lines) and a path representing a quantization function with the same break points as $s_5$ (bold lines) of the constellation from Fig. 1. All except one edge label are omitted.

Each function $f$ is therefore assigned a complete, directed acyclic graph $G_f = (V, E)$, called the quantization graph, with $N$ vertices such that the break point $w_i$ corresponds to vertex $i$; this vertex is also denoted $w_i$. An arc $(i,j)$ is introduced whenever $i < j$, i.e. $w_i < w_j$. The arc $(i,j)$, also denoted $(w_i, w_j)$, is endowed with cost $c_{ij} = \int_{w_i}^{w_j} (f(x) - y(w_i, w_j))^2\,dx$.

The cost coefficients satisfy $c_{i,i+1} = 0$ for all $i = 1, \ldots, N-1$ and $c_{ij} + c_{jk} \le c_{ik}$ for all $1 \le i < j < k \le N$. The latter is a reversed triangle inequality meaning that taking certain detours is cheaper than taking direct connections in $G_f$. This corresponds to the obvious fact that two levels approximate the original function better than one level over any fixed subset of the domain.

An optimal quantization $s_n^0$ corresponds to a shortest or smallest weight path in $G_f$ from node 1 to node $N$ which uses exactly $n-2$ intermediate vertices. The graph $G_f$ and the path of a quantization are sketched in Fig. 4.

A shortest path for pure quantization can be computed by the subsequent dynamic program based on value iteration. Therefore, values $C_i(k)$ denote the optimal squared quantization error over $(w_i, b)$ when exactly $k$ break points are used between $w_i$ and $b$, the bounding points not counted for $k$. The values $C_i(k)$ are decreasing in both arguments, i.e. $C_i(k) \ge C_{i+1}(k)$ and $C_i(k) \ge C_i(k+1)$ for all admissible indices.

Dyn:
1. (Initialization). $C_i(0) = c_{iN}$, $i = 2, \ldots, N-1$; $l(0) = c_{1N}$.
2. (Iteration). For $k = 1, \ldots, n-2$ do
   (a) Computation of the value of a shortest path from 1 to $N$ with exactly $k$ intermediate vertices:
   $$l(k) = \min_{i=2,\ldots,N-k} \{c_{1i} + C_i(k-1)\}, \qquad p_1(k) = \operatorname{argmin}_{i=2,\ldots,N-k} \{c_{1i} + C_i(k-1)\}.$$
   (b) Value update for insertion of one additional intermediate point:
   $$C_i(k) = \min_{j=i+1,\ldots,N-k} \{c_{ij} + C_j(k-1)\}, \qquad p(i,k) = \operatorname{argmin}_{j=i+1,\ldots,N-k} \{c_{ij} + C_j(k-1)\}$$
   for $i = 2, \ldots, N-(k+1)$.
3. (Termination). Output shortest paths $s_2^0, \ldots, s_n^0$.

The procedure terminates with optimal quantization errors $\|f - s_2^0\| = \sqrt{l(0)}, \ldots, \|f - s_n^0\| = \sqrt{l(n-2)}$.

Partitions of pure optimal quantizations can be found by tracing the pointers $p$ and $p_1$. Step 2(a) of Dyn requires $O(N)$ computations and step 2(b) requires $O(N^2)$ computations, so that the procedure requires $O(nN^2)$ overall computation steps. In case of two level quantization, i.e. in case of only $n-2 = 1$ intermediate vertex, the procedure needs only one iteration in which step 2(b) can be omitted. Hence $O(N)$ computations suffice in this case.
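The following Python sketch implements Dyn on a precomputed cost matrix (0-indexed; function and variable names are illustrative assumptions, not from the paper):

```python
def dyn(c, n):
    """Value iteration (procedure Dyn) on an N x N cost matrix c,
    where c[i][j] is the squared error of one level over (w_i, w_j).
    Returns l with l[k] = cost of a shortest path from node 0 to
    node N-1 with exactly k intermediate vertices, plus pointers."""
    N = len(c)
    C = {i: c[i][N - 1] for i in range(1, N - 1)}   # C_i(0) = c_iN
    l = [c[0][N - 1]]                               # l(0) = c_1N
    p, p1 = {}, {}                                  # back pointers
    for k in range(1, n - 1):                       # k = 1, ..., n-2
        # shortest path 0 -> N-1 with exactly k intermediate vertices
        i_best = min(range(1, N - k), key=lambda i: c[0][i] + C[i])
        l.append(c[0][i_best] + C[i_best])
        p1[k] = i_best
        # value update: insert one additional intermediate point
        C_new = {}
        for i in range(1, N - k - 1):
            j_best = min(range(i + 1, N - k), key=lambda j: c[i][j] + C[j])
            C_new[i] = c[i][j_best] + C[j_best]
            p[i, k] = j_best
        C = C_new
    return l, p, p1
```

The optimal quantization errors are then $\sqrt{l(k)}$, and the partitions follow by tracing `p1` and `p` backwards.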

Shortest paths with a fixed number of arcs can also be computed by a modification of the Dijkstra algorithm. The Dijkstra algorithm uses tentative and permanent labels, comp. [1]. The cost of a shortest path from $w_1$


to $w_i$ having exactly $k$ intermediate break points is eventually denoted by $D_i(k)$. While the labels $C_i(k)$ of procedure Dyn remain fixed once they are computed, the labels $D_i(k)$ in the subsequent algorithm ModD typically are overwritten until declared permanent. The variables $D_i(k)$ contain upper bounds on the correct values and are decreased until the correct values are attained.

ModD:
1. (Initialization). $D_i(k) = c_{1i}$, $i = 2, \ldots, N$ and $k = 0, \ldots, n-2$; $T_k = \{2, \ldots, N\}$, $k = 1, \ldots, n-2$.
2. (Iteration). For $k = 1, \ldots, n-2$ do
   As long as the candidate list $T_k \neq \emptyset$ do
   (a) Compute $i = \operatorname{argmin}_{j \in T_k} D_j(k)$.
   (b) $T_k = T_k - \{i\}$.
   (c) $\forall j \in T_k$ set $\operatorname{pred}(j) = i$ if $D_i(k-1) + c_{ij} < D_j(k)$.
   (d) $\forall j \in T_k$ set $D_j(k) = \min\{D_i(k-1) + c_{ij},\ D_j(k)\}$.
3. (Termination). Output shortest paths $s_2^0, \ldots, s_n^0$.

The value of an optimal partition is given by the final label $D_N(n-2)$ and the partition of an optimal quantization is obtained backwards by successive application of the predecessor function: $\operatorname{pred}(N), \operatorname{pred}^{(2)}(N), \ldots, \operatorname{pred}^{(n-2)}(N)$. Procedure ModD has the same $O(nN^2)$ complexity as procedure Dyn.

the squared quantization errors cij are known. These coeMcients must be computed from the original discretefunction. The coeMcient computations can be interleaved with either of the foregoing algorithms or they canbe accomplished by a stand alone procedure. EMcient computations are based on the formula

cij =∫ wj

wi

f2(x) dx − 1wj − wi

(∫ wj

wi

f(x) dx)2

:

This allows an $O(N^2)$ procedure for computing all cost coefficients for a discrete function $f$.

Coeff:
1. (Initialization). $L_{ii} = S_{ii} = 0$ for $i = 1, \ldots, N-1$.
2. (Iteration). For $i = 1, \ldots, N-1$ do
   For $j = i+1, \ldots, N$ do
   (a) $L_{ij} = L_{i,j-1} + v_{j-1} \cdot (w_j - w_{j-1})$.
   (b) $S_{ij} = S_{i,j-1} + v_{j-1}^2 \cdot (w_j - w_{j-1})$.
   (c) $c_{ij} = S_{ij} - \frac{1}{w_j - w_i}\, L_{ij}^2$.
3. (Termination). Output cost coefficients $c_{ij}$ for $1 \le i < j \le N$.

5. Constrained quantization algorithms

Constraints on quantization functions typically imply that their levels on different segments cannot be chosen independently. This will weaken but not eliminate the relation to shortest paths in the quantization graphs. The transition from unconstrained to constrained optimization seems to motivate the use of Lagrange relaxation. This, however, is not advantageous since the break points are discrete and Lagrangean methods detect critical points and thus local optima at best. The difficulties of the Lagrange approach will be sketched for monotone quantizations.

The focus for exact optimization procedures lies on dynamic programming. Dynamic programs will consist of decisions of two types. Decisions of the first type reflect the particular constraints while decisions of the second type are "free" in the sense of pure quantization. When the resulting quantization is overall feasible,


it is a candidate for constrained optimization. If the resulting quantization is infeasible, it will simply be ignored. Approximation schemes are tailored to the particular problem variations.

5.1. Clamped quantization

Clamped quantizations can be computed with the help of a straightforward modification of the quantization graph. All arcs that allow paths which bypass the vertices $w_i, w_j$ with $(i,j) \in J$ are eliminated, as well as all arcs, if there are any, which have at least one vertex strictly between $w_i$ and $w_j$. Equivalent to eliminating these arcs is to set their costs to infinity. The arcs $(w_i, w_j)$ with $(i,j) \in J$ receive the modified cost values $c^{\text{fix}}_{ij} = \int_{w_i}^{w_j} (f(u) - v^{\text{fix}}_{ij})^2\,du$.

Clamping has local effects only, which means that the variable parts of the quantization function are not affected by the clamped segments. Optimal clamped quantizations can then be computed as shortest paths in the modified quantization graph by procedures Dyn and ModD.
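A possible realization of this graph modification, reusing the `cost_coefficients` sketch from Section 4 (the compatibility rule below is one way to formalize the arc elimination; 0-indexed, names illustrative):

```python
import numpy as np

def clamped_costs(w, v, clamps):
    """Cost matrix for clamped quantization. clamps: dict {(i, j): v_fix}
    with i < j. Every feasible path must traverse each clamped arc (i, j):
    all arcs crossing into or over the clamped interval are priced at
    infinity; the clamped arc itself is priced against its fixed level."""
    N = len(w)
    c = cost_coefficients(w, v)                 # pure costs, sketch above
    for (i, j), v_fix in clamps.items():
        d = np.diff(w[i:j + 1])                 # segment widths inside (w_i, w_j)
        c[i, j] = float(np.sum((np.asarray(v[i:j]) - v_fix) ** 2 * d))
        for p in range(N):
            for q in range(p + 1, N):
                # an arc is compatible iff it stays outside (w_i, w_j)
                # or is the clamped arc itself
                if (p, q) != (i, j) and not (q <= i or p >= j):
                    c[p, q] = np.inf
    return c
```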

5.2. Bounded quantization

The upper and lower bounds, which have the same break points as the original function, obviously restrict the admissible levels of the quantization function. Moreover, the bounds may entail the placement of a break point. The quantizing function is required to have a break point at an original break point $w_i$ if and only if

$$\sup_{x \in (w_{i-1}, w_{i+1})} f_l(x) > \inf_{x \in (w_{i-1}, w_{i+1})} f_u(x).$$

This situation is depicted in Fig. 5. To avoid technical difficulties, function values at the break points are ignored. An interval $(w_i, w_j)$ may be quantized by a single level if and only if

$$F_l(i,j) := \sup_{x \in (w_i, w_j)} f_l(x) \le \inf_{x \in (w_i, w_j)} f_u(x) =: F_u(i,j).$$

In case of the last inequality being valid, any constant between $F_l(i,j)$ and $F_u(i,j)$ is a feasible single level over the interval $(w_i, w_j)$. The best constant is the optimal level for pure quantization if this value is feasible. Otherwise, that bound is chosen which is closer to the optimal pure quantization level. This choice is implied by the quantization objective being symmetric in over- and underestimating the original function. Formally, the best level of bounded quantization is as follows for $F_l(i,j) \le F_u(i,j)$:

$$y_B(w_i, w_j) = \begin{cases} y(w_i, w_j), & \text{if } F_l(i,j) \le y(w_i, w_j) \le F_u(i,j) \\ F_l(i,j), & \text{if } y(w_i, w_j) < F_l(i,j) \\ F_u(i,j), & \text{if } F_u(i,j) < y(w_i, w_j). \end{cases}$$

Whenever a particular interval may not be quantized by a single level, computations of best quantization functions may not use the corresponding arc in the quantization graph. This is expressed by assigning infinite cost to that arc. Thus, in principle, the cost coefficients can be computed by the subsequent procedure.

Fig. 5. Any feasible bounded quantization function (not shown) must have a break point at $w_i$ under these bounds.

Fig. 6. Removal of a monotonicity violation by leveling. Only part of a pure quantization (thin lines) and a single segment of the resulting monotone quantization (dotted line) are sketched.

Coeffbound:
1. (Initialization). $L_{ii} = S_{ii} = 0$ for $i = 1, \ldots, N-1$.
2. (Iteration). For $i = 1, \ldots, N-1$ do
   For $j = i+1, \ldots, N$ do
   (a) $L_{ij} = L_{i,j-1} + v_{j-1} \cdot (w_j - w_{j-1})$.
   (b) $S_{ij} = S_{i,j-1} + v_{j-1}^2 \cdot (w_j - w_{j-1})$.
   (c) Computation of the cost coefficient

$$c_{ij} = \begin{cases} S_{ij} - \frac{1}{w_j - w_i}\, L_{ij}^2, & \text{if } F_l(i,j) \le y(w_i, w_j) \le F_u(i,j) \\ \int_{w_i}^{w_j} (f(x) - y_B(w_i, w_j))^2\,dx, & \text{if } y(w_i, w_j) \notin [F_l(i,j), F_u(i,j)] \neq \emptyset \\ \infty, & \text{if } F_l(i,j) > F_u(i,j). \end{cases}$$

3. (Termination). Output cost coefficients $c_{ij}$ for $1 \le i < j \le N$.
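The case distinction for the best bounded level amounts to clamping the pure level into the feasible band; a minimal sketch (names illustrative):

```python
def bounded_level(y_pure, Fl, Fu):
    """Best single level over an interval under bounds (y_B in the
    text): the pure level if feasible, otherwise the nearer bound."""
    if Fl > Fu:
        return None                  # interval cannot carry one level
    return min(max(y_pure, Fl), Fu)  # clamp y(w_i, w_j) into [Fl, Fu]
```

The arc cost then follows by integrating $(f - y_B)^2$ over the interval, or is set to infinity when no feasible level exists.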

Optimal bounded quantizations can be determined by the procedures Dyn and ModD run with the previously computed cost coefficients. A solution is feasible in the sense of bounded quantization if either of the algorithms terminates with a finite optimal value. Otherwise, the bounds are so restrictive that no quantization function with the specified number of levels is feasible.

In order to determine the minimum number of levels necessary for a bounded quantization function to become feasible, algorithm Dyn or algorithm ModD may be run for increasing values of $n$ until a finite objective value is reached. In this mode, the worst case complexity of both algorithms is $O(N^3)$.

5.3. Monotone quantization

Monotone quantizations are required to be increasing without loss of generality. Whenever an optimal pure quantization violates a monotonicity constraint, this violation can be removed by local deformations, the break points remaining fixed. This sets free one level of the quantization function as sketched in Fig. 6. The higher segment of the quantization is decreased while the lower segment is increased until they reach equality at level $y(x_i, x_{i+2})$. This leads to feasibility over the given part of the domain. The quantization objective implies that further movement of the segments should be avoided as long as the break points are fixed. Since the deformed quantization has only one effective level where the pure quantization had two levels, one additional level may be spent elsewhere. However, this option need not be chosen, since the sketched constellation may be encountered everywhere on the domain. For example, the optimal increasing quantization of a decreasing function is a single level function.


The foregoing local removal of monotonicity violations may cause other monotonicity conditions to become violated. This is similar to active set algorithms, comp. for example [6], and amounts to a major difficulty of any algorithm for monotone quantization. Another, more technical difficulty is that the number of admitted levels need not be fully exhausted.

5.3.1. Approximation

As a first step, a monotone quantization for a pure optimal quantization with its given partition is computed. Therefore, local removals of monotonicity violations are repeated over possibly growing regions, potentially starting from several violation intervals. Informally, an interval $(x_i, x_j)$ is selected with only two effective levels that are decreasing rather than increasing. Such an interval may cover more than two segments of the initial partition, i.e. $j > i+2$ is admitted. The two levels are replaced by $y(x_i, x_j)$ and the procedure is repeated as long as two adjacent levels of the quantizing function violate the monotonicity condition. Formally, this procedure is as follows.

Feasmon:
1. (Initialization). Given pure optimal quantization $s_n^0$ with partition $x_1, \ldots, x_n$ and levels $y(x_1, x_2), \ldots, y(x_{n-1}, x_n)$.
2. (Iteration). As long as there is an interval $(x_i, x_j)$ with two levels $y(x_i, x_k) > y(x_k, x_j)$ for some $x_k \in (x_i, x_j)$ do
   • Replace $y(x_i, x_k)$ and $y(x_k, x_j)$ by $y(x_i, x_j)$.
3. (Termination). Output monotone quantization.

All quantizations including the terminal quantization of the foregoing procedure are best pure quantizations for coarsenings of the initial partition. The sequence in which intervals are selected therefore affects the terminal quantization. The resulting quantizations may even have different numbers of levels.
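One possible selection order processes segments from left to right and pools backwards, as in the pool-adjacent-violators algorithm; the width-weighted mean of two pooled pieces reproduces $y(x_i, x_j)$. A sketch (illustrative names; other selection orders may yield different results, as noted above):

```python
def feasmon(x, y):
    """Pool adjacent decreasing levels of a step function until it is
    increasing; pooled levels are replaced by the width-weighted mean
    y(x_i, x_j). x: break points, y: levels (len(y) == len(x) - 1)."""
    runs = []  # stack of (left break point, right break point, level)
    for xl, xr, level in zip(x, x[1:], y):
        runs.append((xl, xr, level))
        # merge while the last two runs violate monotonicity
        while len(runs) > 1 and runs[-2][2] > runs[-1][2]:
            (al, ar, ay), (bl, br, by) = runs[-2], runs[-1]
            merged = (ay * (ar - al) + by * (br - bl)) / (br - al)
            runs[-2:] = [(al, br, merged)]
    return [r[0] for r in runs] + [x[-1]], [r[2] for r in runs]
```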

5.3.2. Exact solution

An optimal monotone quantization can be computed by a dynamic program which is a modification of the dynamic program for optimal pure quantizations. Therefore, $C_i^{\text{mon}}(k)$ denotes the optimal value of a monotone quantization over the interval $(w_i, b)$ with at most (not necessarily exactly!) $k$ intermediate break points. The corresponding quantization is denoted by $s_i^{\text{mon}}(k)$ and $\mathit{fl}(s_i^{\text{mon}}(k))$ denotes its first level, which is the level of the leftmost quantization segment; this segment begins at $w_i$.

Dynmon:
1. (Initialization). $C_i^{\text{mon}}(0) = c_{iN}$, $i = 2, \ldots, N-1$.
2. (Iteration). For $k = 1, \ldots, n-2$ do
   • Computation of the value update of a shortest path for monotone quantizations from $i$ to $N$ with at most $k$ intermediate vertices:

$$C_i^{\text{mon}}(k) = \begin{cases} \min\left\{ \min_{j \in J_<} \{c_{ij} + C_j^{\text{mon}}(k-1)\},\ \min_{j \in J_=} \{c_{ij} + C_j^{\text{mon}}(k)\} \right\}, & \text{if } J_< \cup J_= \neq \emptyset \\ c_{iN}, & \text{if } J_< \cup J_= = \emptyset, \end{cases}$$

where

$$J_< := \{ j \in \{i+1, \ldots, N-k\} \mid y(w_i, w_j) < \mathit{fl}(s_j^{\text{mon}}(k-1)) \},$$
$$J_= := \{ j \in \{i+1, \ldots, N-k\} \mid y(w_i, w_j) = \mathit{fl}(s_j^{\text{mon}}(k)) \}$$

for $i = 2, \ldots, N-(k+1)$.
3. (Termination). Output shortest paths $s_2^{\text{mon}}, \ldots, s_n^{\text{mon}}$.


Fig. 7. Two level function (bold lines) and two level variational quantization (thin lines) with variational bound $c > 0$. This two level quantization does not coincide with an optimal pure quantization that has any prescribed partition.

Pointers to optimal quantizations are omitted for notational ease. The idea of the algorithm is to extend the quantization where the monotonicity constraint is satisfied automatically by choosing best levels in the sense of pure quantization. Whenever this is not possible, i.e. whenever some value $j$ is missing from $J_< \cup J_=$, the deformation argument from Fig. 6 leads to a pure quantization level that extends further to the right by at least one break point. This is tested for the next value of $j$ by the dynamic computation of $C_i^{\text{mon}}(k)$. When the monotonicity constraint is satisfied with slack, i.e. when the new level lies strictly below the first level of the best monotone quantization adjacent to the right, one additional break point is spent. This is formally accounted for by $j \in J_<$. When the monotonicity constraint is satisfied with equality, no additional break point is spent and hence the value $k$ in step 2 is not decreased. This is formally accounted for by $j \in J_=$. The sets $J_<$ and $J_=$ are computed for each pair of indices $(i,k)$.

5.4. Variational quantization

Variational quantization differs from all foregoing constraint versions in that single level decisions typically have global effects. This means that changing one level may force other levels to change in order to maintain validity of the variational constraint. Even more, optimal variational quantizations cannot be computed by choosing some conditions under which an otherwise unconditional quantization then freely adjusts to the optimum; this was feasible in all foregoing cases. For variational quantization such conditions do not exist, as illustrated in Fig. 7. The difficulty of variational quantization becomes obvious when restricting the problem to a fixed partition, see below. However, it can be shown that the optimal variational quantization has the same monotonicity pattern as the optimal pure quantization with respect to the same partition. This means that $y^{\text{var}}_{ij} < y^{\text{var}}_{jk}$ if and only if $y(w_i, w_j) < y(w_j, w_k)$ when $\{\cdots < w_i < w_j < w_k < \cdots\}$ is the underlying partition and $y^{\text{var}}_{ij}$ and $y^{\text{var}}_{jk}$ denote the levels of the variational quantization over the intervals $(w_i, w_j)$ and $(w_j, w_k)$, respectively. The latter can be seen from Fig. 8.

5.4.1. General approximation

Pure quantizations can serve as approximations of variational quantizations in several ways. For example, a crude approximation may compute the maximum value of a number $k$ such that

1. $k \le n$ and
2. $\operatorname{Var}(s_2^0), \ldots, \operatorname{Var}(s_k^0) \le c < \operatorname{Var}(s_{k+1}^0)$.

The optimal pure quantization $s_k^0$ then serves as an approximation of the optimal variational quantization from $S_n$.


Fig. 8. Some cases of different monotonicity patterns of the pure quantization $s_n^0$ (thin lines), which is increasing over the indicated region, and the optimal variational quantization $s_n^{\text{var}}$ (bold lines), which is decreasing there. Both quantizations have the same partition. Moving one level of the variational quantization in the indicated direction until both levels become equal decreases the quantization error. Also, it sets free some amount of variation which can be used to lower the quantization error outside the indicated region, contradicting optimality of $s_n^{\text{var}}$ for the given partition.

Fig. 9. The quantization function (thin lines) is assumed to exceed the variational bound. Scaling is performed relative to the center line (dotted line), resulting in the scaled quantization function (bold lines), where the scaling factor is assumed here to be $c/\operatorname{Var}(s_n^0) = \tfrac{1}{2}$. The original function $f$ is not shown.

If the optimal pure quantization happens to satisfy the bound on the total variation, it obviously is an optimal variational quantization. Otherwise, an approximation of the optimal variational quantization is given by scaling all excesses from the center line, see Fig. 9. The scaling factor is the same for all excesses. Formally, the approximation is given by

$$s_n(x) = \begin{cases} s_n^0, & \text{if } \operatorname{Var}(s_n^0) \le c, \\ s_n'(x), & \text{if } \operatorname{Var}(s_n^0) > c, \end{cases}$$

where

$$s_n'(x) = \sum_{i=1}^{n-1} y_i'\, 1_{(x_i, x_{i+1})}(x) \quad \text{with } y_i' = E - (E - y_i)\,\frac{c}{\operatorname{Var}(s_n^0)}.$$

The scaled function $s_n'(x)$ satisfies the variational bound exactly, which means that $\operatorname{Var}(s_n') = c$. Whenever adjacent levels of the optimal pure quantization are different, which they typically are, they are different for the scaled quantization.

5.4.2. Approximate solutions for monotone functions

Exact computations of optimal variational quantizations can be given for monotone functions. Whenever the optimal pure quantization exceeds the given limit for the total variation, this limit becomes sharp for variational quantization, which means that the optimal variational quantization attains the variation bound. A two level quantization with fixed break point $x$ and total variation $c$ is of the form $y\,1_{(a,x)} + (y+c)\,1_{(x,b)}$.


Fig. 10. Partial quantization graph with cost assignment for variational quantization with two levels attaining the variation bound.

The optimal value for level $y$ is given by

$$y(x) = \operatorname{argmin}_y \int_a^x (f(u) - y)^2\,du + \int_x^b (f(u) - (y+c))^2\,du = \frac{\int_a^b f(u)\,du}{b-a} - c\,\frac{b-x}{b-a}.$$

The optimal variational quantization with one break point corresponds to a shortest path in the quantization graph with costs chosen as indicated in Fig. 10. The optimal variational quantization with two levels can be found by the subsequent finite enumeration scheme for the intermediate break point, which amounts to enumerating all paths from $a$ to $b$ with two arcs:

$$x_2^0 = \operatorname{argmin}_{w_i \in \{w_2, \ldots, w_{N-1}\}} \int_a^{w_i} (f(u) - y(w_i))^2\,du + \int_{w_i}^b (f(u) - (y(w_i) + c))^2\,du.$$

The case of more than two levels requires, among other things, computing the joint placement of the first and last levels. Let the partition of a quantization function be fixed with $x_2 = w_i$ and $x_{n-1} = w_j$. The optimal levels attaining the variational bound are then given by

$$y^{\text{var}}(w_i, w_j) := \operatorname{argmin}_y \int_a^{w_i} (f(u) - y)^2\,du + \int_{w_j}^b (f(u) - (y+c))^2\,du$$

$$= \frac{\int_a^{w_i} f(u)\,du + \int_{w_j}^b f(u)\,du}{w_i - a + b - w_j} - c\,\frac{b - w_j}{w_i - a + b - w_j}.$$

Noteworthy, the value $y^{\text{var}}(w_i, w_j)$ indicates the quantization level over the interval $(a, w_i)$ rather than over the interval $(w_i, w_j)$. The special case $w_i = w_j$ leads to the optimal level formula for the two level situation.

The previous construction of the leftmost and rightmost levels for fixed partitions allows modifying an optimal pure quantization whenever it exceeds the variational bound. The leftmost and rightmost levels are replaced by the optimal levels, all other levels below the lower bound are increased to that bound, and all levels above the upper bound are decreased to the upper bound. More precisely, when the quantization $s_n^0 = \sum_{i=1}^{n-1} y_i\, 1_{(x_i, x_{i+1})}$ satisfies $\operatorname{Var}(s_n^0) > c$, it is replaced by $s_n' = \sum_{i=1}^{n-1} y_i'\, 1_{(x_i, x_{i+1})}$ so that

$$y_1' := y^{\text{var}}(x_2, x_{n-1}),$$

$$y_i' := \begin{cases} y^{\text{var}}(x_2, x_{n-1}), & \text{if } y_i < y^{\text{var}}(x_2, x_{n-1}) \\ y_i, & \text{if } y^{\text{var}}(x_2, x_{n-1}) \le y_i \le y^{\text{var}}(x_2, x_{n-1}) + c \\ y^{\text{var}}(x_2, x_{n-1}) + c, & \text{if } y_i > y^{\text{var}}(x_2, x_{n-1}) + c \end{cases} \quad i = 2, \ldots, n-2,$$

$$y_{n-1}' := y^{\text{var}}(x_2, x_{n-1}) + c.$$

Monotonicity of the original function ensures monotonicity of the optimal pure quantization, which ensures monotonicity of the replacement quantization. Computing the replacement quantization is of time complexity $O(N)$, since it is essentially only the one level $y^{\text{var}}(x_2, x_{n-1})$ which must be computed for $f \in S_N$. All other $O(n)$ operations are comparisons possibly followed by substitutions.
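A sketch of the replacement step, assuming $y^{\text{var}}(x_2, x_{n-1})$ has already been evaluated from the closed form above (names illustrative):

```python
def replacement_quantization(y, y_var, c):
    """Replacement quantization for monotone f: clamp the interior
    levels into the band [y_var, y_var + c] and pin the outer levels
    to the band's end points; y_var = y^var(x_2, x_{n-1})."""
    new_y = [min(max(yi, y_var), y_var + c) for yi in y]
    new_y[0] = y_var          # leftmost level y'_1
    new_y[-1] = y_var + c     # rightmost level y'_{n-1}
    return new_y
```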


In contrast to scaled quantization, the replacement quantization may have fewer levels than the optimal pure quantization since adjacent segments may be replaced by identical values. The number of different adjacent levels may be preserved but it never increases.

The replacement quantization need not be optimal. However, the replacement operations can be executed successively until no further improvement occurs. This is analogous to the method of successive approximations for computing fixed points. The replacement quantization of any quantization function $s_n$ is formally denoted by $\operatorname{rep}_c(s_n)$. The initial quantization of the following repeated replacement procedure may or may not be an optimal pure quantization but is required to be monotone.

Reprep:
1. (Initialization). Given monotone quantization function $s_n$ for monotone $f$ and variational bound $c > 0$.
2. (Iteration). While $s_n \neq \operatorname{rep}_c(s_n)$ do
   • $s_n = \operatorname{rep}_c(s_n)$.
3. (Termination). Output monotone quantization $s_n$ satisfying the variational bound.

The complexity of the procedure is $O(nN)$, since each iteration requires at most $O(N)$ steps and the number of iterations is bounded by $n$. The reason for the last bound is that the number of different adjacent levels decreases by at least one in each replacement iteration. Repeated replacement is not guaranteed to lead to an optimal variational quantization even if initialized with an optimal pure quantization.

5.4.3. Exact solution for monotone functions

An exact algorithm for variational quantization of a monotone function assumes the leftmost and rightmost levels to be computed as above and then fills in the enclosure by pure quantization. If pure quantization leads to an overall monotone quantization satisfying the variation condition, then the overall quantization is a candidate for the optimal solution. Otherwise it is rejected. The procedure will be run over a sufficiently large set of candidate partitions for the two extreme segments. The latter is in loose analogy to the dynamic program for monotone quantization of not necessarily monotone functions $f$. The cost incurred over the two outer segments is denoted by

$$C^{\text{var}}_{\text{outer}}(w_i, w_j) := \int_a^{w_i} (f(u) - y^{\text{var}}(w_i, w_j))^2\,du + \int_{w_j}^b (f(u) - (y^{\text{var}}(w_i, w_j) + c))^2\,du,$$

where $w_i, w_j \in \{w_2, \ldots, w_{N-1}\}$ and $w_i \le w_j$. The complexity of computing an outer cost value is $O(i + (N-j))$. The cost of the enclosure will be computed according to the following term.

$$C(w_i, w_j, k, y, y+c) := \begin{cases} 0, & \text{if } w_i = w_j \\ \min_{s_k^0 :\, \operatorname{ran}(s_k^0) \subseteq [y,\, y+c]} \int_{w_i}^{w_j} (f(u) - s_k^0(u))^2\,du, & \text{if } w_i < w_j. \end{cases}$$

The optimal pure quantization of the enclosure cost is considered only between the break points $w_i$ and $w_j$ so that there are $k-2$ break points in the interior $(w_i, w_j)$. When no optimal pure quantization with $\operatorname{ran}(s_k^0) \subseteq [y, y+c]$ exists, the value of the cost term is infinite.

Optimal variational quantization with at most $k$ intermediate break points is then given as a solution of the equation

$$C^{\text{var}}(k) = \min_{w_i, w_j \in \{w_2, \ldots, w_{N-1}\} :\, w_i \le w_j} \left\{ C^{\text{var}}_{\text{outer}}(w_i, w_j) + C(w_i, w_j, k-2, y^{\text{var}}(w_i, w_j), y^{\text{var}}(w_i, w_j) + c) \right\}.$$

The intricate proof of the optimal variational quantization being a solution of the foregoing equation is based on the Kuhn-Tucker conditions for inequality-constrained optimization, see for example [4]. The formal proof is omitted here. The equation gives rise to a dynamic computation scheme. Its striking difficulty is the evaluation of the enclosure costs, which requires the computation of $O(N^2)$ optimal pure quantizations.


Fig. 11. Partial sketch of the quantization graph from Fig. 4 with extended labels as indicated for the arc $(w_1, w_5)$.

A straightforward procedure which computes the optimal pure quantizations independently from each other has time complexity bound $O(N^2) \cdot O(nN^2) = O(nN^4)$. The computation of the enclosure cost is difficult only when the enclosure contains more break points than the optimal quantization is allowed to have. Otherwise, the optimal pure quantizations coincide with the original function over the enclosure. Though the time complexity of the straightforward approach is polynomial, it is practically intractable and a procedure with interleaved path computations is sought.

Since the computations for variational quantization require testing whether the range of optimal pure quantizations lies in some given interval, the arcs of the quantization graph receive a second label. This label denotes the level of any segment that is constant over some interval, as indicated in Fig. 11.

The computation of all pure quantizations corresponds to an all pairs shortest path problem with bounded number of intermediate vertices. This can be solved by a variation of the dynamic program for pure quantization. Let therefore $C_{ij}(k)$ denote the cost of a shortest path from $w_i$ to $w_j$ using at most $k$ intermediate vertices. The all pairs shortest path problem with at most $n-2$ intermediate vertices can be solved as follows.

Dynall:
1. (Initialization). $C_{ij}(0) = c_{ij}$ for $1 \le i \le j \le N$, where $c_{ii} = 0$.
2. (Iteration). For $k = 1, \ldots, n-2$ do
   • $C_{ij}(k) = \min_{i \le s \le j} \{c_{is} + C_{sj}(k-1)\}$ for all $1 \le i \le j \le N$.
3. (Termination). Output shortest path matrices $(C_{ij}(0))_{1 \le i \le j \le N}, \ldots, (C_{ij}(n-2))_{1 \le i \le j \le N}$.

The output matrices store the squared quantization errors for all optimal pure quantizations $s_2^0(w_i, w_j), \ldots, s_n^0(w_i, w_j)$. Keeping track of the minimum and maximum level of each shortest path is omitted for simplicity but can be achieved along the minimization in step 2 using the level information from the label extension in the quantization graph. The algorithm works for all given functions including non-monotone functions. Since the variational bound $c$ does not enter the all pairs computations, these computations can be used in the sense of preprocessing for variational quantization.

The run time of algorithm Dynall is $O(nN^3)$ since each of the $O(n)$ iterations of step 2 requires $O(N^2)$ minimum computations that each can be performed in $O(N)$ time.
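A minimal sketch of Dynall on a precomputed cost matrix (0-indexed, upper triangle only; names illustrative):

```python
import numpy as np

def dynall(c, n):
    """All pairs shortest paths with bounded intermediate vertices:
    C[k][i, j] = best cost from node i to node j using at most k
    intermediate vertices, for k = 0, ..., n-2; assumes c[i, i] = 0."""
    N = c.shape[0]
    C = [c.copy()]                          # C_ij(0) = c_ij
    for k in range(1, n - 1):
        prev, cur = C[-1], np.full_like(c, np.inf)
        for i in range(N):
            for j in range(i, N):
                # split at an intermediate (or trivial) vertex s
                cur[i, j] = min(c[i, s] + prev[s, j] for s in range(i, j + 1))
        C.append(cur)
    return C
```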

following procedure for n¿ 4; for n = 2 the quantization function is a constant and for n = 3 the problembecomes two level quantization which was solved above.

Dynvar:
1. (Initialization). Input variational bound $c > 0$ and shortest path matrix $(C_{ij}(n-4))_{1 \le i \le j \le N}$.
2. (Iteration). Computation of

$$C^{\text{var}}(n-2) = \min_{w_i, w_j \in \{w_2, \ldots, w_{N-1}\} :\, w_i \le w_j} \left\{ C^{\text{var}}_{\text{outer}}(w_i, w_j) + C(w_i, w_j, n-4, y^{\text{var}}(w_i, w_j), y^{\text{var}}(w_i, w_j) + c) \right\}.$$

3. (Termination). Output optimal variational quantization.

3. (Termination). Output optimal variational quantization.


Fig. 12. Optimal pure quantization (thin lines) with two monotonicity runs and variational quantization (bold lines) which is chosen to have the same partition and the same monotonicity runs, of which only the three extreme segments are shown. The total variation is decomposed into $c = c_1 + c_2$. The original function $f$ is not shown.

The complexity of Dynvar is $O(N^3)$ since $O(N^2)$ pairs of break points are considered in step 2 and each computation of extreme levels and their cost $C^{\text{var}}_{\text{outer}}(w_i, w_j)$ requires time $O(N)$. The complexity of computing any shortest path matrix is neglected here since such matrices are used as input.

5.4.4. Dedicated approximations for non-monotone functions

The difficulty of quantizing non-monotone functions under the constraint of bounded variation is the construction of those segments which constitute the total variation. Between these, the quantization function must receive monotone fill-in. In case the original function $f$ is monotone, only two segments constitute the total variation of the quantizing function and these are the leftmost and the rightmost segments; their levels were computed in the foregoing section as $y^{\text{var}}(w_i, w_j)$ and $y^{\text{var}}(w_i, w_j) + c$. In the non-monotone case these segments are the extreme segments of monotonicity runs of the quantizing function.

The segments that bound the monotonicity runs of the quantizing function cannot be taken from the original function for several reasons. First, the number of segments of the quantization function may be required to be even less than the number of monotonicity segments of the original function. Second, a monotonicity run of the quantizing function is not guaranteed to coincide or be otherwise related to a monotonicity run of the original function.

The desired bounding segments of monotonicity runs will be constructed from the monotonicity runs of optimal pure quantizations, comp. Fig. 12. This allows extending the replacement approach of the foregoing section to non-monotone functions.

The optimal pure quantization $s_n^0$ is assumed to exceed the variation bound, i.e. $\operatorname{Var}(s_n^0) > c$, and it is assumed to have $k \ge 2$ monotonicity runs. These extend between the intervals $I_0, \ldots, I_k$ where the first run can be assumed to be monotone increasing. If it were decreasing, the original function $f$ and its optimal pure quantization $s_n^0$ can be inverted to $-f$ and $-s_n^0$, respectively. The corresponding extreme levels are then denoted by $y,\ y+c_1,\ y+c_1-c_2,\ \ldots,\ y+c_1-c_2+\cdots+(-1)^{k+1}c_k$, where $c_1 + \cdots + c_k = c$ with $c_1, \ldots, c_k \ge 0$. The extreme levels are optimized by the objective function

$$\sum_{i=0}^{k} \int_{I_i} \left( f(x) - (y + c_1 - c_2 + \cdots + (-1)^{i+1} c_i) \right)^2 dx.$$

Replacing $c_k = c - c_1 - \cdots - c_{k-1}$ in the last term leads to

$$\sum_{i=0}^{k-1} \int_{I_i} \left( f(x) - (y + c_1 - c_2 + \cdots + (-1)^{i+1} c_i) \right)^2 dx + \int_{I_k} \left( f(x) - (y + c_1 - c_2 + \cdots + (-1)^{k+1} c_k) \right)^2 dx$$

$$= \sum_{i=0}^{k-1} \int_{I_i} \left( f(x) - (y + c_1 - c_2 + \cdots + (-1)^{i+1} c_i) \right)^2 dx + \int_{I_k} \left( f(x) - (y + c_1 - c_2 + \cdots + (-1)^{k+1} (c - c_1 - \cdots - c_{k-1})) \right)^2 dx$$

$$=: F(y, c_1, \ldots, c_{k-1}).$$

The optimization problem

$$\min_{y, c_1, \ldots, c_{k-1}} F(y, c_1, \ldots, c_{k-1})$$

can be solved from a linear system, since the critical point condition $\partial F/\partial y = 0,\ \partial F/\partial c_1 = 0,\ \ldots,\ \partial F/\partial c_{k-1} = 0$ consists of $k$ linear equations. To avoid notational overload, the system is not given explicitly. Also, it is stated without proof that the sign-unrestricted offsets attain positive values, i.e. $c_1, \ldots, c_{k-1} > 0$.

In order to obtain a complete quantization function, the missing levels between the extreme levels must

be filled in. This can be done by using the pure optimal quantization or by locally replacing it so that the intended monotonicity conditions are satisfied. Let therefore $I = (x_i, x_{i+1})$ be an interval where the optimal pure quantization attains value $y_i^0$. The interval $I$ is assumed to lie between the intervals $I_j$ and $I_{j+1}$, which are assumed to form an increasing monotonicity run. The levels over these intervals are given by $y + c_1 - c_2 \pm \cdots - c_j =: y + C_j$ and $y + c_1 - c_2 \pm \cdots + c_{j+1} =: y + C_{j+1}$, respectively. Over interval $I$ the replacement function is then defined by the value

$$y_i := \begin{cases} y + C_j, & \text{if } y_i^0 < y + C_j \\ y_i^0, & \text{if } y + C_j \le y_i^0 \le y + C_{j+1} \\ y + C_{j+1}, & \text{if } y_i^0 > y + C_{j+1}. \end{cases}$$

In case the monotonicity run between $I_j$ and $I_{j+1}$ were decreasing, the levels over these intervals are given by $y + c_1 - c_2 \pm \cdots + c_j =: y + C_j$ and $y + c_1 - c_2 \pm \cdots - c_{j+1} =: y + C_{j+1}$, respectively. The replacement value is defined by

$$y_i := \begin{cases} y + C_{j+1}, & \text{if } y_i^0 < y + C_{j+1} \\ y_i^0, & \text{if } y + C_{j+1} \le y_i^0 \le y + C_j \\ y + C_j, & \text{if } y_i^0 > y + C_j. \end{cases}$$

These fill-ins lead to accepting all pure optimal quantization levels in the example of Fig. 12, i.e. the second case of the foregoing level definitions applies to all five fill-ins.

Whenever during fill-in a level of the optimal pure quantization is replaced by an upper or lower bound, one of the extreme levels of the enclosing monotonicity run is extended. This allows improving the extreme levels by minimizing the function $F(y, c_1, \ldots, c_{k-1})$ for a new partition. Fill-in is repeated for the new partition. This process terminates when fill-in does not further change the partition. The latter means that all fill-in levels strictly lie between their enclosing extreme levels or no segments requiring fill-in are left.

5.5. Exponential quantization

Exponential quantization has a straightforward solution when restricted to a fixed partition. An exponential quantization function is therefore given by $s_n(x) = \sum_{i=1}^{n-1} y_i\, 1_{(x_i, x_{i+1})}(x) = \sum_{i=1}^{n-1} c^{i-1} y\, 1_{(x_i, x_{i+1})}(x)$. The squared


quantization error as a function of the first level $y$ has a unique critical point at

$$y = \frac{\sum_{i=1}^{n-1} c^{i-1} \int_{x_i}^{x_{i+1}} f(x)\,dx}{\sum_{i=1}^{n-1} c^{2(i-1)} (x_{i+1} - x_i)}.$$

In the special case $c = 1$ this value becomes the mean value of the original function over its whole domain. A reasonable partition for exponential quantization is provided by the partition of a pure optimal quantization.
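A sketch of the closed form, assuming the segment integrals of $f$ are given (names illustrative):

```python
def exponential_first_level(x, F_seg, c):
    """First level y of the best exponential quantization on a fixed
    partition; the levels are then y, c*y, c^2*y, ...
    x: partition x_1..x_n, F_seg[i]: integral of f over (x_i, x_{i+1})."""
    num = sum(c ** i * F_seg[i] for i in range(len(F_seg)))
    den = sum(c ** (2 * i) * (x[i + 1] - x[i]) for i in range(len(F_seg)))
    return num / den
```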

5.6. Simultaneous quantization

The constraints of simultaneous quantization can be written as

$$y_1 = V_1 + y, \quad y_2 = 2v_1 - y_1 = V_2 - y, \quad y_3 = 2v_2 - y_2 = V_3 + y, \quad \ldots, \quad y_{n-1} = \cdots = V_{n-1} + (-1)^n y,$$

where $V_1 := 0$ and $V_i := 2v_{i-1} - V_{i-1}$ for $i \ge 2$. The simultaneous quantization objective can be expressed by a univariate function:

$$\int_{(x_1,x_2)} (v_1 - y_1)^2\,dx + \int_{(x_2,x_3)} (v_1 - y_2)^2 + (v_2 - y_2)^2\,dx + \int_{(x_3,x_4)} (v_2 - y_3)^2 + (v_3 - y_3)^2\,dx + \cdots + \int_{(x_{n-1},x_n)} (v_{n-2} - y_{n-1})^2\,dx$$

$$= \int_{(x_1,x_2)} (v_1 - y)^2\,dx + \int_{(x_2,x_3)} (v_1 - V_2 + y)^2 + (v_2 - V_2 + y)^2\,dx + \cdots + \int_{(x_{n-1},x_n)} (v_{n-2} - V_{n-1} + (-1)^{n+1} y)^2\,dx$$

$$=: F(y).$$

The critical point condition $F'(y) = 0$ leads to a linear expression with solution

$$y = \frac{2 \sum_{i=1}^{n-2} (-1)^i V_i + (-1)^{n-1} V_{n-1}}{2(n-2)}.$$
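A sketch of the recursion for the $V_i$ and the closed form for $y$, assuming equal segment widths (names illustrative; the remaining levels follow from $y_i = V_i + (-1)^{i+1} y$):

```python
def simultaneous_level(v, n):
    """First offset y for subpixel quantization on a fixed partition.
    v: enclosed levels v_1..v_{n-2}; builds V_1..V_{n-1} and applies
    the closed form above."""
    V = [0.0]                      # V_1 = 0
    for vi in v:                   # V_i = 2 v_{i-1} - V_{i-1}
        V.append(2 * vi - V[-1])
    num = 2 * sum((-1) ** i * V[i - 1] for i in range(1, n - 1))
    num += (-1) ** (n - 1) * V[n - 2]
    return num / (2 * (n - 2))
```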

6. Conclusion

Optimal quantization of univariate discrete functions has been shown to be efficiently computable even when endowed with various types of constraints. The underlying minimization technique is that of dynamic programming for shortest path construction in the quantization graph obeying all the constraints.

Future work will address quantization of multivariate discrete functions, with the two-dimensional case being of particular interest for image approximation in general and image segmentation in particular. One major difficulty of multivariate quantization is entailed by the variety of ways to generalize intervals, the regions where the approximating function is constant, from one to several dimensions.


References

[1] R. Ahuja, T.L. Magnanti, J.B. Orlin, Network Flows, Prentice-Hall, Englewood Cliffs, 1993.
[2] J. Baxter, The canonical distortion measure for vector quantization and function approximation, Proceedings of the 14th International Conference on Machine Learning, 1997, pp. 39–47.
[3] G.C. Cowley, An improved vector quantization algorithm for speech transmission over noisy channels, School of Information Systems, University of East Anglia, Norwich, 1997.
[4] J. Franklin, Methods of Mathematical Economics, Springer, New York, 1980.
[5] A. Gersho, R.M. Gray, Vector Quantization and Signal Compression, Kluwer, Boston, 1992.
[6] P.E. Gill, W. Murray, M.H. Wright, Practical Optimization, Academic Press, London, 1992.
[7] J. Hollmén, V. Tresp, O. Simula, A learning vector quantization algorithm for probabilistic models, Proceedings European Signal Processing Conference EUSIPCO, Vol. II, 2000, pp. 721–724.
[8] T. Kämpke, Optimal and near optimal quantization of integrable functions, Comput. Math. Appl. 40 (2000) 1315–1347.
[9] T. Kämpke, Efficient quantization algorithms for discrete functions, J. Interdisciplinary Math. 7 (2003), to appear.
[10] T. Kämpke, A. Elfes, Estimation of superresolution images: the one-dimensional case, Proceedings of the 15th International Conference on Pattern Recognition ICPR, Barcelona, Vol. 1, 2000, pp. 584–587.
[11] T. Kämpke, R. Kober, Nonparametric image segmentation, Pattern Anal. Appl. 1 (1998) 145–154.
[12] T. Kämpke, R. Kober, Discrete signal quantization, Pattern Recognition 32 (1999) 619–634.
[13] D. Mumford, J. Shah, Optimal approximations by piecewise smooth functions and associated variational problems, Comm. Pure Appl. Math. 42 (1989) 577–685.
[14] P.S. Vassilewski, J.G. Wade, A comparison of multilevel methods for total variation regularization, Electron. Trans. Numer. Anal. 6 (1997) 255–270.