
LLE Algorithm in Natural Image Matting
Junbin Gao, David Tien, Member, IEEE, and Jim Tulip

Abstract—Accurately extracting foreground objects in images and video has wide applications in digital photography. These kinds of problems are referred to as image matting. The most recent work [1] in natural image matting relies on local smoothness assumptions about foreground and background colours, on which a cost function has been established. The closed-form solution has been derived based on a certain degree of user input. In this paper, we present a framework for formulating a new cost function from the manifold learning perspective based on the so-called Locally Linear Embedding [2], where the local smoothness assumptions are replaced by an implicit manifold structure defined in local colour spaces. We illustrate our new algorithm on standard benchmark images and obtain very comparable results.

Index Terms—Image Matting, Locally Linear Embedding, Manifold Learning, Alpha Matte, Nesterov's Method

I. INTRODUCTION

As usual, we refer to the problem of softly extracting the foreground object from a single image as image matting. These kinds of problems are important in both computer vision and graphics applications. Image matting has been widely used in film production and is a key technique in image/video editing. The past twenty years have witnessed rapid development in digital matting approaches. A 2007 survey article [3] provided a comprehensive review of existing image and video matting algorithms and systems, with an emphasis on the advanced techniques that had been recently proposed.

Most existing digital matting techniques depend on the so-called alpha matte. Mathematically, the alpha matte is based on the following model assumption

I = αF + (1− α)B (1)

where α, I, F and B are the alpha matte value, the image colour value, the foreground colour value and the background colour value, respectively, at any given pixel. The alpha matte value α may be assumed to be 0 or 1 (a hard matte), which clearly distinguishes foreground from background pixels. However, it is more convenient to assume that α lies between 0 and 1, giving a soft matte.
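To make the compositing model (1) concrete, the following sketch applies it per pixel with NumPy; the tiny foreground, background and matte arrays are hypothetical values chosen purely for illustration.

```python
import numpy as np

# Hypothetical 2x2 RGB foreground/background and an alpha matte.
F = np.array([[[1.0, 0.0, 0.0]] * 2] * 2)   # pure red foreground
B = np.array([[[0.0, 0.0, 1.0]] * 2] * 2)   # pure blue background
alpha = np.array([[1.0, 0.5],
                  [0.5, 0.0]])              # soft matte values in [0, 1]

# Composite each pixel according to (1): I = alpha * F + (1 - alpha) * B
I = alpha[..., None] * F + (1 - alpha[..., None]) * B

print(I[0, 1])  # a 50/50 blend: [0.5, 0.0, 0.5]
```

The matting problem runs this equation in reverse: given only I, recover α (and F, B) at every pixel.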

In a matting problem, the goal is to estimate α, F and B simultaneously, given the image pixel information I for all the pixels. Obviously this is a severely ill-posed problem. Many existing matting algorithms and systems require a certain degree of user interaction to extract a good matte. For example, a so-called trimap is usually supplied to matting algorithms or systems, which indicates definite foreground, definite background and unknown regions. Usually the unknown regions are around the boundary of the objects to be extracted. To infer the alpha matte values in these regions, Bayesian matting [4] uses a progressively moving window marching inward from the known regions. Bai and Sapiro [5] proposed calculating the alpha matte through the geodesic distance from a pixel to the known regions. Among the propagation-based approaches, the Poisson matting algorithm [6] assumes the foreground and background colours are smooth in a narrow band of unknown pixels, then solves a homogeneous Laplacian system; a similar algorithm based on Random Walks is proposed in [7]. The closed-form matting approach was proposed by Levin et al. [1] by introducing the matting Laplacian matrix under the assumption that foreground and background colours can be fitted with linear models in local windows, which leads to a quadratic cost function in alpha that can be minimized globally. Minimizing the quadratic cost function is equivalent to solving a linear system, which is often time-consuming when the image is large. In a recent work [8], He et al. proposed a fast matting scheme using large-kernel matting Laplacian matrices. A review and assessment of more matting techniques can be found at the alpha matting evaluation website http://www.alphamatting.com/.

Junbin Gao is with the School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia; email: [email protected]

David Tien is with the School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia; email: [email protected]

Jim Tulip is with the School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia; email: [email protected]

In this paper, we consider using traditional manifold learning algorithms [9], [10], such as Locally Linear Embedding (LLE) [2], to solve the problem of image matting. As in [1], we still use so-called user-specified constraints (such as scribbles or a bounding rectangle). Similar approaches can also be found in [1], [11], [12]. In deriving the matting Laplacian matrices [1], the authors used the assumption that the foreground (or background) colours in a local window lie on a single line in the RGB colour space. This assumption is naturally replaced with the smooth manifold assumption, which is a purely mathematical assumption in learning a smooth manifold. Hence, a locally linear learning model such as LLE can be utilized to capture such linear information, which in turn assists matte learning. One of the innovations offered by our approach is to take a natural definition of pixel neighbourhood on the colour manifold as the neighbourhood used in most standard manifold learning algorithms such as LLE. It is hoped that the idea of using manifold learning or a dimensionality reduction approach for image matting, as offered in this paper, will inspire more exploration in this direction.

The paper is organized as follows. Section II briefly introduces the Locally Linear Embedding (LLE) manifold learning algorithm [2], [10]. Section III is dedicated to formulating the new algorithms based on LLE. In Section IV, we present several examples of using the new image matting approach on the benchmark images, see [1]. We make our conclusions in Section V with suggestions for further exploring the application of dimensionality reduction algorithms in image matting.

The 7th International Conference on Information Technology and Applications (ICITA 2011)

II. LOCALLY LINEAR EMBEDDING ALIGNMENT MATRIX

The locally linear embedding (LLE) algorithm was recently proposed as a powerful eigenvector method for the problem of nonlinear dimensionality reduction [2]. The key assumption of LLE is that, even if the manifold embedded in a high-dimensional space is nonlinear when considered as a whole, it can still be assumed to be locally linear, provided each data point and its neighbours lie on or close to a locally linear patch of the manifold. LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. In the last decade, many LLE-based algorithms have been developed and introduced in the machine learning community: kernelized LLE (KLLE) [13], Laplacian Eigenmap (LEM) [14], Hessian LLE (HLLE) [15], robust LLE [16], weighted LLE [17], enhanced LLE [18] and supervised LLE [19].

By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. The LLE method is also widely used for data visualization [2], classification [17], [19] and fault detection [20].

Because of the assumption that local patches are linear, that is, each of them can be approximated by a linear hyperplane, each data point can be represented by a weighted linear combination of its nearest neighbours (the neighbourhood can be defined in different ways). The coefficients of this approximation characterize local geometries in the high-dimensional space, and they are then used to find low-dimensional embeddings preserving these geometries in a low-dimensional space. The main point in replacing the nonlinear manifold with linear hyperplanes is that this operation does not introduce significant error because, when analyzed locally, the curvature of the manifold is not large, i.e., the manifold can be considered locally flat. The result of LLE is a single global coordinate system.
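The two LLE steps described above — fitting local reconstruction weights, then finding a global low-dimensional embedding that preserves them — can be sketched in NumPy. The toy curve data, neighbourhood size and regularization constant below are our own illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 60 noisy samples of a 1-D curve embedded in 3-D (illustrative only).
t = np.sort(rng.uniform(0.0, 3.0, 60))
X = np.stack([np.cos(t), np.sin(t), t], axis=1) + 0.01 * rng.standard_normal((60, 3))
n, K = len(X), 8

# Step 1: fit each point as a weighted combination of its K nearest neighbours,
# solving a local least-squares problem under the sum-to-one constraint.
W = np.zeros((n, n))
for i in range(n):
    dist = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dist)[1:K + 1]              # exclude the point itself
    Z = X[nbrs] - X[i]
    G = Z @ Z.T                                   # local Gram matrix
    G += 1e-6 * np.trace(G) * np.eye(K)           # regularize for stability
    w = np.linalg.solve(G, np.ones(K))
    W[i, nbrs] = w / w.sum()                      # enforce sum-to-one

# Step 2: the low-dimensional coordinates are the bottom eigenvectors of
# M = (I - W)^T (I - W), skipping the constant eigenvector (eigenvalue ~0).
M = (np.eye(n) - W).T @ (np.eye(n) - W)
eigvals, eigvecs = np.linalg.eigh(M)
Y = eigvecs[:, 1:2]                               # 1-D embedding of the curve

print(Y.shape)  # (60, 1)
```

Because every row of W sums to one, the all-ones vector lies in the null space of M, which is why the bottom (constant) eigenvector is discarded.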

The argument we make for using LLE for image matting is that, like the application of the Laplacian alignment matrix used in [1], the local linear structure among the colour information over a local window can be learnt and transferred to the matting value space. This observation suggests intuitively that the LLE alignment matrix could be successful in image matting.

In the following sections, we use $X_i = \{I_{i_j} \mid i_j \in w_i,\ j = 1, \ldots, K\}$ to denote the subset of colour vectors over a local window $w_i$ of pixel $i$. Note that the pixel $I_i$ is contained in $X_i$. Under the LLE assumption, the colour vector $I_i$ at pixel $i$ can be approximated by a linear combination, with the so-called reconstruction weights $w_{ij}$, of its $K - 1$ nearest neighbours $X_i \setminus I_i$. Hence, LLE fits a hyperplane through $I_i$ and its nearest neighbours in the colour manifold defined over the image pixels. The fitting is achieved by solving the following optimization problem

$$\min_W \sum_i \Big\| I_i - \sum_{j=1}^{K} w_{ij} I_{i_j} \Big\|^2$$

under the condition $\sum_{j=1}^{K} w_{ij} = 1$. For the sake of simplicity, we assume that $w_{ij} = 0$ when $I_{i_j} = I_i$. Assuming that the manifold is locally linear, the local linearity may be preserved in the space of matting values. Once the weights $W$ have been determined, the matting values $\alpha$ can be determined by minimizing the following objective function

$$\min_\alpha F(\alpha) = \sum_i \Big\| \alpha_i - \sum_{j=1}^{K} w_{ij} \alpha_{i_j} \Big\|^2 = \alpha^T R \alpha \qquad (2)$$

where $R = (E - W)^T (E - W)$ is called the LLE alignment matrix and $E$ is the identity matrix.

In the standard LLE algorithm, (2) is usually solved with additional constraints such as $\|\alpha\|^2 = 1$, which results in an eigenvector problem. Instead of such standard constraints, in image matting we formulate a matting solution by including appropriate constraints. We discuss this issue in the next section.
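As a sanity check on the construction of the alignment matrix $R$, here is a small NumPy/SciPy sketch that computes sum-to-one reconstruction weights from colour neighbourhoods and assembles the sparse matrix $R = (E - W)^T(E - W)$; the synthetic colours and neighbourhood size are hypothetical stand-ins for real image windows. Because each row of $W$ sums to one, a constant matte incurs zero cost under (2).

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)

# Synthetic "image": n pixels with 3-channel colours (hypothetical data).
n, K = 100, 10
colours = rng.random((n, 3))

rows, cols, data = [], [], []
for i in range(n):
    dist = np.linalg.norm(colours - colours[i], axis=1)
    nbrs = np.argsort(dist)[1:K + 1]              # K nearest colour neighbours
    Z = colours[nbrs] - colours[i]
    G = Z @ Z.T                                   # local Gram matrix
    G += 1e-6 * np.trace(G) * np.eye(K) + 1e-12 * np.eye(K)  # stabilize
    w = np.linalg.solve(G, np.ones(K))
    w /= w.sum()                                  # sum-to-one constraint
    rows.extend([i] * K)
    cols.extend(nbrs.tolist())
    data.extend(w.tolist())

W = sparse.csr_matrix((data, (rows, cols)), shape=(n, n))
E = sparse.identity(n, format="csr")
R = (E - W).T @ (E - W)                           # LLE alignment matrix

# A constant matte has zero cost: alpha^T R alpha = 0 when alpha = c * 1.
alpha = np.ones(n)
print(float(alpha @ (R @ alpha)))                 # ~0 up to rounding
```

This null space of constants is exactly why, without the scribble constraints introduced next, (2) alone cannot pin down a unique matte.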

III. ALGORITHM FORMULATION BASED ON LLE

A. Formulation

In machine learning terminology, image matting is an unsupervised learning problem. As we pointed out in the introduction, one of the major approaches in image matting is to use a so-called trimap or user-scribbles. The information given in a trimap or by user-scribbles is that, for a group of pixels in the image, their alpha matte values are known.

For this purpose, let $\Omega$ be a subset of pixels and $\Gamma_\Omega$ be the operator mapping the alpha matte $\alpha$ to the given alpha matting values $\alpha_0$ (trimap and user-scribbles) over $\Omega$. Then the matting problem is to minimize the following objective function with respect to $\alpha$ such that $\Gamma_\Omega(\alpha) = \alpha_0$:

$$\min_{\Gamma_\Omega(\alpha) = \alpha_0} F(\alpha) = \alpha^T R \alpha,$$

where $R$ is the LLE alignment matrix.

To get a smoother alpha matte $\alpha$, we propose to formulate two optimization problems for our image matting:

1) Scribble Smoothing [1]: Instead of using hard scribble constraints, we seek $\alpha$ by smoothing the matting values using
$$\min_\alpha\ \alpha R \alpha^T + \lambda (\alpha - \alpha_\Omega) D_\Omega (\alpha^T - \alpha_\Omega^T) \qquad (3)$$
where $\lambda$ is some large regularization parameter, $D_\Omega$ is a diagonal matrix whose diagonal elements are one for constrained pixels (in the trimap or user-scribbles) and zero for all other pixels, and $\alpha_\Omega$ is the vector containing the specified alpha values for the constrained pixels and zero for all other pixels.

2) Constrained-Scribble Smoothing: In this formulation, we constrain $\alpha$ to its soft range $0 \le \alpha_i \le 1$:
$$\min_{0 \preceq \alpha \preceq 1}\ \alpha R \alpha^T + \lambda (\alpha - \alpha_\Omega) D_\Omega (\alpha^T - \alpha_\Omega^T) \qquad (4)$$
where $0 \preceq \alpha \preceq 1$ means the elements of the vector $\alpha$ are between 0 and 1.

B. Algorithms

It is easy to solve the optimization problem (3) for scribble smoothing, as it is an unconstrained quadratic programming problem. To enforce the user-scribble conditions $\alpha_\Omega$ over the pixels $\Omega$, the algorithm in [1] takes a large regularizer such as $\lambda = 100$. Thus the objective function can be optimized by solving a sparse linear system:

$$(R + \lambda D_\Omega)\,\alpha = \lambda \alpha_\Omega. \qquad (5)$$
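A minimal sketch of solving the sparse system (5) follows, assuming a hypothetical 1-D "image" whose alignment matrix reconstructs each pixel as the average of its two neighbours; this stand-in $R$, the scribble placement and $\lambda = 100$ are illustrative choices, not the paper's actual colour-window construction.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

n, lam = 20, 100.0

# Stand-in alignment matrix for a 1-D "image": each interior pixel is
# reconstructed as the average of its two neighbours (rows of W sum to one).
W = sparse.lil_matrix((n, n))
for i in range(1, n - 1):
    W[i, i - 1] = W[i, i + 1] = 0.5
W[0, 1] = W[n - 1, n - 2] = 1.0
E = sparse.identity(n, format="csr")
R = ((E - W.tocsr()).T @ (E - W.tocsr())).tocsr()

# Scribbles: pixel 0 is background (alpha = 0), pixel n-1 is foreground (1).
D = sparse.diags([1.0] + [0.0] * (n - 2) + [1.0])
alpha_s = np.zeros(n)
alpha_s[-1] = 1.0

# Solve the sparse system (R + lambda * D_Omega) alpha = lambda * alpha_Omega.
alpha = spsolve((R + lam * D).tocsr(), lam * alpha_s)

print(round(alpha[0], 3), round(alpha[-1], 3))    # scribbled ends near 0 and 1
```

The large $\lambda$ makes the system strictly positive definite, so the scribbled pixels stay close to their specified values while the alignment term propagates the matte between them.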

However, solving the optimization problem defined in (4) is much harder, as it is a quadratic programming problem with a set of linear constraints. One approach to solving a quadratic programming problem is an interior point method, which uses Newton-like iterations to find a solution of the Karush-Kuhn-Tucker conditions of the primal and dual problems. The computational complexity is polynomial in the size of the matrix $R$. Even for a moderately sized image, it is intractable to use a standard quadratic optimization algorithm, as the number of variables is equal to the number of pixels in the whole image. Fortunately, given the special form of the constraints involved in the problem, a trust-region-like algorithm as proposed in [21] can be used for the optimization problem defined in (4). The algorithm has a fast convergence rate of $O(1/k^2)$, where $k$ is the number of iterations. However, the trust-region-like algorithm is very expensive because the second-order derivative of the objective function is needed.

However, we also note that the problem is convex and smooth. In the following sections, we propose applying the optimal first-order black-box method for smooth convex optimization, i.e., Nesterov's method [22]–[24], to achieve a convergence rate of $O(1/k^2)$. We first construct the following model for approximating the objective function $F(\alpha)$ in (4) at a point $\bar\alpha$:
$$h_{C,\bar\alpha}(\alpha) = F(\bar\alpha) + F'(\bar\alpha)(\alpha - \bar\alpha)^T + \frac{C}{2}\|\alpha - \bar\alpha\|^2, \qquad (6)$$

where $C > 0$ is a constant.

With model (6), Nesterov's method is based on two sequences $\{\alpha_k\}$ and $\{s_k\}$, in which $\{\alpha_k\}$ is the sequence of approximate solutions while $\{s_k\}$ is the sequence of search points. The search point $s_k$ is an affine combination of $\alpha_{k-1}$ and $\alpha_k$:
$$s_k = \alpha_k + \beta_k(\alpha_k - \alpha_{k-1})$$
where $\beta_k$ is a properly chosen coefficient. The approximate solution $\alpha_{k+1}$ is computed as the minimizer of $h_{C_k, s_k}(\alpha)$. It can be proved that

$$\alpha_{k+1} = \arg\min_{0 \preceq \alpha \preceq 1} \frac{C_k}{2}\left\| \alpha - \left( s_k - \frac{1}{C_k} F'(s_k) \right) \right\|^2 \qquad (7)$$

where $C_k$ is determined by line search according to the Armijo–Goldstein rule so that $C_k$ is appropriate for $s_k$ (see [24]). The $\alpha_{k+1}$ defined by (7) is actually the projection of the vector $s_k - \frac{1}{C_k}F'(s_k)$ onto the convex set $\{\alpha \mid 0 \preceq \alpha \preceq 1\}$. We can easily work out this projection in closed form:
$$\alpha_{k+1} = \max\left\{0, \min\left\{1,\ s_k - \frac{1}{C_k}F'(s_k)\right\}\right\} \qquad (8)$$
where both max and min operate component-wise over vectors, as in Matlab.

An efficient algorithm for (4) is summarized as Algorithm 1.

Algorithm 1: The Efficient Nesterov's Algorithm
Input: $C_0 > 0$, $\alpha_0$, and $K$
Output: $\alpha_{k+1}$
1: Initialize $\alpha_1 = \alpha_0$, $\gamma_{-1} = 0$, $\gamma_0 = 1$ and $C = C_0$.
2: for $k = 1$ to $K$ do
3:   Set $\beta_k = \frac{\gamma_{k-2} - 1}{\gamma_{k-1}}$, $s_k = \alpha_k + \beta_k(\alpha_k - \alpha_{k-1})$
4:   Find the smallest $C = C_{k-1}, 2C_{k-1}, \ldots$ such that
       $F(\alpha_{k+1}) \le h_{C, s_k}(\alpha_{k+1})$,
     where $\alpha_{k+1}$ is defined by (8).
5:   Set $C_k = C$ and $\gamma_{k+1} = \frac{1 + \sqrt{1 + 4\gamma_k^2}}{2}$
6: end for
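Algorithm 1 can be sketched as follows on a small dense stand-in problem: the objective matches (4), the box projection is (8), and the doubling line search implements step 4. The 1-D alignment matrix and scribble setup are hypothetical, chosen only so the sketch is self-contained.

```python
import numpy as np

def nesterov_box(R, lam, D, alpha_s, K=300, C0=1.0):
    """Projected Nesterov sketch for (4):
    min_{0 <= alpha <= 1} alpha^T R alpha + lam*(alpha-alpha_s)^T D (alpha-alpha_s),
    with D stored as a diagonal vector."""
    n = len(alpha_s)
    F = lambda a: a @ R @ a + lam * (a - alpha_s) @ (D * (a - alpha_s))
    grad = lambda a: 2 * R @ a + 2 * lam * D * (a - alpha_s)

    a_prev = a = np.zeros(n)
    gamma_prev, gamma, C = 0.0, 1.0, C0
    for _ in range(K):
        beta = (gamma_prev - 1.0) / gamma            # step 3
        s = a + beta * (a - a_prev)
        g = grad(s)
        while True:                                  # step 4: doubling line search
            a_new = np.clip(s - g / C, 0.0, 1.0)     # projection (8)
            h = F(s) + g @ (a_new - s) + C / 2 * np.sum((a_new - s) ** 2)
            if F(a_new) <= h + 1e-12:                # model (6) dominates F
                break
            C *= 2.0
        a_prev, a = a, a_new
        gamma_prev, gamma = gamma, (1.0 + np.sqrt(1.0 + 4.0 * gamma ** 2)) / 2.0
    return a

# Stand-in problem: 1-D strip whose interior pixels are reconstructed as the
# average of their neighbours; ends scribbled as background (0) and foreground (1).
n, lam = 20, 100.0
W = np.zeros((n, n))
for i in range(1, n - 1):
    W[i, i - 1] = W[i, i + 1] = 0.5
W[0, 1] = W[n - 1, n - 2] = 1.0
R = (np.eye(n) - W).T @ (np.eye(n) - W)
D = np.array([1.0] + [0.0] * (n - 2) + [1.0])
alpha_s = np.zeros(n)
alpha_s[-1] = 1.0

alpha = nesterov_box(R, lam, D, alpha_s)
print(alpha.min() >= 0.0 and alpha.max() <= 1.0)     # prints True
```

Unlike solving (5) directly, the iterates here satisfy the box constraints of (4) exactly at every step, since each update ends with the clip in (8).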

C. Reconstruction of Foreground and Background Images

After solving for the alpha values $\alpha$, we need to reconstruct the foreground $F$ and background $B$. For this purpose, we take the same strategy as [1] to reconstruct $F$ and $B$ by using the composition equation (1) with certain smoothness priors on both $F$ and $B$. $F$ and $B$ are obtained by optimizing the following objective function

$$\min_{F,B} \sum_i \| \alpha_i F_i + (1 - \alpha_i) B_i - I_i \|^2 + \langle \partial\alpha_i,\ (\partial F_i)^2 + (\partial B_i)^2 \rangle$$

where $\partial$ is the gradient operator over the image grid. For a fixed $\alpha$, the problem is quadratic and its minimum can be found by solving a set of linear equations.

IV. EXPERIMENT RESULTS

All the algorithms are implemented in MATLAB on a small workstation with 32 GB of memory. In all the calculations, we set $\lambda = 100$, see (5), in both algorithms.

A. Experiment I

In this experiment, we aim to compare the performance of our newly suggested LLE alignment matrix with the Laplacian alignment matrix introduced in [1] for image matting, under the Scribble Smoothing formulation. Similar to the closed-form solution, we solve (5) for the matte values.


Fig. 1. Images and Strokes

For this experiment, two images from the original paper describing the closed-form solution for image matting [1] are used for tests. Figure 1 presents the images and their stroked versions as used in the algorithm comparison.

Figure 2 presents matting results on the two images. In Figure 2, the first and third columns show the results from the closed-form solution, while the second and fourth columns show the results from the Scribble Smoothing formulation based on the LLE alignment matrix. The results given by the LLE alignment matrix look visually comparable to the results in [1].

Fig. 2. Mattes from strokes with the closed-form solution for the Laplacian alignment matrix (the first and third columns) and Scribble Smoothing for the LLE alignment matrix (the second and fourth columns): (a) Learned masks (the first row). (b) Reconstructed foreground images (the second row). (c) Extracted background images (the third row).

B. Experiment II

In this experiment, the primary goal is to assess the performance of the two formulations presented in this paper: Scribble Smoothing (SS) and Constrained-Scribble Smoothing (CSS). For this purpose, we use benchmark test images taken from the matting website http://www.alphamatting.com. On the website, there are four types of test images. The images we use are (A) plastic bag (Highly Transparent); (B) troll (Strongly Transparent); (C) donkey (Medium Transparent) and (D) pineapple (Little Transparent). The four original images and their strokes used in the experiment are displayed in Figure 3.

The results for the highly transparent image plastic bag and the strongly transparent image troll are shown in Figure 4, in which the first and third columns are for the SS formulation while the second and fourth columns are for the CSS formulation. The first row shows the learnt alpha matte for each image and formulation, while the second and third rows show the extracted foreground and background images.

(a) Plastic Bag (b) Doll

(c) Donkey (d) Pineapple

Fig. 3. Images and Strokes: (a) Highly Transparent. (b) Strongly Transparent. (c) Medium Transparent. (d) Little Transparent.

It is obvious that, for the highly transparent plastic bag, the learnt alpha matte from the CSS formulation is much better than the one from the SS formulation, while for the strongly transparent troll the results from both formulation schemes are comparable to each other.

Fig. 4. Learning Results for the Highly and Strongly Transparent Images

Similarly, we show the results for both the medium transparent donkey image and the little transparent pineapple image in Figure 5.

From this group of results, we can conclude that the performance of the two formulation schemes is comparable, but the CSS formulation slightly outperforms the SS formulation, as can be observed from the part of the background that has been absorbed into the foreground by the SS formulation, e.g., the green spot in the upper-left corner of the pineapple image.

Actually, the matting problem for all the images used in this experiment is very challenging. For example, the colour information of the background and foreground in the pineapple image is quite similar. In this case, we note that the CSS formulation performs much better than the SS formulation scheme.

Fig. 5. Learning Results for the Medium and Little Transparent Images

V. CONCLUSION

This paper proposes two formulations for image matting based on the classical LLE alignment matrix. The experiments have demonstrated that the two formulations are comparable to each other, while in many cases the CSS formulation is slightly better than the SS formulation. The experiments also demonstrated that the SS formulation based on the LLE alignment matrix is comparable to the closed-form solution with the Laplacian alignment matrix. A similar approach [25] has been used for Local Tangent Space Alignment (LTSA), which is an unsupervised learning algorithm that computes low-dimensional, neighbourhood-preserving embeddings of high-dimensional inputs, see [26]. The argument used in [25] to apply the LTSA algorithm to image matting is that, like the LLE approach, local linear structure among the colour information over a local window can be transferred to the matting value space. This kind of observation is intuitively very encouraging for manifold learning methods to be successful in image matting.

ACKNOWLEDGMENT

The authors are partially supported by Charles Sturt University Competitive Research Grants OPA 4818 and OPA 4356.

REFERENCES

[1] A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228–242, 2008.

[2] S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, pp. 2323–2326, 2000.

[3] J. Wang and M. F. Cohen, “Image and video matting: A survey,” Foundations and Trends in Computer Graphics and Vision, vol. 3, no. 2, pp. 97–175, 2007.

[4] Y. Chuang, B. Curless, D. Salesin, and R. Szeliski, “A Bayesian approach to digital matting,” in Proceedings of CVPR, vol. 2, 2001, pp. 264–271.

[5] X. Bai and G. Sapiro, “A geodesic framework for fast interactive image and video segmentation and matting,” in Proceedings of International Conference on Computer Vision, 2007, pp. 1–8.

[6] J. Sun, J. Jia, C. K. Tang, and H. Y. Shum, “Poisson matting,” ACM Trans. Graph., vol. 23, no. 3, pp. 315–321, 2004.

[7] L. Grady, T. Schiwietz, and S. Aharon, “Random walks for interactive alpha-matting,” in VIIP, 2005.

[8] K. He, J. Sun, and X. Tang, “Fast matting using large kernel matting Laplacian matrices,” in Proceedings of CVPR 2010, 2010, pp. 2165–2172.

[9] L. van der Maaten, E. O. Postma, and H. van den Herik, “Dimensionality reduction: A comparative review,” Tilburg University, Technical Report TiCC-TR 2009-005, 2009.

[10] N. D. Lawrence, “A unifying probabilistic perspective for spectral dimensionality reduction,” Sheffield Institute for Translational Neuroscience, University of Sheffield, Tech. Rep., 2010, arXiv:1010.4830v1.

[11] Y. Li, J. Sun, C. K. Tang, and H. Y. Shum, “Lazy snapping,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 303–308, 2004.

[12] C. Rother, V. Kolmogorov, and A. Blake, “Grabcut: Interactive foreground extraction using iterated graph cuts,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 309–314, 2004.

[13] D. DeCoste, “Visualizing Mercer kernel feature spaces via kernelized locally-linear embeddings,” in Proc. of the Eighth International Conference on Neural Information Processing, 2001.

[14] M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Computation, vol. 15, pp. 1373–1396, 2003.

[15] D. L. Donoho and C. Grimes, “Hessian eigenmaps: locally linear embedding techniques for high-dimensional data,” Proceedings of the National Academy of Sciences, vol. 100, 2003, pp. 5591–5596.

[16] H. Chang and D. Yeung, “Robust locally linear embedding,” Pattern Recognition, vol. 39, pp. 1053–1065, 2006.

[17] Y. Pan, S. Ge, and A. Al-Mamun, “Weighted locally linear embedding for dimension reduction,” Pattern Recognition, vol. 42, pp. 798–811, 2009.

[18] S. Q. Zhang, “Enhanced supervised locally linear embedding,” Pattern Recognition Letters, vol. 30, no. 13, pp. 1208–1218, 2009.

[19] L. Zhao and Z. Zhang, “Supervised locally linear embedding with probability-based distance for classification,” Computers & Mathematics with Applications, vol. 57, no. 6, pp. 919–926, 2009.

[20] W. L. Zheng, N. Ru, and F. H. Yao, “Locally linear embedding for fault diagnosis,” Advanced Materials Research, vol. 139-141, pp. 2599–2602, 2010.

[21] C. J. Lin and J. J. Moré, “Newton's method for large bound-constrained optimization problems,” SIAM Journal on Optimization, vol. 9, pp. 1100–1127, 1999.

[22] Y. Nesterov, “A method for solving a convex programming problem with convergence rate O(1/k^2),” Soviet Math. Dokl., vol. 27, pp. 372–376, 1983. [Online]. Available: http://www.core.ucl.ac.be/~nesterov/Research/Papers/DAN83.pdf

[23] ——, “Gradient methods for minimizing composite objective function,” Université catholique de Louvain, Belgium, CORE Discussion Paper 76, 2007. [Online]. Available: http://www.uclouvain.be/cps/ucl/doc/core/documents/Composit.pdf

[24] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.

[25] J. Gao and J. Liu, “The image matting method with zero-one regularization,” to be submitted, 2011.

[26] Z. Zhang and H. Zha, “Principal manifolds and nonlinear dimension reduction via local tangent space alignment,” SIAM Journal on Scientific Computing, vol. 26, pp. 313–338, 2004.
