
COMPUTER ANIMATION AND VIRTUAL WORLDS
Comp. Anim. Virtual Worlds 2015; 26:291–300

Published online 4 May 2015 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/cav.1653

SPECIAL ISSUE PAPER

Garment capture from a photograph

Moon-Hwan Jeong*, Dong-Hoon Han and Hyeong-Seok Ko*

Department of Electrical and Computer Engineering, Graphics and Media Lab, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, Korea

ABSTRACT

This paper presents a new method that creates a virtual garment from a single photograph of a real garment worn on a mannequin. In solving this problem, we draw insight from pattern-drafting theory in the clothing field. We abstract the drafting process into a computer module that takes the garment type and primary body sizes as input and produces the draft as output. The problem is then reduced to finding the garment type and the primary body sizes. We obtain that information by analyzing the silhouette of the garment with respect to the mannequin. The method works robustly and produces practically usable virtual clothes suitable for graphical coordination. Copyright © 2015 John Wiley & Sons, Ltd.

KEYWORDS

garment modeling; photograph; pattern-drafting theory; mesh generation; virtual garment; texture extraction

*Correspondence

Moon-Hwan Jeong and Hyeong-Seok Ko, Graphics and Media Lab, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, Korea. E-mail: [email protected]; [email protected]

1. INTRODUCTION

The creation of virtual garments is in demand in various applications. This paper notes that such demand also arises from consumers at home who would like to graphically coordinate the clothes in their closets on their own avatars. For that purpose, the existing garments need to be converted into virtual garments.

For the consumer, using computer-aided design (CAD) programs to digitize her clothing collection (i.e., identifying and creating the comprising cloth panels, positioning the panels around the body, defining the seams, extracting and mapping the textures, and then draping on the avatar) is practically out of the question. That job is difficult and cumbersome even for clothing experts. This paper proposes a new method to instantly create a virtual garment from a single photograph of an existing garment worn on a mannequin; the setup is shown in Figure 1.

Millimeter-scale accuracy in the sewing pattern is not the quality this method promises. From this limited information, the method aims to create practically usable clothes that are just sufficient for graphical outfit coordination. For this purpose, the proposed method is very successful. As Figure 2 and the other reported results demonstrate, the method creates practically usable clothes and works very robustly.

We attribute this success to the two novel approaches this paper takes: it is (i) silhouette based and (ii) pattern based. The use of vision-based techniques is not new in the context of virtual garment creation. Instead of trying to analyze the interior of the foreground, however, this paper devises a garment creation algorithm that uses only the silhouette, which can be captured much more robustly. This robustness trades off against foreground details such as buttons or collars, but we give those up in this paper to obtain a practically usable technique.

Another departure this paper makes is that, instead of working directly in the 3D-shape space, it works in the 2D-pattern space. In fact, our method is based on pattern-drafting theory, which is well established in conventional pattern-making studies [1]. The proposed method differs from sketch- or photograph-based shape-in-3D-then-flatten approaches in that it does not call for flattening of 3D surfaces. Flattening of a triangular mesh cannot be performed exactly in the theoretical (differential-geometric) sense and thus inevitably introduces errors, which appear as unnaturalness to keen human eyes. Our method's avoidance of flattening contributes significantly to producing more realistic results.

Because it is based on pattern drafting, our work is applicable only to the types of garments whose drafts have already been acquired. In this work, whose goal is to demonstrate the potential of the proposed approach, we limit the scope to the simple casual designs (shirt, skirt, pants, and one-piece dress) shown in Figure 3.


Figure 1. The setup for the garment capture.


Figure 2. The proposed method GarmCap takes a photograph (a) and produces its 3D virtual garment (b).



Figure 3. Input photograph (left) versus captured result (right). The captured result was obtained by performing physically based draping simulation on the 3D mannequin model.


To summarize the contribution: to our knowledge, the proposed work is the first photograph-based virtual garment creation technique that is based on pattern drafting.

2. PREVIOUS WORK

In the graphics field, there have been various studies on creating virtual garments. Turquin et al. [2] proposed a sketch-based framework in which the user sketches silhouette lines in 2D with respect to the body, which are then converted into a 3D garment. Decaudin et al. [3] proposed a more comprehensive technique that improved Turquin et al.'s work with a developability approximation and geometric modeling of fabric folds. A recent sketch-based method [4] is based on context-aware interpretation of the sketch strokes. We note that the above techniques are targeted at novel garment creation, not at capturing existing garments.

Some researchers used implicit markers (i.e., printed patterns) to capture the 3D shape of the garment [5–7]. Tanie et al. [5] presented a method for capturing detailed human motion and garment mesh from a suit covered with meshes created from retro-reflective tape. Scholz et al. [6] used a garment on which a specialized color pattern was printed, which enabled reproduction of the 3D garment shape by establishing correspondences among multi-view images. White et al. [7] used a color pattern of tessellated triangles to capture the occluded parts as well as the folded geometry of the garment. We note that the above techniques are applicable to specially created clothes but not to the clothes in consumers' closets.


A number of marker-free approaches have also been proposed for capturing garments from multi-view video [8–11]. Bradley et al. [8] proposed a method based on establishing a temporally coherent parameterization between time-steps. Vlasic et al. [9] performed skeletal pose estimation of the articulated figure, which was then used to estimate the mesh shape by processing the multi-view silhouettes. de Aguiar et al. [10] took the approach of acquiring a full-body laser scan prior to video recording; then, for each frame of the video, their method recovered the avatar pose and captured the surface details. Popa et al. [12] proposed a method to reintroduce high-frequency folds, which tend to disappear in video-based reconstruction of garments. We note that the above multi-view techniques call for a somewhat professional capture setup.

Zhou et al. [13] presented a method that generates a garment from a single image. Because the method assumes that the garment is symmetric between the front and rear parts, it is hard for it to generate a realistic rear part of the garment. The result can be useful if a clothing expert applies some additional processing, but it is not quite sufficient for graphical coordination of garments.

3. OVERVIEW

Our virtual garment creation is based on drafts. Conventionally, there exists a draft for each garment type (note that a draft is different from a pattern; a draft is a collection of the points and lines that are essential for obtaining the patterns, or cloth panels). Figure 4 shows a typical draft for the one-piece dress. The whole set of panels can be obtained by symmetrizing, mirroring, or applying some variations to the draft.


Figure 4. The one-piece dress draft, which can be determined from the primary body sizes summarized in Table I.

Table I. The primary body sizes for the one-piece dress draft.

Acronym  Meaning
WBL      Waist back length
HL       Hip length
SL       Skirt length
BiSL     Bishoulder length
BP       Bust point to bust point length
BC       Bust circumference
WC       Waist circumference
HC       Hip circumference


We note that, in fact, the drafting can be performed from just a few input parameters [14]. For the one-piece dress draft shown in Figure 4, the required input parameters are the eight sizes summarized in Table I. We call them the primary body sizes (PBSs). Because this work performs the garment capture in the context of pre-acquired drafts, the problem of converting the photographed garment into a 3D virtual garment reduces to the problem of identifying the garment type and the PBSs.
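To make the drafting-module abstraction concrete, the sketch below shows a hypothetical Python interface: the eight PBSs of Table I go in, and draft construction points come out. The function and field names and the placement rules are illustrative assumptions in the spirit of [1,14], not the paper's actual drafting arithmetic.

```python
# Hypothetical sketch of the parameterized-drafting idea; names and the
# placement rules are illustrative, in the spirit of [1,14].
from dataclasses import dataclass

@dataclass
class PrimaryBodySizes:
    """The eight PBSs of Table I, in centimeters."""
    WBL: float   # waist back length
    HL: float    # hip length
    SL: float    # skirt length
    BiSL: float  # bishoulder length
    BP: float    # bust point to bust point length
    BC: float    # bust circumference
    WC: float    # waist circumference
    HC: float    # hip circumference

def draft_one_piece_dress(pbs: PrimaryBodySizes) -> dict:
    """Return draft construction points as (x, y) pairs in cm.

    Only three illustrative points are placed; a full draft derives
    every point and line of Figure 4 from arithmetic on the PBSs.
    """
    return {
        "center_back_neck": (0.0, 0.0),
        # The waist line sits one waist-back-length below the neck.
        "center_back_waist": (0.0, -pbs.WBL),
        # The hip line sits one hip length below the waist line.
        "center_back_hip": (0.0, -(pbs.WBL + pbs.HL)),
    }
```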

Figure 5 gives an overview of the steps of our garment capture technique (GarmCap). From the given photograph, it first extracts the garment silhouette. Based on the garment silhouette, it identifies the garment type and PBSs, which enable creation of the sized draft. Then, it generates the comprising panels. Finally, it performs physically based simulation on the 3D mannequin or avatar.

4. GARMENT CAPTURE

This section presents each of the steps overviewed in Figure 5.

4.1. Off-line Photographing Setup

Our photographing setup (Figure 1) consists of a camera and a mannequin arranged such that the photograph can be taken from the front. The positions of both the camera and the mannequin are fixed, so that photographs taken with and without the garment have pixel-to-pixel correspondence. We use a green background screen, which facilitates extraction of the foreground objects. To minimize the influence of shadows, we used lights of an ambient nature as much as possible. We preprocessed the mannequin (scanned it, graphically modeled it, and stored it in an OBJ file) to obtain its complete 3D geometry as well as its PBSs, so that we can establish the relationship between real-world distance and pixel distance.
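Because the camera and mannequin are fixed and the mannequin's PBSs are known, a single known mannequin length suffices to relate pixels to centimeters. Below is a minimal sketch of that calibration; the function name and the example numbers are ours, as the paper does not spell out the procedure.

```python
def cm_per_pixel(known_length_cm: float, p0: tuple, p1: tuple) -> float:
    """Scale factor relating image pixels to centimeters, from one known
    mannequin length (e.g., BiSL) and its two endpoint pixels in the
    mannequin image."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return known_length_cm / ((dx * dx + dy * dy) ** 0.5)

# Example: a 38 cm bishoulder length spanning pixels (412, 300) to (792, 302).
scale = cm_per_pixel(38.0, (412, 300), (792, 302))
```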


Figure 5. The steps of the proposed garment capture technique (GarmCap).


Figure 6. The steps for obtaining the garment silhouette and landmarks: (a) base mask, (b) garment silhouette, (c) mannequin-silhouette landmark points (red) and garment-silhouette landmark points (blue).


4.2. Obtaining the Garment Silhouette

The first step of GarmCap is garment silhouette extraction, which is based on the GrabCut method [15]. We already have the mannequin mask $M_M$ obtained from the mannequin image, and we can obtain the exposed mask $M_E$, the non-garment region of the input photograph. Subtracting $M_E$ from $M_M$ gives us the base mask $M_B$. Figure 6(a) shows the base mask of the input photograph in Figure 5. Supplied with this base mask, GrabCut can produce the garment silhouette without any user interaction. Figure 6(b) shows the garment silhouette extracted from the input photograph of Figure 5 by this procedure.

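For concreteness, here is a minimal OpenCV sketch of mask-initialized GrabCut seeded with the base mask; the trimap encoding and iteration count are our assumptions, and the paper's exact mask handling may differ.

```python
import cv2
import numpy as np

def garment_silhouette(photo, mannequin_mask, exposed_mask):
    """Seed GrabCut with the base mask M_B = M_M - M_E and return the
    garment silhouette as a binary image. Masks are assumed to be
    uint8 images with values in {0, 255}."""
    base = mannequin_mask & ~exposed_mask            # the base mask M_B

    # Trimap: sure background everywhere, probable foreground inside M_B.
    gc_mask = np.full(photo.shape[:2], cv2.GC_BGD, np.uint8)
    gc_mask[base > 0] = cv2.GC_PR_FGD

    bgd = np.zeros((1, 65), np.float64)              # internal GMM models
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(photo, gc_mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

    fg = (gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8) * 255
```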

4.3. Identifying the Garment Type

With the garment silhouette extracted in Section 4.2, we identify the garment type from the choices in the current garment-type database (DB) (shirt, skirt, pants, and one-piece dress) by searching for the closest match with

$$\arg\min_{S_D} \left\lVert T\,S_I - S_D \right\rVert \tag{1}$$

where $S_I$ is the input garment silhouette image (e.g., Figure 6(b)), $S_D$ is a silhouette in the DB, and $T$ is a transformation that can be an arbitrary combination of

Comp. Anim. Virtual Worlds 2015; 26:291–300 © 2015 John Wiley & Sons, Ltd. 295DOI: 10.1002/cav

Page 6: Garment capture from a photograph - SNUgraphics.snu.ac.kr/publications/journals/2015-Jeong.pdfdraping simulation on the 3D mannequin model. limit the scope to simple casual designs

Garment capture from a photograph M. H. Jeong, D. H. Han and H. S. Ko

rotation, translation, and uniform scaling (the same scale along each axis). After the garment type is identified, we subclassify the type when needed. For example, after a garment is identified as a skirt, we further subclassify it as A-line or H-line. A shirt is subclassified according to its sleeve and neckline. The subclassification is performed in a way similar to that described with Equation (1).
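A brute-force sketch of the matching in Equation (1) is given below, assuming binary silhouette images of equal size; the coarse search grid over rotation and scale (with translation handled by pre-centering the silhouettes) is our choice, not the paper's.

```python
import cv2
import numpy as np

def classify_garment(sil, db):
    """Brute-force version of Equation (1). `db` maps a garment-type name
    to a binary reference silhouette of the same size as `sil`; both are
    assumed pre-centered, so translation is omitted from the grid."""
    h, w = sil.shape
    best_cost, best_type = np.inf, None
    for name, ref in db.items():
        for angle in (-10, -5, 0, 5, 10):            # degrees
            for scale in (0.9, 1.0, 1.1):            # uniform scaling
                M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
                warped = cv2.warpAffine(sil, M, (w, h))
                cost = np.count_nonzero(warped != ref)   # ||T S_I - S_D||
                if cost < best_cost:
                    best_cost, best_type = cost, name
    return best_type
```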

4.4. Identifying the Primary Body Sizes

A few points on the surface of the mannequin are pre-registered as the mannequin-silhouette landmark points (MSLPs). GarmCap identifies them and labels them with red circles, as shown in Figure 6(c). Then, GarmCap labels a few feature points of the photographed garment with blue circles, as shown in Figure 6(c); we call them the garment-silhouette landmark points (GSLPs). For the center waist and bust points, the MSLPs and GSLPs coincide, so the red circles are hidden behind the blue ones. In general, however, there can be some discrepancy. For example, the discrepancy at the waist-left and waist-right points, although measured in 2D, indicates the ease at the waist. Note that the sleeve ends and the skirt end exist only as GSLPs and indicate the lengths of the sleeves and the skirt.


To identify the GSLPs from the garment silhouette, we search the candidate spots of the silhouette image according to

$$\arg\min_{M_L} \left\lVert M_F - M_L \right\rVert \tag{2}$$

where $M_F$ is one of the filters shown in Figure 7 and $M_L$ is a square patch of the silhouette image. Note that this minimization is not misled by local minima, because the search is performed only around the corresponding MSLP. Because the search is performed on the silhouette image with the transformation $T$ of Equation (1) already applied, size mismatch need not be considered here.

Figure 7. Filters for identifying the garment-silhouette landmark points.


Figure 8. Extraction of the texture: (a) original image, (b) triangle mesh, (c) deformed image, (d) deformed mesh, (e) one-repeat texture.


Now we can obtain the PBSs of the garment from the GSLPs identified above. For the circumferences, we reference the geometry of the scanned mannequin body.
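As an illustration, the local search of Equation (2) can be realized with template matching restricted to a window around each MSLP, after which PBS lengths follow from landmark distances and the scale calibrated in Section 4.1. The sketch below is ours; the window size, matching score, and function names are assumptions (the window is also assumed to lie inside the image).

```python
import cv2
import numpy as np

def find_gslp(sil, filt, mslp, win=40):
    """Locate one GSLP: slide the corner filter M_F over a window of the
    silhouette image centered on the corresponding MSLP."""
    x, y = mslp
    patch = sil[y - win:y + win, x - win:x + win]
    score = cv2.matchTemplate(patch, filt, cv2.TM_SQDIFF)  # ||M_F - M_L||
    _, _, min_loc, _ = cv2.minMaxLoc(score)                # best match
    fh, fw = filt.shape
    return (x - win + min_loc[0] + fw // 2, y - win + min_loc[1] + fh // 2)

def pbs_length(gslp_a, gslp_b, cm_per_px):
    """Real-world distance (cm) between two landmark pixels, using the
    scale calibrated in Section 4.1."""
    d = np.hypot(gslp_a[0] - gslp_b[0], gslp_a[1] - gslp_b[1])
    return float(d) * cm_per_px
```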

4.5. Texture Extraction

This section describes how we extract a one-repeat texture from the input image. Texture is a significant part of the garment, without which the captured result would look monotonous. Note that our work is not based on vision-based reconstruction of the original surface; rather, it reproduces the garment by pattern-based construction and simulation. In that approach, conventional texture extraction (i.e., extracting the texture of the whole garment) produces poor results. The proposed method instead calls for extraction of an undistorted one-repeat texture. We propose a simple texture extraction method that can approximately reproduce the visual impression of the original garment in the limited case of regular patterns consisting of straight lines.

We first eliminate the distortion and then extract the one-repeat texture from the undistorted image. We extract the lines by applying the Sobel filter and then construct a 2D triangle mesh based on the extracted lines, as shown in Figure 8(b). We apply the deformation transfer technique [16] to straighten this mesh.
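A minimal sketch of the Sobel-based line extraction follows; the gradient-magnitude threshold is our assumption, as the paper does not give one.

```python
import cv2
import numpy as np

def pattern_lines(img, thresh=80.0):
    """Edge map of the garment pattern via Sobel gradient magnitude; the
    mesh of Figure 8(b) would be triangulated over these lines."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return (cv2.magnitude(gx, gy) > thresh).astype(np.uint8) * 255
```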

To apply the deformation transfer method, we define the affine transformation $T$ as

$$T = \tilde{V} V^{-1} \tag{3}$$

for each triangle, where $V$ and $\tilde{V}$ denote the undeformed and deformed triangle matrices, respectively. Using only the smoothness term $E_S$ and the identity term $E_I$,

$$E_S = \sum_{i=1}^{|t|} \sum_{j \in \mathrm{adj}(i)} \left\lVert T_i - T_j \right\rVert_F^2 \tag{4}$$

$$E_I = \sum_{i=1}^{|t|} \left\lVert T_i - I \right\rVert_F^2 \tag{5}$$

where $|t|$ denotes the number of triangles, we formulate the optimization problem as

$$\min_{\tilde{V}_1 \cdots \tilde{V}_n} E = w_S E_S + w_I E_I \tag{6}$$

$$\text{subject to} \quad y_{\tilde{V}_i} = y_{\tilde{V}_j} \;\; (i, j \in L_h), \qquad x_{\tilde{V}_i} = x_{\tilde{V}_j} \;\; (i, j \in L_v)$$

where $w_S$ and $w_I$ are user-controlled weights, $L_h$ and $L_v$ are the sets of detected horizontal and vertical lines, respectively, and $y_{\tilde{V}_i}$ denotes the $y$ coordinate of vertex $i$. We use the weights $w_S = 1.0$ and $w_I = 0.001$, as in [16]. The optimization produces the straightened results shown in Figure 8(c) and (d). The one-repeat texture (Figure 8(e)) can then be extracted by selecting its four corner points along the parallel straight lines.
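For intuition, consider the degenerate case of Equation (6) with only the identity term and the hard line constraints: each detected horizontal line then snaps to its mean y, and each vertical line to its mean x. The toy sketch below illustrates that case; it is our simplification, not the full per-triangle solve of [16], which also couples neighboring triangles through $E_S$.

```python
import numpy as np

def straighten(verts, h_lines, v_lines):
    """Toy stand-in for Equation (6): with only the identity term and the
    hard line constraints, the optimum snaps each horizontal line to its
    mean y and each vertical line to its mean x. The full method also
    couples neighboring triangles through the smoothness term E_S [16]."""
    out = np.asarray(verts, dtype=float).copy()
    for idx in h_lines:              # idx: vertex indices of one line
        out[idx, 1] = out[idx, 1].mean()
    for idx in v_lines:
        out[idx, 0] = out[idx, 0].mean()
    return out
```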

Figure 9. The side and rear views of the result in Figure 3(a).


Figure 10. The panels for the captured result shown in Figure 2.

Figure 11. Draping the captured garments on the avatar.

4.6. Generating the Draft and Panels

After we obtain the garment type and the PBSs, we create the panels by supplying them to the parameterized drafting module. We map the one-repeat texture onto the panels. Each garment type carries information on how to position the panels and create the seams between them. After positioning and seaming the panels, we perform physically based clothing simulation [17,18].
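The per-type positioning and seaming information could be stored as simple static data; the following format is hypothetical (the field and edge names are ours, not the paper's), sketched for a shirt.

```python
# Hypothetical per-garment-type assembly data; field and edge names are
# ours, not the paper's. Sketched for a shirt.
SHIRT_ASSEMBLY = {
    "panels": ["front", "back", "sleeve_left", "sleeve_right"],
    "placement": {                 # initial 3D offsets around the body, cm
        "front": (0.0, 0.0, 12.0),
        "back": (0.0, 0.0, -12.0),
    },
    "seams": [                     # pairs of panel edges to be sewn
        ("front.side_left", "back.side_left"),
        ("front.side_right", "back.side_right"),
        ("front.shoulder_left", "back.shoulder_left"),
        ("front.shoulder_right", "back.shoulder_right"),
    ],
}
```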

5. RESULTS

We implemented the proposed garment capture method on a 3.2 GHz Intel Core(TM) i7-960 processor with 8 GB of memory and an Nvidia GeForce GTX 560 Ti video card. We ran the method on the left images of Figure 3; the right images show the results produced with GarmCap. For the physically based static simulation, we set the mass density, stretching stiffness, bending stiffness, and friction coefficient to 0.01 g/cm², 100 kg/s², 0.05 kg·cm²/s², and 0.3, respectively, for the experiments shown in this paper. Running the proposed method took about 3 seconds per garment, excluding the static simulation.

Our experiments included three dresses (Figure 3(a)–(c)), two sweaters (Figure 3(d)–(e)), one shirt (Figure 3(f)), one H-line skirt (Figure 3(g)), one A-line skirt (Figure 3(h)), and two pairs of pants (Figure 3(i)–(j)).

There can be some discrepancies between the captured and real garments. We measured the discrepancies in the corresponding PBSs of the captured and real garments; for the garments tested in this paper, the discrepancy was bounded by 3 cm.

The proposed method reproduces the shoulder strap (Figure 3(b)) and the necklines (Figure 3(a)–(f)) quite well. The method captures loose-fit garments (Figure 3(a)–(h)) as well as normal-fit garments (Figure 3(b)) very successfully. In capturing tight-fit garments, however, GarmCap may not accurately represent the tightness of the garment, because the silhouette analysis cannot tell how much the garment is stretched. Because of this problem, for example, some wrinkles are produced in the captured result of Figure 3(g).

Intrinsically, the proposed method cannot capture the input garment accurately when its draft does not exist in the database. In Figure 3(h), whereas the skirt has pleats at the bottom end, our method produces an A-line skirt, because the pleated skirt is not in the database. In spite of the missing pleats, we note that the results are visually quite similar.


Figure 9 shows the side and rear views of the virtual garment shown in Figure 3(a). Although the method referenced only the frontal image, the result is quite plausible from other views. We attribute this success to the fact that GarmCap is based on pattern-drafting theory.

Figure 10 shows the panels that were automatically created for the captured garment in Figure 2. Figure 11 shows a few results put on the avatar.

6. CONCLUSION

In this work, we proposed GarmCap, a novel method that generates a virtual garment from a single photograph of a real garment. We obtained the insight for the method from the drafting of garments in pattern-making studies. GarmCap abstracts the drafting process into a computer module that takes the garment type and PBSs and produces the draft as output. To identify the garment type, GarmCap matches the photographed garment silhouette against the selections in the database. The method extracts the PBSs based on the distances between the garment-silhouette landmark points. GarmCap also extracts the one-repeat texture, in some limited cases, based on the deformation transfer technique.

The virtual garment captured from the input photograph looks quite similar to the real garment. The method does not require any panel-flattening procedure, which contributes to obtaining realistic results. Although we create the virtual garment based only on the front image, the result is plausible even when viewed from an arbitrary direction.

The proposed method is based on the silhouette of the garment. Therefore, it has difficulty representing non-silhouette details of the garment such as wrinkles, collars, stitches, pleats, and pockets. It would also be challenging for the method to represent complex dresses (including traditional costumes). In the future, we plan to investigate more comprehensive garment capture techniques that can represent these features.

ACKNOWLEDGEMENTS

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (MEST) (No. NRF-2012R1A2A1A01004891), the Brain Korea 21 Plus Project in 2015, and ASRI (Automation and Systems Research Institute at Seoul National University).

REFERENCES

1. Armstrong HJ, Carpenter M, Sweigart M, Randock S, Venecia J. Patternmaking for Fashion Design. Pearson Prentice Hall: Upper Saddle River, NJ, 2006.

2. Turquin E, Cani M-P, Hughes JF. Sketching garments for virtual characters. In Proceedings of the First Eurographics Conference on Sketch-Based Interfaces and Modeling (SBM '04). Eurographics Association: Aire-la-Ville, Switzerland, 2004; 175–182.

3. Decaudin P, Julius D, Wither J, Boissieux L, Sheffer A, Cani M-P. Virtual garments: a fully geometric approach for clothing design. Computer Graphics Forum (Eurographics '06 Proceedings) 2006; 25(3): 625–634.

4. Robson C, Maharik R, Sheffer A, Carr N. Context-aware garment modeling from sketches. Computers & Graphics (SMI 2011) 2011; 35(3): 604–613.

5. Tanie H, Yamane K, Nakamura Y. High marker density motion capture by retroreflective mesh suit. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), 2005; 2884–2889.

6. Scholz V, Stich T, Keckeisen M, Wacker M, Magnor M. Garment motion capture using color-coded patterns. In ACM SIGGRAPH 2005 Sketches (SIGGRAPH '05), Buhler J (ed.). ACM: New York, NY, USA, Article 38, 2005; 439–448.

7. White R, Crane K, Forsyth DA. Capturing and animating occluded cloth. ACM Transactions on Graphics 2007; 26(3): 34–42.

8. Bradley D, Popa T, Sheffer A, Heidrich W, Boubekeur T. Markerless garment capture. ACM Transactions on Graphics 2008; 27(3): 99:1–99:9.

9. Vlasic D, Baran I, Matusik W, Popović J. Articulated mesh animation from multi-view silhouettes. ACM Transactions on Graphics 2008; 27(3): 97:1–97:9.

10. de Aguiar E, Stoll C, Theobalt C, Ahmed N, Seidel H-P, Thrun S. Performance capture from sparse multi-view video. ACM Transactions on Graphics 2008; 27(3): 98:1–98:10.

11. Stoll C, Gall J, de Aguiar E, Thrun S, Theobalt C. Video-based reconstruction of animatable human characters. ACM Transactions on Graphics 2010; 29(6): 139:1–139:10.

12. Popa T, Zhou Q, Bradley D, Kraevoy V, Fu H, Sheffer A, Heidrich W. Wrinkling captured garments using space-time data-driven deformation. Computer Graphics Forum 2009; 28(2): 427–435.

13. Zhou B, Chen X, Fu Q, Guo K, Tan P. Garment modeling from a single image. Computer Graphics Forum 2013; 32(7): 85–91.

14. Jeong M-H, Ko H-S. Draft-space warping: grading of clothes based on parametrized draft. Journal of Visualization and Computer Animation 2013; 24(3-4): 377–386.


15. Rother C, Kolmogorov V, Blake A. "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics 2004; 23(3): 309–314.

16. Sumner RW, Popović J. Deformation transfer for triangle meshes. ACM Transactions on Graphics 2004; 23(3): 399–405.

17. Baraff D, Witkin A. Large steps in cloth simulation. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '98). ACM: New York, NY, USA, 1998; 43–54.

18. Baraff D, Witkin A, Kass M. Untangling cloth. ACM Transactions on Graphics 2003; 22(3): 862–870.

AUTHORS’ BIOGRAPHIES

Moon-Hwan Jeong received his BS degree from the School of Electrical Engineering at Ajou University in 2011 and his MS degree from the School of Electrical Engineering and Computer Science at Seoul National University, Korea, in 2013. Since 2011, he has been in the doctoral course in the School of Electrical Engineering and Computer Science at SNU. His primary research interests are physically based animation, computer vision, and computational mathematics.


Dong-Hoon Han received his BS degree from the Department of Electrical Engineering at POSTECH in 2013. Since 2013, he has been in the graduate course in the School of Electrical Engineering and Computer Science at SNU. His primary research interests are 3D modeling and image processing.

Hyeong-Seok Ko is a professor in the Department of Electrical and Computer Engineering at Seoul National University. He received his BA and MS degrees in Computer Science from Seoul National University in 1981 and 1985, respectively, and his PhD degree in Computer and Information Science from the University of Pennsylvania. He is the director of the Digital Clothing Center at the same university and has been developing physically based techniques for reproducing clothes, hair, deformable solids, and fluids.

