
  • 8/13/2019 Npar12 Pencil

    1/9

    Combining Sketch and Tone for Pencil Drawing Production

Cewu Lu    Li Xu    Jiaya Jia

Department of Computer Science and Engineering

    The Chinese University of Hong Kong

{cwlu, xuli, leojia}@cse.cuhk.edu.hk

    (a) input image (b) line drawing with strokes (c) tonal texture (d) output

Figure 1: Given a natural image (shown in (a)), our method has two main steps to produce the stroke layer and tonal texture, as shown in (b)-(c). They naturally represent shape and shading, respectively. Our final result (d) is an algorithmic combination of these two maps, containing important structure information and mimicking a human drawing style.

    Abstract

We propose a new system to produce pencil drawings from natural images. The results contain various natural strokes and patterns, and are structurally representative. They are accomplished by novelly combining the tone and stroke structures, which complement each other in generating visually constrained results. Prior knowledge on pencil drawing is also incorporated, making the two basic functions robust against noise, strong texture, and significant illumination variation. In light of edge, shadow, and shading information conveyance, our pencil drawing system establishes a style that artists use to understand visual data and draw them. Meanwhile, it lets the results contain rich and well-ordered lines to vividly express the original scene.

    Keywords: pencil drawing, non-photorealistic rendering, tonaldrawing

    1 Introduction

Pencil drawing is one of the most fundamental pictorial languages in visual arts to abstract human perception of natural scenes. It establishes an intimate link to the artist's visual record. Pencil drawing synthesis methods generally fall into two categories: 2D image-based rendering and 3D model-based rendering. The majority of recent research along the line of mimicking artistic drawing styles and producing perceptually expressive results resorts to using 3D models [Sousa and Buchanan 1999a; Lee et al. 2006]. With the popularity of digital cameras and internet sharing, obtaining high-quality pictures is much easier than constructing 3D models of a scene. Consequently, there has been a substantial increase in the demand for rendering pencil drawings from natural images.

In the literature, pencil drawing can be classified into a few styles. Sketch typically refers to a quickly finished work without a lot of detail. Artists often use sketches to depict the global shape and main contours. Hatching, on the other hand, is used to depict tone or shading by drawing dark and parallel strokes in different regions.

While there has been work to produce line drawings [DeCarlo et al. 2003; Judd et al. 2007; Lee et al. 2007; Grabli et al. 2010], hatching [Hertzmann and Zorin 2000; Praun et al. 2001], and a combination of them [Lee et al. 2006], it is still difficult for many methods to produce results comparable to those with 3D model guidance. This is because accurately extracting and manipulating structures, which is almost effortless using 3D models with known boundary and geometry, becomes challenging due to the existence of texture, noise, and illumination variation. For example, many existing approaches simulate strokes based on gradients and edges. However, edge detectors often produce segments mistakenly broken in the middle or containing spurious noise-like structures, unlike the strokes drawn by artists. Further, without surface normal or lighting information, automatic hatching generation becomes difficult. Adding directional hatching patterns using closely placed parallel lines risks laying them across different surfaces and depth layers, causing unnatural drawing effects.

Based on the fact that hatch and tone information is paramount in making automatic pencil drawing visually appealing, as shown in Fig. 1, and on the inherent difficulty of properly making use of it as described above, we propose a novel two-stage system with stroke and tone management and make the following contributions.

Firstly, to capture the essential characteristics of pencil sketch and simulate rapid nib movement in drawing, we put stroke generation


[Figure 2 diagram: the input I yields a gradient map G and a line drawing S (Sec. 3.1), and a tone map J rendered with pencil texture into T (Sec. 3.2); S and T are combined into the output R.]

Figure 2: Overview of our pencil drawing framework.

into a convolution framework, which distinguishes our method from existing approaches [Judd et al. 2007; Grabli et al. 2010]. Secondly, to avoid artifacts caused by hatching, we bring in tonal patterns consisting of dense pencil strokes without dominant directions. Parametric histogram models are proposed to adjust the tone, leveraging the statistics from a set of sketch examples. Finally, an exponential model with global optimization is advocated to perform tone adjustment, notably benefiting rendering in heavily textured regions and object contours. The resulting pencil drawing combines sketchy

strokes and tonal drawings, which well characterize and approximate the common two-step human drawing process [Wang 2002]. All these steps in our framework play fundamental roles in constructing a simple and yet very effective drawing system with a single natural image as input.

Our method can also be directly applied to the luminance channel of a color input to generate color pencil drawings. Effectiveness of the proposed method is borne out by extensive experiments on various types of natural inputs and comparison with other approaches and commercial software.

    2 Related Work

Sizable research has been conducted to approximate artistic styles, such as pen-and-ink illustration [Winkenbach and Salesin 1994; Winkenbach and Salesin 1996; Salisbury et al. 1997], graphite and colored pencil drawing [Takagi et al. 1999], oil painting [Hertzmann 1998; Zeng et al. 2009], and watercolor [Curtis et al. 1997; Bousseau et al. 2006]. We briefly review prior work on pencil sketching, hatching, and line drawing, which are the main components of pencil drawings. Other non-photorealistic rendering methods are reviewed in [Strothotte and Schlechtweg 2002].

Model-based Sketching   Sousa and Buchanan [1999a] simulated pencil drawing by studying its physical properties. Outlines are drawn on every visible edge of the given 3D model, followed by texture mapping onto the polygon to represent shading. Interaction

is required in this process. Lee et al. [2006] detected contours from a 3D surface and perturbed them to simulate human drawing. Pencil textures with directional strokes are generated to express shading. It relies on lighting and material in the 3D models. For line drawing, the majority of methods also based their input on 3D models [DeCarlo et al. 2003; Judd et al. 2007; Lee et al. 2007; Grabli et al. 2010] and described how to extract crease and suggestive contours to depict shape. A comprehensive review of line drawing from 3D models is given in [Rusinkiewicz et al. 2005].

For expression of shape and shading, pencil shading rendering was proposed in [Lake et al. 2000]. Different hatching styles were defined and analyzed in [Hertzmann and Zorin 2000]. Praun et al. [2001] achieved hatching in real time. Visually very compelling results can be yielded as 3D models provide detailed geometry information. It is, however, still unknown how to directly apply these approaches to image-based rendering, where structural edges and illumination are generally unavailable.

Image-based Sketching   Sousa and Buchanan [1999b] developed an interactive system for pencil drawing from images or drawings in other styles. In [Chen et al. 2004], portrait sketching was proposed in the style of a particular artistic tradition or of an artist. Durand et al. [2001] presented an interactive system, which can simulate a class of styles including pencil drawing.

Mao et al. [2001] detected local structure orientation from the input image and incorporated line integral convolution (LIC) to produce the shading effect. Yamamoto et al. [2004] divided a source image into intensity layers in different ranges. The region-based LIC [Chen et al. 2008] alternatively considers superposition of edges, an unsharp-masked image, and texture. Li and Huang [2003] used feature geometric attributes obtained from local-region moments and textures to simulate pencil drawing. Another LIC-based method was introduced in [Gao et al. 2010].

For line extraction, Kang et al. [2007] proposed computing an edge tangent flow (ETF) and used an iterative directional difference-of-Gaussians (DoG) filter. In [Son et al. 2007], line extraction is based


    (a) artist-created pencil sketch (b) close-ups

    Figure 3: Artist-produced pencil sketch.

on likelihood-function estimation with consideration of feature scales and line blurriness. Gaussian scale-space theory was employed in [Orzan et al. 2007] to compute a perceptually meaningful hierarchy of structures. Zhao et al. [2008] adopted mathematical morphology thinning and formed stroke paths from pixels. Level-of-detail of lines is controlled in rendering. Image analogies [Hertzmann et al. 2001] could also produce pencil sketch results from images, but extra training image pairs are required. Finally, Xu et al. [2011] advocated an L0 smoothing filter for suppressing noise as well as enhancing main structures. It is applicable to a few tasks including pencil stroke generation.

Note that previous work in general determines orientation of strokes and hatches based on local structures, which might be unstable for highly textured or noisy regions. Winnemöller et al. [2006] applied line extraction on bilaterally smoothed images to reduce noise. DeCarlo and Santella [2002] adopted region simplification to eliminate superfluous details.

    3 Our Approach

Our image-based pencil sketching approach consists of two main steps, i.e., pencil stroke generation and pencil tone drawing. Their effects complement each other. Specifically, stroke drawing aims at expressing general structures of the scene, while tone drawing focuses more on shapes, shadow, and shading than on the use of lines. The latter procedure is effective for perceptually depicting global illumination and accentuating shading and dark regions. The framework is illustrated in Fig. 2.

    It is noteworthy that there are various styles in pencil drawing. Ourframework generates one that includes fundamental and importantcomponents for natural and perceptually expressive representation.

    3.1 Line Drawing with Strokes

Strokes are the vital elements in line drawing. Artists use strokes with varying thickness, wiggliness, or brightness [Hsu and Lee 1994; Curtis 1998]. An important observation is that artists cannot always draw very long curves without any break. Strokes often end at points of high curvature and at junctions. Consequently, there might be crosses at the junction of two lines in human drawing, caused by rapid hand movement. These are critical evidence of human-drawn strokes. Fig. 3(a) shows an artist-created pencil sketch. In the close-ups (b), broken strokes with cross-like effects exist at junctions.

Based on this fact, unlike previous methods [Lee et al. 2006; Judd et al. 2007; Grabli et al. 2010] that use continuous functions, e.g., sinusoidal functions, to bend strokes, we take another approach and put a

Figure 4: Line drawing with strokes.

set of less lengthy edges to approximate sketchy lines. Intuitively, when drawing strokes at a point, we determine the direction, length, and width in a pixel classification and link process based on a unified convolution framework.

Classification   We compute gradients on the grayscale version of the input, yielding the magnitude

G = ((∂x I)^2 + (∂y I)^2)^(1/2),   (1)

where I is the grayscale input and ∂x and ∂y are gradient operators in the two directions, implemented by forward difference. We note that the gradient maps of natural images are typically noisy and do not contain continuous edges immediately ready for stroke generation.

One issue in generating short lines for sketch is the estimation of the line direction at each pixel. A naive classification to this end uses the gradient direction, which is sensitive to noise and thus unstable. We propose a more robust strategy using local information.

In the first place, we choose eight reference directions at 45 degrees apart and denote the line segments as {L_i}, i ∈ {1, ..., 8}. The response map for a certain direction is computed as

G_i = L_i ∗ G,   (2)

where L_i is a line segment in the i-th direction, treated as a convolution kernel. The length of L_i is set to 1/30 of the image height or width, empirically. ∗ is the convolution operator, which groups gradient magnitudes along direction i to form the filter response map G_i. The classification is performed by selecting the maximum value among the responses in all directions and is written as

C_i(p) = G(p) if argmax_{i'} {G_{i'}(p)} = i; 0 otherwise,   (3)

where p indexes pixels and C_i is the magnitude map for direction i. Fig. 4 shows the C maps. Their relationship with G is expressed as Σ_{i=1}^{8} C_i = G, as illustrated in the top row of Fig. 4. We note this classification strategy is very robust against different types of noise. A pixel can be correctly classified into a group C_i given the support of neighboring gradients and structures. It does not matter whether the gradient of the current pixel is contaminated by noise or not.
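A minimal sketch of this classification step (Eqs. (1)-(3)) can be written with NumPy and SciPy; the function and kernel-builder names are illustrative, not from the authors' implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(size, theta):
    """Binary line segment through the kernel center, normalized to sum 1."""
    k = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-c, c, 4 * size):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        k[y, x] = 1.0
    return k / k.sum()

def classify_directions(I, n_dirs=8):
    """Eqs. (1)-(3): gradient magnitude G, directional responses G_i = L_i * G,
    and per-pixel assignment of G to the direction with the maximal response."""
    # Eq. (1): forward-difference gradients.
    gx = np.diff(I, axis=1, append=I[:, -1:])
    gy = np.diff(I, axis=0, append=I[-1:, :])
    G = np.sqrt(gx ** 2 + gy ** 2)

    # Kernel length ~ 1/30 of the larger image dimension (odd, at least 3).
    size = max(int(max(I.shape) / 30) | 1, 3)
    # A line segment and its 180-degree rotation coincide, so the eight
    # directions are spread over the half circle here.
    kernels = [line_kernel(size, i * np.pi / n_dirs) for i in range(n_dirs)]

    responses = np.stack([convolve(G, k) for k in kernels])   # Eq. (2)
    best = responses.argmax(axis=0)                           # Eq. (3)
    C = [np.where(best == i, G, 0.0) for i in range(n_dirs)]
    return G, C, kernels
```

Because each pixel is assigned to exactly one direction, the maps satisfy Σ_i C_i = G by construction, matching the relationship stated above.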

Line Shaping   Given the map set {C_i}, we generate lines at each pixel, also by convolution. It is expressed as

S = Σ_{i=1}^{8} (L_i ∗ C_i).

Convolution aggregates nearby pixels along direction L_i, which links edge pixels even when they are not connected in the original gradient map. This process is also very robust to noise and other image visual artifacts. The final pencil stroke map S is obtained by


    (a) (b) (c)

Figure 6: Pencil sketch line generation. (a) Input color image. (c) Output S. Close-ups are shown in (b).

    (a) input image (b) gradient map (c) our result

Figure 7: Our method can handle texture very well. It does not produce the noise-like texture patterns contained in the gradient map.

inverting pixel values and mapping them to [0, 1]. This process is demonstrated in Fig. 4. An automatically generated stroke map S is shown in Fig. 6. The close-ups indicate that our method captures well the characteristics of the hand-drawn sketchy lines shown in Fig. 3.
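The shaping step just described can be sketched as follows (a hedged illustration assuming NumPy/SciPy; C and kernels would come from the classification stage, and the normalization used to map into [0, 1] is one plausible choice):

```python
import numpy as np
from scipy.ndimage import convolve

def shape_strokes(C, kernels):
    """S = sum_i (L_i * C_i): aggregate each directional magnitude map along
    its own direction, then invert and map to [0, 1] to get the stroke map."""
    S = sum(convolve(Ci, k) for Ci, k in zip(C, kernels))
    if S.max() > 0:
        S = S / S.max()      # normalize aggregated response to [0, 1]
    return 1.0 - S           # invert: strong responses become dark strokes
```

The aggregation is what extends and darkens line centers; the final inversion turns high gradient response into dark pencil marks on white paper.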

Discussion   Compared with other line drawing methods, our approach contributes by generating strokes with special consideration of edge shape and neighborhood pixel support, capturing the sketch nature.

Using lines to simulate sketch is surprisingly effective. The convolution step extends the ends of two lines at the junction point. Note that only long straight lines in the original edge map would be significantly extended, as directional convolution only aggregates pixels along strictly straight lines.

Pixels at the center of a long line receive pixel values from both sides, which makes the line center darker than the ends, a desirable property in pencil drawing.

Our strategy helps link pixels that are not necessarily connected in the original edge map when nearby pixels are mostly aligned along the straight line, imitating the human line drawing process and resisting noise.

Fig. 5 shows one comparison with the method of Son et al. [2007], where strokes are generated based on local gradients. Our result, shown in (c), demonstrates a different style more like human pencil drawing.

Another notable benefit of the proposed stroke generation is its strong ability to handle texture. Natural images contain many textured surfaces with fine details, such as leaves and grass. Their gradients are usually noisy, as shown in Fig. 7(b). It is difficult to connect these noise-like textures into continuous lines. It is also impractical to find one dominant direction to generate strokes using the method of Son et al. [2007]. Our result in Fig. 7(c), thanks to the classification and link process, contains reliable strokes from the originally noisy gradient field.

    (a) input natural image (b) pencil sketch by an artist

    (c) histogram of (a) (d) histogram of (b)

Figure 8: Tone distributions in a natural image and in a pencil sketch are quite different. Although they are not of the same scene, similar edge structures can be observed.

    3.2 Tone Drawing

Artists also use dense strokes, such as hatching, to emphasize shadow, shading, and dark objects. Besides line strokes, our system is thus also equipped with a new scheme to render pencil texture.

Tone Map Generation   We first determine the tone value for each pixel, leveraging information in the original grayscale input. Previous methods [Li and Huang 2003; Yamamoto et al. 2004] used image tones to generate hatching patterns. We found this process is, in many cases, not optimal because the tone distribution of a grayscale image generally differs significantly from that of a pencil sketch. One example is shown in Fig. 8(a)-(b). The image tone thus should not be directly used, and further manipulation is critically needed.

We propose a parametric model to fit tone distributions of pencil sketch. Unlike the highly variable tones in natural images, a sketch tone histogram usually follows certain patterns. This is because pencil drawings are the result of the interaction of paper and graphite, which mainly consists of two basic tones. For very bright regions, artists do not draw anything, showing the white paper. Heavy strokes, contrarily, are used to accentuate boundaries and highlight dark regions. In between these two tones, mild-tone strokes are produced to enrich the layer information.

Model-based Tone Transfer   With the above findings, we propose a parametric model to represent the target tone distribution, written as

p(v) = (1/Z) Σ_{i=1}^{3} w_i p_i(v),   (4)

where v is the tone value and p(v) gives the probability that a pixel in a pencil drawing has value v. Z is the normalization factor that makes ∫_0^1 p(v) dv = 1. The three components p_i(v) account for the three tonal layers in pencil drawing, and the w_i are weights coarsely corresponding to the number of pixels in each tonal layer. We scale the values into the range [0, 1] to cancel out illumination differences when computing the distributions.

Fig. 9 shows the layers and respective tone distributions. Given an artist-drawn pencil sketch shown in (a), we partition pixels into three layers according to their values (details are given later when


    (a) input image (b) [Son et al. 2007] (c) our result

    Figure 5: Comparison of stroke generation.

discussing parameter learning). They are highlighted in green, orange, and blue in (b). (c) gives the tone distributions of the three layers, in accordance with our observation. Specifically, the bright and dark layers have obvious peaks, while the mild-tone layer does not correspond to a unique peak, due to the nature of contact between black graphite and white paper.

Consequently, a major difference between natural images and pencil drawings is that the latter contain more bright regions, which take the color of the paper. To model the tone distribution, we use the Laplacian distribution with its peak at the brightest value (255 in our experiments), because pixels concentrate at the peak and their number decreases sharply. The small variation is typically caused by the use of erasers or slight illumination variation. We thus define

p1(v) = (1/b) e^{-(1-v)/b} if v ≤ 1; 0 otherwise,   (5)

where b is the scale of the distribution.

Unlike the bright layer, the mild-tone layer does not necessarily peak at a specific gray level. Artists usually use many strokes with different pressure to express depth and different levels of detail. We simply use the uniform distribution and encourage full utilization of different gray levels to enrich the pencil drawing. It is expressed as

p2(v) = 1/(ub − ua) if ua ≤ v ≤ ub; 0 otherwise,   (6)

where ua and ub are two controlling parameters defining the range of the distribution.

Finally, dark strokes emphasize depth change and geometric contours of objects. We model them as

p3(v) = (1/(√(2π) σd)) e^{−(v − μd)^2 / (2 σd^2)},   (7)

where μd is the mean value of the dark strokes and σd is the scale parameter. The variation of pixel values in the dark layer is in general much larger than that in the bright layer.

Parameter Learning   The controlling parameters in Eqs. (5)-(7) actually determine the shape of the tone histogram. For different pencil drawings, they may differ largely. This is also why extracted edges alone do not give a pencil-sketch feeling. In practice, we collect a number of pencil sketches with similar drawing styles to fit the parameters. Examples are in Fig. 10. Using parametric

    (a) (b) (c)

Figure 9: Illustration of the composition of three tone layers and their value distributions, taking an artist-drawn pencil sketch as an example. The green, orange, and blue pixels (and plots) belong to the dark, mild-tone, and bright layers.

Figure 10: Four examples and their corresponding tone distributions.

    models is greatly advantageous in resisting different types of noiseand visual artifacts.

Initially, the grayscale input I is slightly Gaussian smoothed. Parameter estimation then follows, in which the bright and dark tone layers are manually indicated with intensity thresholds and the remaining pixels fall in the mild-tone layer. The number of pixels in each layer decides the weights w_i. For each layer, the parameters are estimated using maximum likelihood estimation (MLE). Denoting the mean and standard deviation of the pixel values in each layer as m and s, the parameters have the closed-form representation

b = (1/N) Σ_{i=1}^{N} |x_i − 1|,   ua = m_m − √3 s_m,   ub = m_m + √3 s_m,   μd = m_d,   σd = s_d,

where x_i is a pixel value and N is the number of pixels in the layer.
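These closed forms are straightforward to compute per layer. A sketch (pixel values in [0, 1], layer masks assumed given; the √3 factors come from matching the mean and standard deviation of a uniform distribution to its endpoints):

```python
import numpy as np

def fit_tone_params(bright, mild, dark):
    """Closed-form MLE/moment estimates for the three layers, with pixel
    values in [0, 1]. Returns the parameters of Eqs. (5)-(7) plus weights."""
    b = np.mean(np.abs(bright - 1.0))                      # Laplacian scale
    mm, sm = mild.mean(), mild.std()
    ua, ub = mm - np.sqrt(3) * sm, mm + np.sqrt(3) * sm    # uniform endpoints
    mu_d, sigma_d = dark.mean(), dark.std()                # Gaussian params
    w = np.array([bright.size, mild.size, dark.size], dtype=float)
    return b, ua, ub, mu_d, sigma_d, w / w.sum()
```

Each estimate uses only first and second moments of its layer, so the fit is cheap and stable even for small example sets.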


w1   w2   w3   b   ua    ub    μd   σd
11   37   52   9   105   225   90   11

Table 1: Learned parameters.

    (a) (b) (c)

Figure 11: Three results with (a) w1 : w2 : w3 = 11 : 37 : 52, (b) w1 : w2 : w3 = 29 : 29 : 42, and (c) w1 : w2 : w3 = 2 : 22 : 76. The corresponding value distributions are plotted in the second row.

The finally estimated parameters are listed in Table 1. Based on the parametric p1, p2, and p3, for each new input image, we adjust the tone maps using simple histogram matching in all three layers and superpose them again. We denote by J the final tone-adjusted image of input I. Note that the weights w1, w2, and w3 in Table 1 can vary to produce results in dark and light styles. One example is shown in Fig. 11.
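The adjustment is ordinary histogram matching against the parametric target. A simplified global version (one target distribution for the whole image rather than per layer; the function name is illustrative):

```python
import numpy as np

def match_histogram(I, target_pdf):
    """Map image values through the source CDF and the inverse target CDF,
    so the output's value distribution approximates target_pdf.
    I: grayscale image in [0, 1]; target_pdf: length-256 histogram."""
    vals = np.clip((I * 255).astype(int), 0, 255).ravel()
    src_cdf = np.cumsum(np.bincount(vals, minlength=256)) / vals.size
    tgt_cdf = np.cumsum(target_pdf) / np.sum(target_pdf)
    mapping = np.clip(np.searchsorted(tgt_cdf, src_cdf), 0, 255) / 255.0
    return mapping[vals].reshape(I.shape)
```

Running each layer's pixels through its own fitted component, as the text describes, amounts to calling this with the layer's sub-histogram instead of the global one.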

Pencil Texture Rendering   Generating suitable pencil textures for images is difficult. Tonal texture refers to pencil patterns without an obvious direction, which reveal only tone information. We utilize the computed tone maps in our framework and learn human-drawn tonal patterns, as illustrated in Fig. 12.

We collect a total of 20 tonal textures as rendering examples; one input image needs only one pattern. In human drawing, tonal pencil texture is generated by repeatedly drawing at the same place. We simulate the process using multiplication of strokes, which results in an exponential combination H(x)^{β(x)} ≈ J(x) (or, in the logarithm domain, β(x) ln H(x) ≈ ln J(x)), corresponding to drawing pattern H β times to approximate the local tone in J. A large β darkens the image more, as shown in Fig. 12. We also require β to be locally smooth. It is thus computed by solving

β* = argmin_β ||β ln H − ln J||_2^2 + λ ||∇β||_2^2,   (8)

where λ is the weight, with value 0.2 in our experiments. Eq. (8) transforms to a standard linear equation, which can be solved using conjugate gradient. The final pencil texture map T is computed through an exponential operation

T = H^{β*}.   (9)
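Stacking pixels into a vector, the optimality condition of Eq. (8) is the linear system (diag(ln H)^2 + λ ∇ᵀ∇) β = ln H ⊙ ln J. A sparse-matrix sketch of this solve, assuming SciPy and not the authors' implementation (the clipping bounds are an assumption to keep logarithms finite):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def render_pencil_texture(H, J, lam=0.2):
    """Solve Eq. (8) for a smooth exponent map beta, then apply Eq. (9):
    T = H ** beta. H: tonal pattern tiled to J's size; J: tone map.
    Both are in (0, 1); values are clipped away from 0 and 1."""
    h, w = J.shape
    logH = np.log(np.clip(H, 1e-4, 1 - 1e-4)).ravel()
    logJ = np.log(np.clip(J, 1e-4, 1 - 1e-4)).ravel()

    def diff_op(m):   # 1-D forward-difference matrix, (m-1) x m
        return sp.diags([-np.ones(m - 1), np.ones(m - 1)], [0, 1],
                        shape=(m - 1, m))

    Dx = sp.kron(sp.eye(h), diff_op(w))      # horizontal gradients
    Dy = sp.kron(diff_op(h), sp.eye(w))      # vertical gradients
    A = sp.diags(logH ** 2) + lam * (Dx.T @ Dx + Dy.T @ Dy)
    beta, _ = cg(A, logH * logJ, maxiter=1000)   # conjugate gradient solve
    return np.clip(H, 1e-4, 1 - 1e-4) ** beta.reshape(h, w)
```

Since A is symmetric positive definite, conjugate gradient is a natural fit, matching the solver named in the text.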

Fig. 13 shows a comparison with the LIC-based method [Yamamoto et al. 2004], which uses directional hatching patterns. Ours, contrarily, has tonal textures.

    3.3 Overall Framework

We combine the pencil stroke map S and tonal texture T by multiplying the stroke and texture values for each pixel to accentuate important

[Figure 12 panels: the tonal pattern rendered with β = 0.2, 0.6, 1.0, 1.4, 1.8.]

Figure 12: Human-drawn tonal pattern H. It does not have a dominant direction. Varying β changes the density in our method.

    (a) input image (b) LIC [2004] (c) ours

    Figure 13: Result comparison with the LIC method.

contours, expressed as

R = S · T.   (10)

Note that the tonal texture is instrumental in making the drawing perceptually acceptable, by adding shading to originally textureless regions. One example is shown in Fig. 1.

    3.4 Color Pencil Drawing

We extend our method to color pencil drawing by taking the generated grayscale pencil sketch R as the brightness layer, i.e., the Y channel in the YUV color space, and re-mapping YUV back to the RGB space. Figs. 1 and 14 show two examples. As our tone adjustment is effective in producing a pencil-drawing-like appearance, the final color drawing also presents proper tones.
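A sketch of this color extension, assuming the BT.601 YUV matrix (the paper only says "YUV", so the exact matrix is an assumption; the function name is illustrative):

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumption; the paper only names "YUV").
_M = np.array([[0.299,     0.587,     0.114],
               [-0.14713, -0.28886,   0.436],
               [0.615,    -0.51499,  -0.10001]])

def color_pencil(rgb, R):
    """Substitute the pencil drawing R for the luminance (Y) channel of the
    color input, then map YUV back to RGB. rgb and R are in [0, 1]."""
    yuv = rgb @ _M.T
    yuv[..., 0] = R                       # replace brightness with sketch R
    return np.clip(yuv @ np.linalg.inv(_M).T, 0.0, 1.0)
```

Keeping the original U and V channels is what preserves the input's hues while the pencil result supplies all the shading.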

    4 Results and Comparison

We compare our results with those produced by several representative image-based pencil sketching approaches and commercial software. Fig. 15 gives the comparison with the method of Sousa and Buchanan [1999b]. The input image (a) and result (b) are provided in the original paper. Our result, shown in (c), presents a different drawing style: not only are the contours highlighted by sketchy

    (a) input image (b) color pencil sketch

    Figure 14: Color pencil drawing example.


    (a) input image (b) [1999b] (c) our method

    Figure 15: Comparison with Sousa and Buchanan [1999b].

    (a) input image (b) S

(c) [Li and Huang 2003] (d) our final R

    Figure 16: Comparison with an LIC-based approach.

    lines, but also the illumination and shadow are well preserved byour tonal map.

We also compare in Fig. 16 our method with an LIC-based approach [Li and Huang 2003]. Our stroke map and the output are respectively shown in (b) and (d). We note that determining hatching direction based on local gradients is not always reliable, considering the ubiquity of highly textured regions in natural images.

The method presented in [Sousa and Buchanan 1999b] can generate results from drawings in other styles. We also apply our system to a pencil drawing produced by the method of [Lee et al. 2006] to generate a result of a different style. We smooth the pencil drawing in (a) using bilateral filtering to get a grayscale image, shown in (b), which is taken as the input of our system. Our final result is shown in (c). The close-ups show differences in strokes and hatching.

Fig. 18 shows a comparison with drawings produced by commercial software. (b) and (c) are results generated by PhototoSketch v3.5

(a) [Lee et al. 2006] (b) BLF result of (a) (c) our result

Figure 17: (a) is a result presented in [Lee et al. 2006]. (b) is the bilateral filtering result of (a). (c) is our result taking (b) as input.

[Photo to Sketch Inc. 2007] and the sketch filter of Adobe Photoshop CS5 [Adobe System Inc. 2011], respectively. Our result is shown in Fig. 18(d), which not only contains more detailed strokes but is visually appealing as well.

Finally, we show more results in Figs. 19 and 20. Currently, the running time of our un-optimized Matlab implementation is about 2s for a 600×600 image on a PC with an Intel i3 CPU.

    5 Concluding Remarks

Pencil drawing from natural images is an inherently difficult problem because not only should structural information be selectively preserved, but the appearance should also approximate that of real drawing with pencils. Thus a general algorithm that works for many types of natural images and can produce visually compelling results is hard to develop. Without 3D models, complex texture and spatially varying illumination further complicate the process. Directly generating strokes and hatching based on local gradients generally does not work well, primarily due to variation of noise, texture, and region boundaries. We address these issues and propose a two-stage system combining both line and tonal drawing. The main contributions include a novel sketchy line generation scheme and a tonal pencil texture rendering step. They are effective and robust.

Limitation and Future Work   As global pencil texture patterns are adopted in tonal image production, our method cannot be directly applied to video sequences. This is because the pencil patterns are nearly static across frames, inconsistent with possibly moving foreground and background layers. Our future work includes extending this framework to video sequences with temporal constraints and implementing the system on mobile devices.

    6 Acknowledgments

The work described in this paper is supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region (Project No. 413110).

    References

Adobe System Inc., 2011. Photoshop CS5. http://www.adobe.com/products/photoshop.html.


    (a) input (b) PhototoSketch v3.5 (c) Photoshop CS5 (d) our result

    Figure 18: Comparison with commercial software.

Bousseau, A., Kaplan, M., Thollot, J., and Sillion, F. X. 2006. Interactive watercolor rendering with temporal coherence and abstraction. In NPAR, 141–149.

Chen, H., Liu, Z., Rose, C., Xu, Y., Shum, H.-Y., and Salesin, D. 2004. Example-based composite sketching of human portraits. In NPAR, 95–153.

Chen, Z., Zhou, J., Gao, X., Li, L., and Liu, J. 2008. A novel method for pencil drawing generation in non-photo-realistic rendering. In PCM, 931–934.

Curtis, C. J., Anderson, S. E., Seims, J. E., Fleischer, K. W., and Salesin, D. 1997. Computer-generated watercolor. In SIGGRAPH, 421–430.

Curtis, C. J. 1998. Loose and sketchy animation. In ACM SIGGRAPH 98 Electronic Art and Animation Catalog.

DeCarlo, D., and Santella, A. 2002. Stylization and abstraction of photographs. ACM Transactions on Graphics 21, 3, 769–776.

DeCarlo, D., Finkelstein, A., Rusinkiewicz, S., and Santella, A. 2003. Suggestive contours for conveying shape. ACM Transactions on Graphics 22, 3, 848–855.

Durand, F., Ostromoukhov, V., Miller, M., Duranleau, F., and Dorsey, J. 2001. Decoupling strokes and high-level attributes for interactive traditional drawing. In Rendering Techniques, 71–82.

Gao, X., Zhou, J., Chen, Z., and Chen, Y. 2010. Automatic generation of pencil sketch for 2d images. In ICASSP, 1018–1021.

Grabli, S., Turquin, E., Durand, F., and Sillion, F. X. 2010. Programmable rendering of line drawing from 3d scenes. ACM Transactions on Graphics 29, 2.

Hertzmann, A., and Zorin, D. 2000. Illustrating smooth surfaces. In SIGGRAPH, 517–526.

Hertzmann, A., Jacobs, C., Oliver, N., Curless, B., and Salesin, D. 2001. Image analogies. In Computer Graphics and Interactive Techniques, ACM, 327–340.

Hertzmann, A. 1998. Painterly rendering with curved brush strokes of multiple sizes. In SIGGRAPH, 453–460.

Hsu, S., and Lee, I. 1994. Drawing and animation using skeletal strokes. In Computer Graphics and Interactive Techniques, 109–118.

Judd, T., Durand, F., and Adelson, E. H. 2007. Apparent ridges for line drawing. ACM Transactions on Graphics 26, 3.

Kang, H., Lee, S., and Chui, C. K. 2007. Coherent line drawing. In NPAR, 43–50.

Lake, A., Marshall, C., Harris, M., and Blackstein, M. 2000. Stylized rendering techniques for scalable real-time 3d animation. In NPAR, 13–20.

Lee, H., Kwon, S., and Lee, S. 2006. Real-time pencil rendering. In NPAR, 37–45.

Lee, Y., Markosian, L., Lee, S., and Hughes, J. F. 2007. Line drawings via abstracted shading. ACM Transactions on Graphics 26, 3.

Li, N., and Huang, Z. 2003. A feature-based pencil drawing method. In GRAPHITE, 135–140.

Mao, X., Nagasaka, Y., and Imamiya, A. 2001. Automatic generation of pencil drawing from 2d images using line integral convolution. In Proc. CAD/Graphics, 240–248.

Orzan, A., Bousseau, A., Barla, P., and Thollot, J. 2007. Structure-preserving manipulation of photographs. In NPAR, 103–110.

Photo to Sketch Inc., 2007. PhototoSketch 3.5. http://download.cnet.com/Photo-to-Sketch/.

Praun, E., Hoppe, H., Webb, M., and Finkelstein, A. 2001. Real-time hatching. In SIGGRAPH, 581.

Rusinkiewicz, S., DeCarlo, D., and Finkelstein, A. 2005. Line drawings from 3d models. In ACM SIGGRAPH 2005 Courses.

Salisbury, M., Wong, M. T., Hughes, J. F., and Salesin, D. 1997. Orientable textures for image-based pen-and-ink illustration. In SIGGRAPH, 401–406.

Son, M., Kang, H., Lee, Y., and Lee, S. 2007. Abstract line drawings from 2d images. In Pacific Conference on Computer Graphics and Applications, 333–342.


    (a) input (b) our result (c) our result

    Figure 19: More grayscale pencil drawing results.

(a) input (b) our grayscale result (c) our color pencil drawing result

    Figure 20: More grayscale and color pencil drawing results.

Sousa, M. C., and Buchanan, J. W. 1999. Computer-generated graphite pencil rendering of 3d polygonal models. Comput. Graph. Forum 18, 3, 195–208.

Sousa, M. C., and Buchanan, J. W. 1999. Observational model of blenders and erasers in computer-generated pencil rendering. In Graphics Interface, 157–166.

Strothotte, T., and Schlechtweg, S. 2002. Non-photorealistic computer graphics: modeling, rendering, and animation. Morgan Kaufmann Publishers Inc.

Takagi, S., Nakajima, M., and Fujishiro, I. 1999. Volumetric modeling of colored pencil drawing. In Pacific Conference on Computer Graphics and Applications, 250–258.

Wang, T. C. 2002. Pencil Sketching. John Wiley & Sons, Inc.

Winkenbach, G., and Salesin, D. 1994. Computer-generated pen-and-ink illustration. In SIGGRAPH, 91–100.

Winkenbach, G., and Salesin, D. 1996. Rendering parametric surfaces in pen and ink. In SIGGRAPH, 469–476.

Winnemöller, H., Olsen, S. C., and Gooch, B. 2006. Real-time video abstraction. ACM Transactions on Graphics 25, 3, 1221–1226.

Xu, L., Lu, C., Xu, Y., and Jia, J. 2011. Image smoothing via L0 gradient minimization. ACM Transactions on Graphics 30, 6.

Yamamoto, S., Mao, X., and Imamiya, A. 2004. Enhanced LIC pencil filter. In CGIV, 251–256.

Zeng, K., Zhao, M., Xiong, C., and Zhu, S. C. 2009. From image parsing to painterly rendering. ACM Transactions on Graphics 29, 1.

Zhao, J., Li, X., and Chong, F. 2008. Abstract line drawings from 2d images based on thinning. In Image and Signal Processing, 466–470.