Lazy Solid Texture Synthesis
Eurographics Symposium on Rendering 2008
Yue Dong, Sylvain Lefebvre, Xin Tong, George Drettakis
We introduce a new algorithm with the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism
◦ Only a thick layer around the surface needs to be synthesized
Synthesize a volume from a set of pre-computed 3D candidates
◦ Carefully select in a pre-process only those candidates forming consistent triples
Abstract
Runs efficiently on the GPU
◦ Generates high resolution solid textures on surfaces within seconds
◦ Memory usage and synthesis time only depend on the output textured surface area
◦ Our method rapidly synthesizes new textures for the surfaces appearing when interactively breaking or cutting objects
Abstract
Solid textures define the texture content directly in 3D
◦ Removes the need for a planar parameterization
◦ Unique feeling that the object has been carved out of a block of matter
Introduction
Implicit Volume
◦ Color = f(x, y, z)
◦ Procedural texturing: Texturing and Modeling: A Procedural Approach, EBERT D., MUSGRAVE K., PEACHEY D., PERLIN K., WORLEY S., Academic Press, 1994
◦ Spectral analysis: Spectral Analysis for Automatic 3D Texture Generation, GHAZANFARPOUR D., DISCHLER J.-M., Computers & Graphics, 1995; Generation of 3D Texture Using Multiple 2D Models Analysis, GHAZANFARPOUR D., DISCHLER J.-M., Computers & Graphics, 1996
Low memory usage
Limited range of materials
Previous Work
Explicit Volume
◦ Color = g[x, y, z]
◦ Histogram matching: Pyramid-Based Texture Analysis/Synthesis, HEEGER D. J., BERGEN J. R., SIGGRAPH, 1995
◦ Stereological technique: Stereological Techniques for Solid Textures, JAGNOW R., DORSEY J., RUSHMEIER H., SIGGRAPH, 2004
◦ Neighborhood matching: Texture Synthesis by Fixed Neighborhood Searching, WEI L.-Y., PhD thesis, Stanford University, 2002; Aura 3D Textures, QIN X., YANG Y.-H., IEEE Transactions on Visualization and Computer Graphics, 2007; Solid Texture Synthesis from 2D Exemplars, KOPF J., FU C.-W., COHEN-OR D., DEUSSEN O., LISCHINSKI D., WONG T.-T., SIGGRAPH, 2007
Good quality
Can synthesize various materials
Takes a long time to compute
Previous Work
Pre-computation
◦ 3D candidates from 2D exemplars
Multi-resolution pyramid synthesis
◦ Upsample
◦ Jitter
◦ Correction
Process Overview
Pixel: 2D / Voxel: 3D
Triple: a set of three 2D coordinates
Crossbar: the set of pixels along which three neighborhoods of size N cross (N = 5)
Terminology
We select candidate triples according to two important properties
◦ A good triple must have matching colors along the crossbar of the three neighborhoods, to provide color consistency
◦ A good triple must have good coherence across all three exemplars, so that it is likely to form coherent patches with other neighboring candidates
3D candidates selection
A suitable candidate should be consistent across the crossbar
◦ Minimize the color difference along the crossbar
◦ Compute the L2 color difference between each pair
◦ The sum of the differences for the three pairs defines the crossbar error CB
Color consistency
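A minimal sketch of the crossbar error in Python. The strip extraction and the pairing of strips between exemplars are illustrative assumptions; the exact crossbar layout follows the three orthogonal neighborhoods of the paper:

```python
import numpy as np

N = 5  # neighborhood size

def strip(E, p, axis):
    """Extract the N-pixel strip of exemplar E centered at pixel
    p = (row, col), horizontal (axis = 0) or vertical (axis = 1)."""
    r, c = p
    h = N // 2
    if axis == 0:
        return E[r, c - h:c + h + 1]
    return E[r - h:r + h + 1, c]

def crossbar_error(Ex, Ey, Ez, triple):
    """Crossbar error CB of a candidate triple: sum over the three exemplar
    pairs of the L2 color difference along the strip the two neighborhoods
    share. Which strip of which exemplar is paired with which is an
    assumption here; it depends on the chosen axis orientations."""
    px, py, pz = triple
    pairs = [
        (strip(Ex, px, 0), strip(Ey, py, 0)),
        (strip(Ey, py, 1), strip(Ez, pz, 0)),
        (strip(Ez, pz, 1), strip(Ex, px, 1)),
    ]
    return sum(float(np.sum((a - b) ** 2)) for a, b in pairs)
```

A triple whose three strips agree perfectly gets CB = 0; candidate selection keeps the triples with the smallest CB.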
In each pixel of each exemplar
◦ Form triples using the pixel itself and two neighborhoods from the other two exemplars
◦ Select the triples producing the smallest crossbar error
To speed up the process
◦ Extract the S most-similar pixel strips from each of the two other exemplars, using the ANN library
◦ Form S² triples, then keep the 100 best
◦ S = 65
Color consistency
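The accelerated search could be sketched as follows, with brute-force nearest neighbors standing in for the ANN library, strips encoded as flat vectors, and summed strip distance as a stand-in for ranking by the full crossbar error (all names illustrative):

```python
import numpy as np

def best_triples(strips_x, strips_y, strips_z, S=2, keep=3):
    """For the pixel of Ex whose strip is strips_x[0], find the S
    most-similar strips in Ey and Ez, form S*S triples, and keep the best
    by summed strip distance.
    strips_*: (num_pixels, strip_len) arrays of flattened color strips."""
    query = strips_x[0]
    dy = np.sum((strips_y - query) ** 2, axis=1)
    dz = np.sum((strips_z - query) ** 2, axis=1)
    near_y = np.argsort(dy)[:S]               # S most-similar pixels in Ey
    near_z = np.argsort(dz)[:S]               # S most-similar pixels in Ez
    triples = [(0, j, k, dy[j] + dz[k]) for j in near_y for k in near_z]
    triples.sort(key=lambda t: t[3])
    return triples[:keep]
```

The paper uses S = 65 and keeps the 100 best triples per pixel.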
Check whether a candidate may form coherent patches in all directions with candidates from neighboring pixels
For each coordinate within a candidate triple
◦ Verify that at least one candidate from a neighboring pixel has a continuous coordinate
Triples of coherent candidates
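The per-coordinate coherence test might look like this, simplified to 1D pixel indices and scalar coordinates (the real test uses 2D coordinates in each exemplar):

```python
def is_coherent(candidates, p, k, delta=1):
    """Coherence test for candidate k of pixel p: each of the triple's
    three coordinates must be continued by at least one candidate of the
    neighboring pixel p + delta, i.e. some neighbor candidate carries that
    coordinate shifted by the same offset.
    candidates: dict mapping pixel index -> list of (cx, cy, cz) triples."""
    triple = candidates[p][k]
    for axis in range(3):
        wanted = triple[axis] + delta   # the coordinate shifted with the pixel
        if not any(c[axis] == wanted for c in candidates[p + delta]):
            return False
    return True
```

Candidates failing the test in any direction are discarded, which is what drives the iterative pruning below.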
[Figure: coherence between candidates of adjacent pixels p and p+1, across exemplars Ex, Ey, Ez]
xC – candidates for Ex
xCk[p] – k-th candidate triple for pixel p in Ex
xCk[p]y – Ey coordinate of the triple xCk[p]
Triples of coherent candidates
Iterate until no pixel has more than 12 candidates
◦ Typically requires 2 iterations
If more candidates remain, select the 12 with the smallest crossbar error
It is possible to have no candidate at all
◦ Rare in practice
Triples of coherent candidates
Candidates are not only useful for neighborhood matching, but also provide a very good initialization for the synthesis process
For each pixel
◦ One 2D neighborhood is in the plane of the exemplar
◦ The two others are orthogonal to it and intersect along a line of N voxels (the neighborhood size)
Candidate Slab
To initialize synthesis, we create such a slab using the best (first) candidate at each pixel
Using the slab directly as a 3D exemplar would be very limiting
◦ This would ignore all other candidates
◦ We use the slab only for initialization
Candidate Slab
Extends ‘Parallel Controllable Texture Synthesis’ [SIGGRAPH 2005]
Same overall structure
◦ Upsample
◦ Jitter
◦ Correction
Parallel Solid Synthesis
Contrary to the original scheme, we perturb the result through jitter only once, after initialization
◦ If finer control is desired, jitter could be explicitly added after each upsampling step
Parallel Solid Synthesis
To reduce synthesis time, multi-resolution synthesis algorithms can start from an intermediate level of the image pyramid
A good initialization is key to achieving high-quality synthesis
We simply choose one of the candidate slabs and tile it in the volume
◦ Three levels above the finest (maximum level L − 3)
◦ Using the candidate slab from the corresponding level
Initialization
Candidate Slab
[Figure: random initialization vs. slab initialization]
To explicitly introduce variations and generate variety in the result, we perturb the initial result by applying a continuous deformation, similar to a random warp
Jitter
J – jittered volume
v – voxel coordinate
ci – random point in space
di – normalized random direction
G = 200, Ai = 0.1 ~ 0.3, σi = 0.01 ~ 0.05
Jitter
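These notes do not reproduce the jitter formula itself; a plausible sketch, assuming each of the G kernels displaces a voxel along its direction di with a Gaussian falloff of width σi around ci, and coupling larger amplitudes Ai to smaller σi as described on the next slide:

```python
import numpy as np

rng = np.random.default_rng(0)
G = 200                                   # number of random displacement kernels
c = rng.random((G, 3))                    # random points ci in [0,1)^3
d = rng.normal(size=(G, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)   # normalized random directions di
sigma = rng.uniform(0.01, 0.05, G)        # kernel widths
A = 0.1 + 0.2 * (0.05 - sigma) / 0.04     # larger amplitude for smaller kernels

def jitter(v):
    """Warp a voxel coordinate v by a smooth random deformation: each
    kernel pushes v along d[i], weighted by a Gaussian falloff around c[i]."""
    r2 = np.sum((v - c) ** 2, axis=1)     # squared distance to each ci
    w = A * np.exp(-r2 / sigma ** 2)      # per-kernel weight at v
    return v + (w[:, None] * d).sum(axis=0)
```

Scaling all Ai by a single user factor gives the direct control over jitter magnitude mentioned below.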
It is important for Ai to have larger magnitude with smaller σi
◦ Adds stronger perturbation at small scales, while adding subtle distortions at coarser scales
◦ Small-scale distortions are corrected by synthesis, introducing variety
The overall magnitude of the jitter is directly controllable by the user
Jitter
Each of the eight child volume cells inherits three coordinates from its parent, one for each direction
Upsample
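A sketch of the upsampling step, assuming axis-aligned exemplar planes and PCTS-style coordinate inheritance (child coordinate = 2 × parent coordinate + projected child offset); the projection matrices and axis conventions are assumptions:

```python
import numpy as np

# Projection of a 3D child offset into each exemplar's 2D plane
# (assumed alignment: Ex spans (y, z), Ey spans (x, z), Ez spans (x, y)).
P = np.array([
    [[0, 1, 0], [0, 0, 1]],   # Ex
    [[1, 0, 0], [0, 0, 1]],   # Ey
    [[1, 0, 0], [0, 1, 0]],   # Ez
])

def upsample(coords, P):
    """Each of the eight children of a parent cell inherits the parent's
    three 2D coordinates, scaled to the finer level and offset by the
    child's position within the parent (one 2D offset per direction).
    coords: (D, D, D, 3, 2) integer array of per-voxel triples."""
    D = coords.shape[0]
    out = np.empty((2 * D, 2 * D, 2 * D, 3, 2), dtype=coords.dtype)
    for dz in range(2):
        for dy in range(2):
            for dx in range(2):
                delta = np.array([dx, dy, dz])   # child offset in the parent
                for k in range(3):
                    off = P[k] @ delta           # 2D offset in exemplar k
                    out[dz::2, dy::2, dx::2, k] = 2 * coords[..., k, :] + off
    return out
```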
Performed on all synthesized voxels simul-taneously, in parallel
We compute a color by averaging the corre-sponding three colors from the exemplars
For each voxel, we visit each of its direct neighbors and use the stored coordinate triples to gather the candidate sets
Correction
Px – 3 × 2 matrix transforming a 2D offset from Ex into volume space
Candidate Set
Search for the best matching candidate, by the distance between the voxel neighborhood and the 3D candidate
Distance is measured by the L2 norm on color differences
◦ Can use PCA projection to speed up the process
Replace the triple with the best matching candidate
◦ Triples have been pre-computed and optimized to guarantee that the color disparity between the three colors in each voxel is low
Two correction passes per level
◦ Using the sub-pass mechanism of PCTS
◦ 8 sub-passes per pass
Correction
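The core of one correction step, reduced to the best-match search (the gathering of candidate sets from neighbors and the PCA projection are omitted here):

```python
import numpy as np

def correct_voxel(neighborhood, candidates):
    """Among the candidate neighborhoods gathered from the voxel's direct
    neighbors, return the index of the one with the smallest L2 color
    distance to the currently synthesized neighborhood.
    neighborhood: flat color vector of the voxel's three crossing 2D
    neighborhoods; candidates: (K, len(neighborhood)) array (possibly
    PCA-projected to fewer dimensions)."""
    d = np.sum((candidates - neighborhood) ** 2, axis=1)
    return int(np.argmin(d))
```

The winning candidate's coordinate triple then replaces the voxel's triple; since triples were pre-selected for crossbar consistency, the three colors at the voxel stay in agreement.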
We gather 12 candidates from each of the 3³ = 27 direct neighbors, for a total of 324 candidates per voxel
◦ Too many candidates
Instead, search for the best matching 2D candidates in each of the three directions, then gather the 3D candidates only from these three best matching pixels
◦ Still a lot
◦ In practice we keep 4 2D and 12 3D candidates per exemplar pixel at coarse levels: 27 × 4 = 108 2D candidates, 3 × 12 = 36 3D candidates
◦ 2 2D and 4 3D candidates at the finest level
Correction
Determine the entire dependency chain throughout the volume pyramid from a requested set of voxels, so as to synthesize the smallest number of voxels
◦ Compute a synthesis mask
Mask_{l,p} ⊗ NeighborhoodShape – dilation of the mask by the shape of the neighborhoods
Lazy Subset Synthesis
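The mask dilation could be sketched as a separable box dilation (note that np.roll wraps at the borders, which is only correct for a periodic volume; clamping would be needed otherwise):

```python
import numpy as np

def dilate_mask(mask, radius):
    """Dilate a boolean synthesis mask by a cubic neighborhood of half-size
    `radius` (N = 5 gives radius = 2): a voxel must be synthesized if the
    neighborhood of any requested voxel overlaps it. Separable dilation,
    one axis at a time."""
    out = mask.copy()
    for axis in range(3):
        acc = out.copy()
        for shift in range(1, radius + 1):
            acc |= np.roll(out, shift, axis=axis)
            acc |= np.roll(out, -shift, axis=axis)
        out = acc
    return out
```

Applying this per level and per pass, from the requested voxels down the pyramid, yields the synthesis masks and hence the dependency chain of the next slide.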
To compute a single voxel, with N = 5, 2 passes and synthesis of the 3 last levels, our scheme requires a dependency chain of 6778 voxels
◦ The size of the dependency chain grows quadratically with the number of passes
Lazy Subset Synthesis
Implemented in software, using the GPU to accelerate the actual synthesis
Intel Core2 6400 (2.13GHz) CPU and an NVIDIA GeForce 8800 Ultra
We sometimes add a feature distance term
Implementation and Results
Most results in the paper are computed from a single example image repeated three times
◦ Pre-computed candidates may be shared depending on the orientation chosen for the image
◦ Typically 7 seconds for 64² exemplars
◦ 25 to 35 seconds for 128² exemplars
◦ Includes building the exemplar pyramids, computing the PCA bases and building the candidate sets
◦ 231 KB of memory required for a 64² exemplar
Candidate pre-computation
Implemented in fragment shaders, using the OpenGL Shading Language
◦ Unfold volumes into tiled 2D textures, using three 2-channel 16-bit render targets to store the synthesized triples
◦ Pre-compute and reduce the dimensionality of all candidate 3-neighborhoods using PCA, keeping between 8 and 12 dimensions; keep more terms at coarser levels since less variance is captured by the first dimensions
◦ Quantize the neighborhoods to 8 bits to reduce bandwidth; stored in RGBA 8-bit textures
GPU implementation
In order to minimize memory consumption, we perform synthesis into a TileTree data structure
◦ LEFEBVRE S., DACHSBACHER C., In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2007
GPU implementation
When interactively cutting an object, synthesis occurs only once for the newly appearing surfaces
◦ The TileTree cannot be updated interactively
◦ We store the result in a 2D texture map for display
Our implementation only allows planar cuts
◦ New surfaces are planar and are trivially parameterized onto the 2D texture synthesized when the cut occurs
GPU implementation
Full volume synthesis and comparisons
7.22 seconds for synthesizing the 64³ volume from a 64² exemplar
◦ 7 seconds for pre-computation and 220 milliseconds for synthesis
◦ Memory requirement during synthesis is 3.5 MB
28.7 seconds for synthesizing the 128³ volume from a 128² exemplar
◦ 27 seconds for pre-computation and 1.7 seconds for synthesis
◦ ‘Solid Texture Synthesis from 2D Exemplars’ [SIGGRAPH 2007] takes 10 to 90 minutes
Solid synthesis on surfaces
4.1 seconds (dragon) to 17 seconds (complex structure), excluding pre-computation
Storage of the texture data requires between 17.1 MB (statue) and 54 MB (complex structure)
◦ The equivalent volume resolution is 1024³, which would require 3 GB
Slower than state-of-the-art pure surface texture synthesis approaches
◦ But inherits all the properties of solid texturing
On-demand synthesis when cutting or breaking objects (Fig. 10)
◦ Resolution of 256³
◦ Initially requires 1.3 MB
◦ The average time for synthesizing a 256² texture for a new cut is 8 ms
◦ Synthesizing a 256² slice of texture content requires 14.4 MB, due to the padding necessary to ensure spatial determinism
Comparison with a simple tiling
Comparison with a method using standard 2D candidates
We also implemented our synthesis algorithm using only standard 2D candidates
◦ Takes roughly twice the number of iterations to obtain a result of equivalent visual quality
◦ Due to the increased number of iterations, the size of the dependency chain for computing a single voxel grows from 6778 voxels with 3D candidates to 76812 voxels with 2D candidates: a factor of 11.3 in both memory usage and speed
Limitations
A new algorithm for solid synthesis
◦ With the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism
◦ Synthesize a volume from a set of pre-computed 3D candidates
◦ GPU implementation is fast enough to provide on demand synthesis when interactively cutting or breaking objects
Conclusion