Direct-to-Indirect Acoustic Radiance Transfergamma-web.iacs.umd.edu/Sound/diffuse/2010-tvcg.pdf · 2019-01-25 · 1 Direct-to-Indirect Acoustic Radiance Transfer Lakulish Antani,



Direct-to-Indirect Acoustic Radiance Transfer

Lakulish Antani, Anish Chandak, Micah Taylor and Dinesh Manocha

Abstract—We present an efficient algorithm for simulating diffuse reflections of sound in a static scene. Our approach is built on recent advances in precomputed light transport techniques for visual rendering and uses them to develop an improved acoustic radiance transfer technique. We precompute a direct-to-indirect acoustic transfer operator for a scene, and use it to map direct sound incident on the surfaces of the scene to multi-bounce diffuse indirect sound, which is gathered at the listener to compute the final impulse response. Our algorithm decouples the transfer operator from the source position so we can efficiently update the acoustic response at the listener when the source moves. We highlight its performance on various benchmarks and observe significant speedups over prior methods based on acoustic radiance transfer.

Index Terms—sound propagation, radiosity, virtual reality, precomputed transport


1 INTRODUCTION

Sound rendering can augment visual rendering and provide an enhanced spatial sense of presence. Some of the driving applications of sound rendering include video games, architectural acoustics and VR simulations.

The modeling of sound propagation effects needs to account for different wave propagation phenomena such as specular reflections, diffuse reflections, edge diffraction and interference. In this paper, we focus on modeling diffuse reflections. Many objective [1], [2] and perceptual [3] studies have demonstrated the importance of diffuse reflections in sound propagation. Further, it is computationally challenging to model high orders of diffuse reflection. Hence, modeling diffuse reflections for sound propagation is an active area of interest in many interactive sound rendering applications.

Sound propagation algorithms can be broadly classified into wave-based and geometric methods. Wave-based methods numerically solve the acoustic wave equation. However, their complexity is proportional to the volume of the scene and the fourth power of the maximum frequency of sound

• L. Antani, A. Chandak, M. Taylor and D. Manocha are with the Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599. E-mail: {lakulish,achandak,taylormt,dm}@cs.unc.edu

simulated; therefore they can be very slow for large acoustic spaces or high-frequency sound sources. Geometric methods approximate sound waves by rays. Two standard methods used to simulate diffuse sound reflections are based on ray (or volume) tracing and radiance transfer. Our approach is motivated by recent developments in global illumination based on precomputed light transport algorithms [4], [5], [6]. Specifically, our work is based on direct-to-indirect transfer algorithms for visual rendering, which map direct light incident on the surfaces of a scene to indirect light on the surfaces of the scene after multiple bounces.

Main Results We present a new algorithm for modeling diffuse reflections of sound based on the direct-to-indirect transfer approach. The algorithm computes an acoustic transfer operator in matrix form which is decoupled from both the source and the listener positions, and can efficiently update the acoustic response at the listener whenever the source moves.

The algorithm approximates the transfer matrix using the singular value decomposition (SVD) to perform higher-order diffuse reflections. We show that this approximation reduces the memory requirements and increases the performance of our algorithm.

We highlight the performance of our algorithm on various models. In practice, it is much faster than prior methods based on radiance transfer. To the best of our knowledge, it is the first approach that can perform high orders of diffuse reflections in static scenes with moving sources and listeners at almost interactive rates.

The rest of this paper is organized as follows. We briefly survey related work in Section 2, and discuss mathematical prerequisites in Section 3. We present our algorithm in Section 4 and discuss implementation details in Section 5. We present experimental results in Section 6 and conclude in Section 7.

2 RELATED WORK

Wave-based Acoustics The propagation of sound in a medium is described by the acoustic wave equation, a second-order partial differential equation [7]. Several techniques are known for numerically solving the wave equation [8], [9] and accurately modeling sound propagation in a scene. Diffuse reflections can be modeled by performing the simulation on a grid fine enough to capture the detailed “roughness” of the surfaces that results in acoustic wave scattering [10]. However, despite recent advances [9], these methods can be rather slow and are mainly limited to scenes with static sources. Precomputation-based methods have recently been developed [11] that use a numerical wave equation solver to compute the acoustic response of a scene from several sampled source positions; at run-time these responses are interpolated given the actual position of a moving source. These methods are fast, but require large amounts of precomputed data.

Geometric Acoustics Most interactive systems model sound waves in terms of rays or 3D volumes. These geometric acoustics techniques cannot accurately solve the wave equation, and cannot easily model all kinds of propagation effects, but allow efficient simulation of early reflections. However, geometric techniques have trouble handling some acoustic phenomena such as finite-size diffracting edges, or absorbers with complex boundary conditions.

Methods based on ray tracing [12], [13] can model both diffuse and specular reflections of sound. Since early specular reflections provide the listener with important directional cues, specialized techniques have been developed for modeling specular reflections, such as volume tracing [14], [15] and the image source method [16], [17]. For static scenes, which frequently arise in architectural acoustics and virtual environments, algorithms based on acoustic radiosity [18], [19], [20] or radiance transfer methods can be used to model reflections from surfaces with arbitrary bidirectional reflectance distribution functions (BRDFs) [21], [22]. Many techniques have also been designed to model edge diffraction [23], [24], [25].

Precomputed Light Transport Radiosity [26] is the classic precomputed light transport algorithm. However, it computes a full solution that has to be recomputed whenever the light source moves. In contrast, precomputed radiance transfer (PRT) algorithms decouple light transport effects from the light source configuration by computing a linear operator that defines how a variable light source configuration affects the radiances at surface sample points. PRT techniques can support both distant [4], [27] and local [28] source configurations.

Direct-to-indirect transfer algorithms [5], [6] are one class of precomputed light transport algorithms. These algorithms compute linear operators which map direct light incident on the surface samples to multi-bounce indirect light at the samples. They are designed to handle diffuse reflections, and some of them can also support limited glossy reflections. Our approach is based on applying these ideas to sound propagation.

3 PRELIMINARIES

This section briefly describes the mathematical background on which our algorithm is based.

3.1 Sound Rendering vs. Visual Rendering

Light transport simulation is concerned with the steady-state values of radiance over the surface of the scene, since light travels fast enough (3 × 10^8 m/s) that transient radiance values are not observed and can be ignored. However, the speed of sound in air is much slower (340 m/s), and hence it is important to compute time-varying radiances over the surface.


Fig. 1. Parts of a typical impulse response [30] (absolute values shown). The early response is dominated by specular reflections and diffraction; the late reverberation is dominated by diffuse reflections [29].

Furthermore, sound waves have much larger wavelengths than light waves, and are comparable in size to obstacles in typical architectural and game scenes. Therefore, diffraction plays an important role in sound propagation, and it must be modeled in order to generate plausible sounds.

The computation of sound propagation paths takes into account the knowledge of sound sources, listener locations, the 3D model of the environment, material absorption and scattering properties, and air absorption. Given the signal emitted by a sound source (i.e., a time-varying pressure wave), the signal heard by a listener (after modeling reflections, diffractions and interference) is typically computed using impulse responses (IRs). An IR is the signal received at the listener if the source emits a unit impulse at t = 0. Since room acoustics are modeled as a linear time-invariant system [29], the response at the listener to an arbitrary source sound can be computed by convolving the source sound with the IR.
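Because room acoustics are treated as a linear time-invariant system, the auralization step reduces to a single convolution with the IR. A minimal numpy sketch (the signal and IR values here are toy data, not the paper's):

```python
import numpy as np

# Hypothetical 48 kHz setup: a dry source frame and a toy impulse response
# consisting of the direct arrival plus one attenuated, delayed reflection.
fs = 48000
source = np.zeros(fs // 10)          # 100 ms frame
source[0] = 1.0                      # unit impulse as the "dry" sound
ir = np.zeros(fs // 10)
ir[0] = 1.0                          # direct sound at t = 0
ir[480] = 0.5                        # reflection: 10 ms later, attenuated

# LTI assumption: the received signal is source (*) IR.
received = np.convolve(source, ir)

# Convolving a unit impulse with the IR reproduces the IR itself.
assert np.allclose(received[:len(ir)], ir)
```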

IRs can be divided into three parts [30]: (a) direct, sound arriving directly from the source; (b) early response, sound reaching the listener soon after the direct sound, via a small number of reflections or diffractions; and (c) late reverberation, the gradual decay of amplitude after the early response (see Figure 1). Late reverberation gives an impression of the size of an acoustic space, and is mostly composed of diffuse reflections [29]; therefore, in this paper we focus on computing higher-order diffuse reflections. Our approach can be used in tandem with existing approaches for modeling specular reflections and edge diffraction.

3.2 Impulse Response Representation

The IR at a point is a function of time. Attenuation and delay can be applied using a unified formulation [22] by representing our IRs in Fourier space. For a continuous function f : [0, T] → R, the Fourier transform projects f into a sinusoidal basis (with basis vectors of the form e^{ιω_i t}). For discrete signals, we compute the Discrete Fourier Transform (DFT) using the Fast Fourier Transform (FFT) algorithm. The signal is discretized into N time-domain samples, where the value of N is chosen based on the desired audio sampling frequency and the length of the IR modeled (which could be tuned based on the expected reverberation time of a room).

Since the Fourier transform is linear, attenuations and accumulation of IRs can be performed easily (n denotes a discrete sample index):

F(a f_1(n) + b f_2(n)) = a F(f_1(n)) + b F(f_2(n)).  (1)

Unlike in the time domain, in the frequency domain delays can also be applied using a scale factor, since the Fourier basis vectors are eigenvectors of linear time-invariant operators:

F(f(n − ∆n)) = e^{−ιω∆n} F(f(n)).  (2)

Note that care must be taken to ensure that the delays align on time-domain sample boundaries, otherwise the inverse Fourier transform will contain non-zero imaginary parts. An alternative solution to this issue would be to use fractional delays [31]; we choose to address this in future work in the interests of simplicity of implementation.
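Equation 2 can be checked numerically. With numpy's DFT convention, delaying a discrete signal by an integer ∆n samples corresponds to multiplying coefficient k by e^{−2πιk∆n/N}; integer (sample-aligned) delays keep the inverse transform real. A small sketch (values illustrative):

```python
import numpy as np

N = 1024
dn = 37                               # integer delay, aligned to sample boundaries
x = np.zeros(N)
x[10] = 1.0                           # impulse at sample 10

# Apply the delay as a per-coefficient phase factor in the frequency domain.
X = np.fft.fft(x)
k = np.arange(N)
X_delayed = X * np.exp(-2j * np.pi * k * dn / N)
x_delayed = np.fft.ifft(X_delayed)

# Sample-aligned delays leave the inverse transform real (up to rounding) ...
assert np.max(np.abs(x_delayed.imag)) < 1e-9
# ... and match a circular shift in the time domain.
assert np.allclose(x_delayed.real, np.roll(x, dn))
```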

A unit impulse emitted by the source at time t = 0 has all Fourier coefficients set to 1. Computing the IR using the above expressions for delay and attenuation results in a frequency-domain signal. Computing the inverse Fourier transform of this signal using the frequency-domain replication method described by Siltanen et al. [22] yields a periodic function which is approximately equal to the time-domain IR at the listener. Note that this method does not compute the steady-state acoustic response, but the time-varying impulse response. The key to this is the frequency-domain delay equations described above.

3.3 Acoustic Rendering Equation

The propagation of sound in a scene can be modeled using an extension of the standard graphics rendering equation [32], called the acoustic rendering equation [21]:

L(x, Ω) = L0(x, Ω) + ∫_S R(x, x′, Ω) L(x′, (x − x′)/|x − x′|) dx′,  (3)

where L is the total outgoing radiance, L0 is the emitted radiance and R is the reflection kernel, which describes how radiance at point x′ influences radiance at point x. Ω is the exitant radiance direction at x; the incident radiance direction at x is implicit in the specification of x′:

R(x, x′, Ω) = ρ(x, x′, Ω) G(x, x′) V(x, x′) P(x, x′).  (4)

Here, ρ is the BRDF of the surface at x, G is the form factor between x and x′, V is the point-to-point visibility function, and P is a propagation term [21] that accounts for propagation delays (as per Equation 2).
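As an illustration of Equation 4, the following sketch evaluates one kernel entry for a Lambertian surface at a single frequency. The function name, the cosine-over-πr² form-factor normalization, and the constant speed of sound are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def kernel_entry(x, nx, xp, nxp, rho, omega, visible=True, c=343.0):
    """One sample-to-sample entry of the reflection kernel
    R = rho * G * V * P (Equation 4), assuming a Lambertian BRDF.
    Normalization and names are illustrative assumptions."""
    d = xp - x
    r = np.linalg.norm(d)
    d = d / r
    # G: point-to-point diffuse form factor (cosine terms over pi * r^2).
    G = max(np.dot(nx, d), 0.0) * max(np.dot(nxp, -d), 0.0) / (np.pi * r * r)
    V = 1.0 if visible else 0.0
    # P: the propagation delay r/c becomes a phase factor at frequency omega.
    P = np.exp(-1j * omega * (r / c))
    return rho * G * V * P

# Two parallel patches 1 m apart, facing each other.
val = kernel_entry(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]),
                   rho=0.5, omega=2 * np.pi * 1000.0)
assert abs(abs(val) - 0.5 / np.pi) < 1e-9   # |P| = 1, G = 1/pi here
```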

The radiances in Equation 3 are IRs; the time variable t is hidden for the sake of brevity. This added dimension of time complicates the storage and processing requirements of algorithms based on the acoustic rendering equation.

4 ALGORITHM

Our algorithm provides two main improvements over the state-of-the-art acoustic radiance transfer algorithms: (a) we decouple the source position from the precomputed data by computing an acoustic transfer operator, as opposed to simply precomputing the IRs at surface samples due to a sound source as per the method of Siltanen et al. [22]; and (b) we use the SVD to compress the transfer operator and quickly compute higher-order reflections. The rest of this section details how our algorithm achieves these improvements over the state-of-the-art.

Our overall approach is as follows (see Figure 2 and Algorithms 1 and 2):

• Preprocessing. We sample the surface of the scene and compute a transfer operator which models one or more orders of diffuse reflections of sound among the surface samples.

• Run-time. First, we shoot rays from the source to determine the direct IR at each surface sample. Next, we apply the transfer operator to the direct response to obtain the indirect response. Finally, we shoot rays from the listener and gather the direct and indirect responses from each surface sample hit by a ray. These are added to obtain the final IR at the listener.

Algorithm 1 Preprocessing
  P ← set of samples on scene surface
  p_i denotes the ith element of P
  T ← 0
  for all i ∈ [0, |P| − 1] do
    for all l ∈ [0, N_rays] do
      r ← random path traced from p_i
      p_j ← final sample hit by r
      T_ij += IR contribution along r
    end for
  end for

Algorithm 2 Run-time
  l_0 ← direct IR from source at each sample
  l_n ← T · l_0
  IR ← gather from (l_0 + l_n)
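Algorithms 1 and 2 can be sketched in a few lines of numpy. Here the path tracer is stubbed out with random hits and random complex contributions, and only a single Fourier coefficient is shown; the toy scene sizes and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 8          # number of surface samples (tiny toy scene)
n_rays = 64    # rays traced per sample

# --- Preprocessing (Algorithm 1): build the one-bounce operator T.
# The real algorithm traces random paths from each sample p_i and adds the
# delayed/attenuated contribution to T[i, j], where p_j is the sample the
# path lands on. The tracer is stubbed here with random hits/contributions.
T = np.zeros((p, p), dtype=complex)
for i in range(p):
    for _ in range(n_rays):
        j = rng.integers(p)                                   # stub: sample hit
        contrib = rng.random() * np.exp(-2j * np.pi * rng.random())
        T[i, j] += contrib / n_rays

# --- Run-time (Algorithm 2): direct IR -> indirect IR -> gather.
l0 = rng.random(p) + 0j        # direct response at each sample (source rays)
ln = T @ l0                    # one bounce of indirect response
ir_at_listener = np.sum(l0 + ln)   # stub gather: sum over all hit samples

# Moving the source only changes l0; the precomputed T is reused unchanged.
l0_moved = rng.random(p) + 0j
ln_moved = T @ l0_moved
```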

4.1 Acoustic Transfer Operator

The acoustic transfer operator is expressed over a set of p samples chosen over the surface of the scene. The transfer operator is computed in terms of the responses at all surface samples to impulses emitted from every other surface sample. We use Fourier coefficients to represent the sample-to-sample IRs. Let there be f Fourier coefficients per surface sample. All subsequent computations are performed on each Fourier coefficient independently.

For each frequency ω_m, we define the acoustic radiance vector l(ω_m), which contains p elements that represent the mth Fourier coefficients of the IRs at each surface sample.


Fig. 2. Overview of our algorithm. In a precomputation step, we sample the surfaces of the scene, and compute a one-bounce transfer operator for these samples (T). We then use the SVD to compute the modes of the transfer operator. At runtime, we shoot rays from the source (which may move freely) and compute direct IRs at the surface samples. We then apply the transfer operator (with a user-specified number of modes retained) repeatedly to quickly obtain the multi-bounce indirect IRs at the surface samples. We then compute the final IR at the listener position in a final gathering step.

For the sake of brevity, we shall omit the parameter ω_m from the equations in the rest of the paper when it is obvious from the context.

The Neumann series expansion of Equation 3 expressed in matrix form is:

l_{n+1}(ω_m) = T(ω_m) l_n(ω_m),  (5)

where l_n(ω_m) contains the mth Fourier coefficients of the IRs at each surface sample after n reflections. The transfer matrix T(ω_m) models the effect of one diffuse reflection. The (i, j)th element of T(ω_m) describes how the mth Fourier coefficient at surface sample j affects the mth Fourier coefficient at surface sample i after one diffuse reflection. The entries in row i of T are computed by tracing paths sampled over the hemisphere at surface sample i; the delays and attenuations along each path terminating at any other surface sample j are added to the entry T_ij [22]. We can compute a multi-bounce transfer operator with n orders of reflection as the matrix sum T_n = T + T^2 + · · · + T^n.
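The equivalence between iterating Equation 5 and applying the precomputed multi-bounce sum T_n = T + T^2 + · · · + T^n once can be verified on a toy operator (an illustrative sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 16, 4
T = rng.random((p, p)) * 0.05        # toy one-bounce operator (contractive)
l0 = rng.random(p)                   # direct response at each sample

# Iterative form: l_{k+1} = T l_k, accumulating bounces 1..n.
l = l0.copy()
indirect_iter = np.zeros(p)
for _ in range(n):
    l = T @ l
    indirect_iter += l

# Precomputed multi-bounce operator T_n = T + T^2 + ... + T^n.
Tn = np.zeros((p, p))
Tk = np.eye(p)
for _ in range(n):
    Tk = Tk @ T
    Tn += Tk

# One application of T_n reproduces the accumulated iterative result.
assert np.allclose(Tn @ l0, indirect_iter)
```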

Existing acoustic radiance transfer algorithms [22] implicitly apply the transfer operator by performing path tracing from the source and precomputing the IR at each surface sample after several orders of reflection. This approach has the disadvantage of having to repeat the entire process if the source moves. We eliminate this disadvantage by precomputing T, and multiplying it with the direct response of the source at run-time. This decoupling of the source position from the precomputed data allows rapid updates of the IR at the listener whenever the source moves.

4.2 Transfer Operator Compression

To apply the transfer operator once, the matrix-vector multiplication in Equation 5 needs to be performed once per Fourier coefficient at run-time. However, even for scenes of moderate complexity, the number of surface samples, p, can be very large. Since T is a p × p matrix and l_n is a p × 1 vector, this step takes O(p^2) time per Fourier coefficient per order of reflection, which can quickly become quite expensive. We use the SVD to compute a rank-k approximation of T. This allows us to reduce the complexity to O(pk).

Intuitively, truncating T to k modes using the SVD removes some of the high spatial frequencies in the transfer operator. A lower-order mode of T might model reflections from an entire wall, while higher-order modes might model details added to the acoustic response due to local variations in the wall's geometry (such as a painting on the wall). In effect, the parameter k can be used to control the level-of-detail of the acoustic response.

As we shall discuss in Section 7, there are use cases where we wish to precompute a one-bounce transfer operator and apply it repeatedly to obtain higher-order reflections. In such cases, the cost of computing transfer matrices that represent additional bounces can be further reduced to O(k^2) by precomputing appropriate matrices as follows. The direct IRs at each surface sample are stored in the vector l_0. Suppose we have a rank-k approximation of T, given by T = U S Vᵀ, where U is a p × k matrix, S is a k × k diagonal matrix and Vᵀ is a k × p matrix. Then the first-order IR at each surface sample is given by:

T l_0 = U S Vᵀ l_0 = U b,

where b = S Vᵀ l_0 is l_0 projected into the span of the first k right singular vectors of T. The second-order response is:

T T l_0 = U (S Vᵀ U) S Vᵀ l_0 = U D b,

where D = S Vᵀ U is essentially the one-bounce operator in the k-dimensional subspace spanned by the singular vectors corresponding to the top k singular values of T. The cost of multiplying b by D is simply O(k^2). Notice that the third-order response can be written as U D^2 b, and so on. This allows us to compute higher-order responses using a k × k matrix instead of a p × p matrix.
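This derivation is easy to verify numerically. The sketch below (illustrative, numpy-based) checks that the k × k operator D = S Vᵀ U reproduces the responses of the rank-k truncated operator at every order:

```python
import numpy as np

rng = np.random.default_rng(2)
p, k = 64, 8
T = rng.standard_normal((p, p)) * 0.1

# Rank-k truncated SVD: T ~= U S V^T.
U_full, s, Vt_full = np.linalg.svd(T)
U = U_full[:, :k]
S = np.diag(s[:k])
Vt = Vt_full[:k, :]
Tk = U @ S @ Vt                      # the rank-k approximation of T

l0 = rng.standard_normal(p)
b = S @ Vt @ l0                      # l0 projected into the k-dim subspace
D = S @ Vt @ U                       # k x k one-bounce operator (O(k^2) apply)

# n-th order response via the k x k operator: U D^{n-1} b.
assert np.allclose(U @ b, Tk @ l0)                        # first order
assert np.allclose(U @ (D @ b), Tk @ (Tk @ l0))           # second order
assert np.allclose(U @ (D @ D @ b), Tk @ Tk @ Tk @ l0)    # third order
```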

5 IMPLEMENTATION

Our implementation is CPU-based, and uses Microsoft DirectX 9 for visualization and the Intel Math Kernel Library (MKL) for the matrix operations.

5.1 Approximations

Our algorithm allows for the following user-controlled approximations:

Surface Samples We parameterize the scene surface by mapping the primitives to the unit square (a uv texture mapping) using Least Squares Conformal Mapping (LSCM) [33]. The user specifies the texture dimensions; each texel of the resulting texture is mapped to a single surface sample using an inverse mapping process. The number of texels mapped to a given primitive is weighted by the area of the primitive, to ensure a roughly even distribution of samples. We chose the LSCM algorithm for this purpose since our modeling tools (Blender¹) have an implementation built-in; it can be replaced with any other technique for sampling the surfaces, as long as the number of samples generated on a primitive is proportional to its area.

Frequency Samples We allow the user to vary the number of Fourier coefficients used to represent the IRs. We use 1K Fourier coefficients in all our experiments, since it has been shown [22] that this provides an acceptable compromise between performance and quality.

Transfer Operator Modes The SVD approximation error of the transfer operator is measured using the Frobenius norm. Figure 3 plots the error against the number of modes retained in the transfer operator. The figure suggests that we could potentially use a very small number of modes to compute IRs with diffuse reflections at runtime. Figure 4 plots the SVD approximation error (at 50 modes) with increasing orders of reflection. The figure clearly shows that the error introduced by the SVD approximation for higher orders of reflection quickly converges. In other words, the IR energy due to higher-order reflections can be modeled using very few SVD modes of the transfer operator. This matches the intuition that higher-order reflections have low spatial frequency. As a result, when computing very high orders of reflection (say 50), we can use very few SVD modes beyond the first 2-3 orders while still capturing the higher-order energy (which must be captured to model the late reverberation tail of the IR) accurately.
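By the Eckart–Young theorem, the relative Frobenius error curves of the kind plotted in Figure 3 can be read directly off the singular-value spectrum, without forming each rank-k approximation explicitly. A sketch with a synthetic decaying spectrum (the decay rate is illustrative; real transfer operators also decay, which is what makes truncation effective):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 100
# Synthetic operator with a geometrically decaying singular-value spectrum.
s_true = 0.9 ** np.arange(p)
Q1, _ = np.linalg.qr(rng.standard_normal((p, p)))
Q2, _ = np.linalg.qr(rng.standard_normal((p, p)))
T = Q1 @ np.diag(s_true) @ Q2.T

U, s, Vt = np.linalg.svd(T)
total = np.sum(s ** 2)
# Relative Frobenius error of the best rank-k approximation:
# sqrt(sum_{i>k} sigma_i^2 / sum_i sigma_i^2); tail[k-1] is the rank-k error.
tail = np.sqrt(np.maximum(total - np.cumsum(s ** 2), 0.0) / total)

# Cross-check against an explicitly formed rank-k approximation.
k = 10
Tk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
direct = np.linalg.norm(T - Tk) / np.linalg.norm(T)   # Frobenius by default
assert abs(direct - tail[k - 1]) < 1e-8
```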

1. http://www.blender.org


Fig. 3. SVD approximation error for transfer operators. For each benchmark scene, the plots show the relative Frobenius norm error of rank-k approximations of T (for one value of ω) for all possible values of k. From left to right: (a) Room (252 samples), (b) Hall (177 samples), (c) Sigyn (1024 samples).

Fig. 4. SVD approximation error for each higher order of reflection, for the Sigyn scene (see Figure 5).

5.2 Audio Processing

The algorithm presented in Section 4 computes a frequency-domain energy IR with 1K Fourier coefficients. The pressure IR is computed from the energy IR [34] and upsampled to encode the desired propagation delay in the IR [22].

Moving Sources and Listeners: In typical virtual environment applications, the source and listener tend to move, and the audio is streamed from the source in chunks of audio samples (called audio frames). The frame size is determined by the allowed latency for the application. We choose audio frames of 4800 samples at a sampling rate of 48 KHz, leading to a 100 ms latency in our audio output. For a static source and listener, computing the final audio is trivial and amounts to convolving each audio frame with the IR to compute output audio frames. For moving sources and listeners, IRs evolve over time, which could lead to discontinuities in the final audio when using different IRs for two adjacent audio frames. In order to minimize such discontinuity artifacts, windowing [35] is applied at the source frame and the listener frame when the source and listener are moving, respectively. We use a windowing method similar to Siltanen et al. [22].
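A simplified stand-in for such windowing is to convolve each frame with both the previous and the current IR and crossfade linearly between the two results. This is our assumed simplification for illustration, not the exact method of Siltanen et al.:

```python
import numpy as np

def crossfade_frame(frame, ir_prev, ir_new):
    """Convolve one audio frame with both the previous and the new IR
    and linearly crossfade between the results, suppressing clicks when
    the IR changes between frames (assumed, simplified windowing)."""
    n = len(frame)
    out_prev = np.convolve(frame, ir_prev)[:n]
    out_new = np.convolve(frame, ir_new)[:n]
    w = np.linspace(0.0, 1.0, n)      # fade-in weight for the new IR
    return (1.0 - w) * out_prev + w * out_new

frame = np.random.default_rng(4).standard_normal(4800)  # 100 ms at 48 kHz
ir = np.zeros(256)
ir[0] = 1.0
# With identical IRs the crossfade is transparent.
assert np.allclose(crossfade_frame(frame, ir, ir), np.convolve(frame, ir)[:4800])
```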

Note that the audio used in the accompanying video is generated by convolving the dry input audio with the listener IR. Ideally, one would apply the listener's Head-Related Transfer Function (HRTF) in order to model the shape of the listener's head and inter-reflections due to the head and shoulders. However, this step has been skipped in the video for simplicity, especially since HRTFs are not related to our main contributions.

6 EXPERIMENTS

We now present some experimental results. All tests were performed on an Intel Xeon X5560 workstation with 4 cores (each operating at 2.80 GHz) and 4 GB of RAM running Windows Vista. We report timings for all 4 cores since MKL automatically parallelizes our matrix operations over all cores of the test machine. We have benchmarked our implementation on three scenes whose complexity is typical of scenes encountered in acoustics applications. Figure 5 shows these scenes along with some details.

For comparison, we chose the state-of-the-art frequency-domain acoustic radiance transfer algorithm [22]. To the best of our knowledge, the only other algorithms that model diffuse sound reflections are time-domain radiosity and path tracing. Since time-domain radiosity requires a prohibitive amount of memory, we chose not to compare against it. Path tracing, while well-suited for dynamic scenes, requires even static scenes to be traversed millions of times per frame for higher-order reflections. Part of the reduction in complexity (and the memory usage) for the frequency-domain approach is due to the restriction to a relatively small number of Fourier coefficients.

Fig. 5. Benchmark scenes. From left to right: (a) Room (252 samples), (b) Hall (177 samples), (c) Sigyn (1024 samples).

Scene   Surface   Precomputation Time   Modes   Runtime
        Samples   T         SVD                 Initial Scatter   Transfer Operator   Final Gather
Room    252       14.2 s    94.5 s       10     43.2 ms           24.0 ms             33.7 ms
                                         25     45.8 ms           43.8 ms             35.0 ms
                                         50     42.4 ms           84.3 ms             36.4 ms
Hall    177       13.1 s    93.1 s       10     37.8 ms           26.8 ms             31.5 ms
                                         25     37.1 ms           45.5 ms             30.2 ms
                                         50     36.6 ms           79.7 ms             31.2 ms
Sigyn   1024      6.31 min  50.9 min     50     164.1 ms          218.1 ms            109.9 ms

TABLE 1
Performance characteristics of our algorithm. For each scene, we present the precomputation time required by our algorithm for 1K Fourier coefficients. Under precomputation time, we show the time required to compute the transfer operator, T, and the time required to compute its SVD approximation. We also compare running times for varying numbers of modes from the SVD. The table shows the time spent at runtime in initial shooting from the source, applying the transfer operator, and gathering the final IR at the listener position.

Frequency-domain acoustic radiance transfer (ART) [22] computes the transfer operator (without any SVD approximation) and iteratively applies it to the direct acoustic response until the solution converges. In order to perform a fair comparison, we restrict ART to computing as many orders of reflection as our algorithm.

Table 1 summarizes the performance of the precomputation and run-time stages of our algorithm. The run-time complexity depends on the number of modes retained during the SVD approximation; the table clearly highlights this dependency. As shown by the table, our algorithm very efficiently updates IRs when the source position changes at run-time. Note that we precompute a one-bounce transfer operator, and use the approach described in Section 4.2 to compute higher-order reflections at run-time. Depending on the application, we could also precompute a multi-bounce operator and apply it directly at run-time, further improving our performance. Our implementation uses the more flexible approach of varying the orders of reflection at runtime. As a result, it is possible to further improve the performance of our implementation.

Table 2 shows the benefit of the SVD in compressing the transfer operator. The table shows that without using the SVD, the transfer operators may be too large to be used on commodity hardware. For the uncompressed ("reference") case, the transfer operator size is p × p for each Fourier coefficient (1K in our case). For the compressed ("50 Modes") case, the transfer operator size is p × k for U, k × k for D and k × p for V^T, where k is the number of modes retained. In the table, k = 50, and p is the number of surface samples in the scene.
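The storage comparison above amounts to counting matrix entries: a rank-k truncated SVD stores p·k + k·k + k·p entries per Fourier coefficient instead of p·p. The sketch below uses the paper's setup (complex entries, 8 bytes each, 1K coefficients); the function name is illustrative, and the absolute byte totals are only indicative since the MB convention used in Table 2 is not spelled out.

```python
def operator_bytes(p, k=None, coeffs=1024, bytes_per_entry=8):
    """Storage for the transfer operator across all Fourier coefficients."""
    if k is None:                        # uncompressed p x p operator
        entries = p * p
    else:                                # U (p x k), D (k x k), V^T (k x p)
        entries = p * k + k * k + k * p
    return entries * bytes_per_entry * coeffs

p = 1024  # surface samples in the Sigyn scene
full = operator_bytes(p)
compressed = operator_bytes(p, k=50)
print(full / compressed)   # compression ratio, roughly 10x for this scene
```

The roughly 10x ratio for Sigyn matches the reference-to-50-modes ratio reported in Table 2.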

TABLE 2
Memory requirements of the transfer operators computed by our algorithm with (column 4) and without (column 3) SVD compression. Note that since the entries of each matrix are complex numbers, each entry requires 8 bytes of storage. All sizes in the table are in MB.

Scene  Samples  Reference  50 Modes
Hall     177      250.6     161.6
Room     252      508.0     221.6
Sigyn   1024     8388.6     839.2

Table 3 compares the run-time performance of our method and ART. The table shows the time required to update IRs at the listener when the source is moved, and clearly shows the advantage of our approach. Since our precomputed transfer operator is decoupled from the source position, moving the source does not require recomputing the transfer operator, allowing the source position to be updated much faster than would be possible with ART.

Table 3 can also be used to derive the performance of our algorithm for the case when a multi-bounce transfer operator is precomputed. For example, suppose we precompute a transfer operator with 10 orders of reflection for the Sigyn scene. Then the run-time cost would be the same as that of the one-bounce operator, i.e., 468.5 ms. The difference, i.e., (512.8 ms − 468.5 ms) × 1024 = 45.4 s, would be the additional time spent during preprocessing to derive the multi-bounce operator from the one-bounce operator (the factor of 1024 arises because the timings in Table 3 are for matrix-vector multiplications, whereas precomputing the multi-bounce operator from the one-bounce operator requires matrix-matrix multiplications).
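The arithmetic above can be checked directly: the per-application cost difference between 10 orders and 1 order, scaled by the p = 1024 matrix-vector products that make up one matrix-matrix product for the Sigyn scene.

```python
# Timings from Table 3 for the Sigyn scene (direct-to-indirect, 50 modes).
one_bounce_ms = 468.5
ten_orders_ms = 512.8

# Factor of 1024: a matrix-matrix product is 1024 matrix-vector products here.
extra_precompute_s = (ten_orders_ms - one_bounce_ms) * 1024 / 1000.0
print(round(extra_precompute_s, 1))  # -> 45.4
```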

Figure 6 compares the output of our algorithm and ART. The figure shows squared IRs, smoothed using a moving-average low-pass filter, for different numbers of modes. As the figure shows, reducing the number of modes significantly (down to 50 modes) has very little effect; however, if far fewer modes are used, significant errors appear in the energy decays, as expected. Coupled with the memory savings demonstrated in Table 2 and the performance advantage demonstrated in Table 3, we see that using the SVD allows us to significantly reduce memory requirements and increase performance without significant degradation of the computed IRs. Along with the plots, Figure 6 shows RT60 (reverberation time) values estimated from the decay curves. The data demonstrates that SVD approximation up to 50 modes does not lead to a significant change in reverberation time. Of course, the best way to demonstrate the benefit of our approach is by comparing audio clips; for this we refer the reader to the accompanying video.
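A standard way to estimate RT60 from a squared IR, of the kind plotted in Figure 6, is Schroeder backward integration followed by a linear fit to part of the dB decay and extrapolation to −60 dB. This is a generic sketch of that textbook procedure, not the paper's exact estimator; the fit region (−5 to −25 dB, a T20-style measurement) is an assumption.

```python
import numpy as np

def rt60_from_squared_ir(sq_ir, fs):
    """Estimate RT60 (seconds) from a squared impulse response."""
    edc = np.cumsum(sq_ir[::-1])[::-1]            # Schroeder backward integration
    edc_db = 10.0 * np.log10(edc / edc[0])        # energy decay curve in dB
    t = np.arange(len(sq_ir)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)   # fit region of the decay
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope                          # time to fall 60 dB

# Synthetic exponential decay with a known RT60 of 0.5 s.
fs = 8000
t = np.arange(fs) / fs
sq_ir = 10.0 ** (-120.0 * t / 10.0)               # energy falls 60 dB at t = 0.5 s
print(round(rt60_from_squared_ir(sq_ir, fs), 2))  # -> 0.5
```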

7 CONCLUSION

We have described a precomputed direct-to-indirect transfer approach to solving the acoustic rendering equation in the frequency domain for diffuse reflections. We have demonstrated that our approach can efficiently simulate diffuse reflections for a moving source and listener in static scenes. In comparison with existing methods, our approach offers a significant performance advantage when handling moving sources.

7.1 Analysis

Our algorithm is designed to model purely diffuse reflections of sound. However, it is also important to model specular reflections and diffraction of sound in order to render plausible audio. Our algorithm is intended to be used in tandem with existing algorithms for modeling specular reflections and diffraction.

Another important design decision is how we compute the transfer operator. There are several possible use-cases:

1) Precompute a multi-bounce transfer operator, using multi-bounce path tracing from each surface sample. This is the most accurate option, but requires high precomputation time. The run-time cost is equal to that of applying a one-bounce operator.

2) Precompute a multi-bounce transfer operator by multiplying a one-bounce operator with itself repeatedly (before computing an SVD).

3) Precompute the SVD approximation of the one-bounce transfer operator. Use the method described in Section 4.2 to quickly precompute an approximate multi-bounce operator.


TABLE 3
Comparison of our approach with ART. We compare the time required by our algorithm to update the source position and recompute the IR at the listener position after a varying number of diffuse reflections. Since ART does not decouple the transfer operator from the source position, it needs to perform a costly recomputation of the transfer operator each time the source moves. On the other hand, our algorithm quickly updates the direct IR at all surface samples, then applies the precomputed transfer operator. This allows our approach to handle moving sources far more efficiently than ART.

Scene  Orders  Radiance Transfer  Direct-to-Indirect Transfer (50 modes)
Room      2        11.7 s              131.8 ms
Room      5        11.8 s              154.4 ms
Room     10        12.0 s              179.3 ms
Hall      2        10.6 s              116.5 ms
Hall      5        10.7 s              147.3 ms
Hall     10        10.9 s              170.5 ms
Sigyn     2       185.3 s              468.5 ms
Sigyn     5       186.7 s              491.2 ms
Sigyn    10       188.7 s              512.8 ms

Fig. 6. Comparison of diffuse IRs (30 orders of reflection, absorption coefficient 0.2) computed by our system with and without SVD compression, for some of our benchmark scenes. The plots show squared IRs, smoothed using a moving-average low-pass filter.

4) Precompute the SVD approximation of the one-bounce transfer operator. At run-time, the orders of reflection can be easily adjusted, perhaps based on compute budget or sound quality.
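The run-time flexibility in use-case (4) rests on a property of the truncated SVD: with T ≈ U D Vᵀ (p × k, k × k, k × p), powers of T never require p × p products. Writing C = Vᵀ U D (a k × k matrix), Tⁿ ≈ U D Cⁿ⁻¹ Vᵀ, so raising the order of reflection costs only extra k × k multiplications. The sketch below is a hedged illustration of this identity with placeholder matrices, not the paper's code.

```python
import numpy as np

def apply_orders(U, D, Vt, x, orders):
    """Return (T + T^2 + ... + T^orders) @ x using only rank-k products."""
    C = Vt @ U @ D                    # k x k core matrix
    core = D.copy()                   # holds D @ C^(n-1), starting at n = 1
    y = Vt @ x                        # project the direct response to rank k
    acc = np.zeros(U.shape[1], dtype=complex)
    for _ in range(orders):
        acc += core @ y
        core = core @ C               # advance to the next power of T
    return U @ acc

# Verify against explicit powers of T on a small random example.
rng = np.random.default_rng(1)
p, k = 6, 3
U = rng.standard_normal((p, k)) + 0j
D = np.diag(rng.standard_normal(k)) + 0j
Vt = rng.standard_normal((k, p)) + 0j
T = U @ D @ Vt
x = rng.standard_normal(p) + 0j

assert np.allclose(apply_orders(U, D, Vt, x, 3),
                   (T + T @ T + T @ T @ T) @ x)
```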

Use-case (3) allows sound designers to rapidly adjust the orders of reflection baked into the precomputed transfer operator. For example, one could first compute a multi-bounce operator with 3 orders of reflection. If the resulting audio at run-time sounds unsatisfactory, the precomputed data can quickly be updated with additional orders of reflection without any further ray tracing or SVD computation.

In the use-cases other than (3), the SVD allows IR accuracy to be traded off for performance, providing an adjustable level of detail for sound rendering.

7.2 Limitations

Since ours is a precomputation-based algorithm, it cannot be used for scenes with dynamic objects. In such situations, ray-tracing-based algorithms are the best available choice. However, in many applications, including games and virtual environments, scenes are mostly static, with relatively few moving parts; hence our algorithm can be used to model reflections within the static portions of the scene.


Our algorithm performs matrix-vector multiplications on large matrices at runtime. The matrix size depends on the size and complexity of the scene. Therefore, our method is useful mainly for scenes of low to medium complexity.

Another limitation arises from the approach we use [22] to reconstruct the energy response from the subsampled Fourier coefficients. The replication of Fourier coefficients leads to comb-filter artifacts in the final audio, and is an inherent limitation of the reconstruction approach. An alternative would be to treat the Fourier coefficients as defining the envelope of a noise process [36]. Both approaches are prone to errors; further study is needed to determine the suitability of one over the other based on empirical and perceptual error.

Finally, the transfer matrix is computed using the acoustic rendering equation [21], which has its own limitations: it is an energy-based approach (and hence cannot easily model interference) and is based on geometric approximations to the acoustic wave equation (and hence cannot easily model low-frequency wave effects such as diffraction).

7.3 Future Work

It is crucial to develop an approach to reduce the storage requirements of IRs in order to make it feasible to implement our algorithm on hardware such as video game consoles. Moreover, the Fourier-domain representation and the replication-based inverse transform we use to reconstruct IRs [22] lead to errors in the high-frequency component of the final audio. Specifically, the high frequencies do not decay as quickly as they should relative to the low frequencies. It would be very useful to determine alternative IR representations that are both memory-efficient and permit a more accurate reconstruction. The reconstruction process can also lead to some noise and other artifacts. While these can be cleaned up in a post-processing step, preliminary research on using alternative IR representations such as wavelets has given us results without these artifacts. However, the use of wavelets to represent IRs requires further research.

Our direct-to-indirect transfer approach is intended to be used in tandem with existing algorithms (such as ray tracing, the image source method, or numerical methods) for modeling specular reflections and edge diffraction. We are currently researching ways of modeling paths involving both diffuse and specular reflections by modifying our initial shooting and final gathering steps to include specular reflections. We also hope to model paths including diffraction using a similar approach, since we believe that a tight integration between existing algorithms for specular reflections and edge diffraction and our direct-to-indirect transfer approach will enable plausible sound propagation for interactive applications with moving sources and listeners at near-interactive rates.

In most complex scenes, each surface sample may influence only a few other samples, due to occlusions. We could subdivide the scene into cells separated by portals, compute transfer operators for each cell independently, and model the interchange of sound energy at the portal boundaries. Cells and portals have previously been used to model late reverberation [36], and would be a promising research direction for acoustic radiance transfer.

The acoustic response is typically a smooth function over the surfaces of the scene. Therefore, it would be beneficial to exploit spatial coherence by projecting the transfer operator onto basis functions defined over the surfaces of the scene.

Some radiance transfer algorithms in graphics [4], [5] can model glossy reflections by using a directional basis such as spherical harmonics (SH) at each surface sample. Such a strategy could also be applied to model glossy reflections and diffraction of sound; however, the memory requirements of such an approach might be prohibitive.

Note that we provide a framework which allows the user to decide how many orders of reflection to simulate; how to choose this parameter appropriately is an interesting avenue for future work.

REFERENCES

[1] B.-I. Dalenback, M. Kleiner, and P. Svensson, "A macroscopic view of diffuse reflection," J. Audio Eng. Soc., vol. 42, pp. 793–807, 1994.

[2] B.-I. Dalenback, "The Importance of Diffuse Reflection in Computerized Room Acoustic Prediction and Auralization," in Proceedings of the Institute of Acoustics (IOA), vol. 17, no. 1, 1995, pp. 27–33.


[3] R. R. Torres, M. Kleiner, and B.-I. Dalenback, "Audibility of 'diffusion' in room acoustics auralization: An initial investigation," Acta Acustica united with Acustica, vol. 86, pp. 919–927, November/December 2000.

[4] P.-P. Sloan, J. Kautz, and J. Snyder, "Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments," in SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques. New York, NY, USA: ACM, 2002, pp. 527–536.

[5] M. Hasan, F. Pellacini, and K. Bala, "Direct-to-indirect transfer for cinematic relighting," in SIGGRAPH '06: ACM SIGGRAPH 2006 Papers. New York, NY, USA: ACM, 2006, pp. 1089–1097.

[6] J. Lehtinen, M. Zwicker, E. Turquin, J. Kontkanen, F. Durand, F. X. Sillion, and T. Aila, "A meshless hierarchical representation for light transport," in SIGGRAPH '08: ACM SIGGRAPH 2008 papers. New York, NY, USA: ACM, 2008, pp. 1–9.

[7] P. Svensson and R. Kristiansen, "Computational modelling and simulation of acoustic spaces," in 22nd International Conference: Virtual, Synthetic, and Entertainment Audio, June 2002.

[8] R. Ciskowski and C. Brebbia, Boundary Element Methods in Acoustics. Computational Mechanics Publications and Elsevier Applied Science, 1991.

[9] N. Raghuvanshi, N. Galoppo, and M. C. Lin, "Accelerated wave-based acoustics simulation," in SPM '08: Proceedings of the 2008 ACM symposium on Solid and physical modeling. New York, NY, USA: ACM, 2008, pp. 91–102.

[10] F. Ihlenburg, Finite Element Analysis of Acoustic Scattering. Springer Verlag AG, 1998.

[11] N. Raghuvanshi, J. Snyder, R. Mehra, M. Lin, and N. Govindaraju, "Precomputed wave simulation for real-time sound propagation of dynamic sources in complex scenes," in Proc. SIGGRAPH 2010 (to appear), 2010.

[12] M. Vorlander, "Simulation of the transient and steady-state sound propagation in rooms using a new combined ray-tracing/image-source algorithm," The Journal of the Acoustical Society of America, vol. 86, no. 1, pp. 172–178, 1989. [Online]. Available: http://link.aip.org/link/?JAS/86/172/1

[13] B. Kapralos, M. Jenkin, and E. Milios, "Sonel mapping: acoustic modeling utilizing an acoustic version of photon mapping," in IEEE International Workshop on Haptics Audio Visual Environments and their Applications (HAVE 2004), 2004, pp. 2–3.

[14] A. Chandak, C. Lauterbach, M. Taylor, Z. Ren, and D. Manocha, "AD-Frustum: Adaptive frustum tracing for interactive sound propagation," IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1707–1722, 2008.

[15] M. T. Taylor, A. Chandak, L. Antani, and D. Manocha, "RESound: interactive sound rendering for dynamic virtual environments," in MM '09: Proceedings of the seventeenth ACM international conference on Multimedia. New York, NY, USA: ACM, 2009, pp. 271–280.

[16] T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, and J. West, "A beam tracing approach to acoustic modeling for interactive virtual environments," in Proc. of ACM SIGGRAPH, 1998, pp. 21–32.

[17] A. Chandak, L. Antani, M. Taylor, and D. Manocha, "FastV: From-point visibility culling on complex models," Computer Graphics Forum, vol. 28, pp. 1237–1246, 2009. [Online]. Available: http://www.ingentaconnect.com/content/bpl/cgf/2009/00000028/00000004/art00020

[18] N. Tsingos and J.-D. Gascuel, "A general model for the simulation of room acoustics based on hierarchical radiosity," in ACM SIGGRAPH 97 Visual Proceedings: The art and interdisciplinary programs of SIGGRAPH '97. New York, NY, USA: ACM, 1997, pp. 149–. [Online]. Available: http://doi.acm.org/10.1145/259081.259236

[19] A. Le Bot and A. Bocquillet, "Comparison of an integral equation on energy and the ray-tracing technique in room acoustics," The Journal of the Acoustical Society of America, vol. 108, no. 4, pp. 1732–1740, 2000. [Online]. Available: http://link.aip.org/link/?JAS/108/1732/1

[20] E.-M. Nosal, M. Hodgson, and I. Ashdown, "Improved algorithms and methods for room sound-field prediction by acoustical radiosity in arbitrary polyhedral rooms," The Journal of the Acoustical Society of America, vol. 116, no. 2, pp. 970–980, 2004. [Online]. Available: http://link.aip.org/link/?JAS/116/970/1

[21] S. Siltanen, T. Lokki, S. Kiminki, and L. Savioja, "The room acoustic rendering equation," The Journal of the Acoustical Society of America, vol. 122, no. 3, pp. 1624–1635, September 2007.

[22] S. Siltanen, T. Lokki, and L. Savioja, "Frequency domain acoustic radiance transfer for real-time auralization," Acta Acustica united with Acustica, vol. 95, pp. 106–117, 2009. [Online]. Available: http://www.ingentaconnect.com/content/dav/aaua/2009/00000095/00000001/art00010

[23] U. P. Svensson, R. I. Fred, and J. Vanderkooy, "An analytic secondary source model of edge diffraction impulse responses," Acoustical Society of America Journal, vol. 106, pp. 2331–2344, Nov. 1999.

[24] M. Taylor, A. Chandak, Z. Ren, C. Lauterbach, and D. Manocha, "Fast edge-diffraction for sound propagation in complex virtual environments," in EAA Auralization Symposium, 2009.

[25] D. Schroder and A. Pohl, "Real-time hybrid simulation method including edge diffraction," in EAA Auralization Symposium, 2009.

[26] C. M. Goral, K. E. Torrance, D. P. Greenberg, and B. Battaile, "Modeling the interaction of light between diffuse surfaces," SIGGRAPH Comput. Graph., vol. 18, no. 3, pp. 213–222, 1984.

[27] Y.-T. Tsai and Z.-C. Shih, "All-frequency precomputed radiance transfer using spherical radial basis functions and clustered tensor approximation," ACM Trans. Graph., vol. 25, no. 3, pp. 967–976, 2006.

[28] A. W. Kristensen, T. Akenine-Moller, and H. W. Jensen, "Precomputed local radiance transfer for real-time lighting design," in SIGGRAPH '05: ACM SIGGRAPH 2005 Papers. New York, NY, USA: ACM, 2005, pp. 1208–1215.

[29] H. Kuttruff, Room Acoustics. Elsevier Science Publishing Ltd., 1991.

[30] T. Funkhouser, N. Tsingos, and J.-M. Jot, "Survey of methods for modeling sound propagation in interactive virtual environment systems," Presence, 2003.

[31] T. I. Laakso, V. Valimaki, M. Karjalainen, and U. K. Laine, "Splitting the unit delay – tools for fractional delay filter design," IEEE Signal Processing Magazine, vol. 13, 1996.

[32] J. T. Kajiya, "The rendering equation," in SIGGRAPH '86: Proceedings of the 13th annual conference on Computer graphics and interactive techniques. New York, NY, USA: ACM, 1986, pp. 143–150.


[33] B. Levy, S. Petitjean, N. Ray, and J. Maillot, "Least squares conformal maps for automatic texture atlas generation," ACM Trans. Graph., vol. 21, no. 3, pp. 362–371, 2002.

[34] H. K. Kuttruff, "Auralization of Impulse Responses Modeled on the Basis of Ray-Tracing Results," Journal of the Audio Engineering Society, vol. 41, no. 11, pp. 876–880, November 1993.

[35] A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing, 2nd ed. Prentice Hall, January 1999.

[36] E. Stavrakis, N. Tsingos, and P. Calamia, "Topological sound propagation with reverberation graphs," Acta Acustica united with Acustica, 2008. [Online]. Available: http://www-sop.inria.fr/reves/Basilic/2008/STC08

Lakulish Antani received Bachelor's and Master's degrees in Computer Science from the Indian Institute of Technology (IIT) Bombay in 2008. He is currently a Ph.D. student in the Department of Computer Science at the University of North Carolina at Chapel Hill. He has previously been a research intern with INRIA Sophia-Antipolis and Disney Interactive Studios. His research interests include real-time rendering, ray tracing and sound rendering for interactive applications.

Anish Chandak received the Bachelor's degree in Computer Science and Engineering (2006) from the Indian Institute of Technology (IIT) Bombay, India and the MS degree in Computer Science (2010) from the University of North Carolina at Chapel Hill. He is a Ph.D. student in the Computer Science Department at the University of North Carolina at Chapel Hill. He has worked as a research intern at Microsoft Research and Microsoft Technology Incubation. His research interests include acoustics (sound synthesis, sound propagation, and 3D audio rendering) and computer graphics (ray tracing, visibility, and GPGPU).

Micah Taylor received a BS in computer science from Rose-Hulman Institute of Technology in 2004. After working in industry for several years, he returned to academia to seek his Ph.D. He is currently a research associate at the University of North Carolina at Chapel Hill working in the GAMMA research group. His research interests are interactive sound propagation and real-time ray tracing.

Dinesh Manocha is currently the Phi Delta Theta/Mason Distinguished Professor of Computer Science at the University of North Carolina at Chapel Hill. He received his Ph.D. in Computer Science from the University of California at Berkeley in 1992. He has received the Junior Faculty Award, an Alfred P. Sloan Fellowship, an NSF Career Award, an Office of Naval Research Young Investigator Award, the Honda Research Initiation Award, and the Hettleman Prize for Scholarly Achievement. Along with his students, Manocha has also received 12 best paper and panel awards at leading conferences on graphics, geometric modeling, visualization, multimedia and high-performance computing. He is an ACM and AAAS Fellow. Manocha has published more than 300 papers in leading conferences and journals on computer graphics, geometric computing, robotics, and scientific computing, and has supervised 18 Ph.D. dissertations.