Mixed Fantasy: An Integrated System for Delivering MR Experiences

Charles E. Hughes
School of Computer Science & Media Convergence Laboratory
University of Central Florida
ceh@cs.ucf.edu

Christopher B. Stapleton, Darin E. Hughes, Scott Malo, Matthew O'Connor
Media Convergence Laboratory
University of Central Florida
{chris,darin,scott,matt}@mcl.ucf.edu

Paulius Micikevicius
Computer Science Department
Armstrong Atlantic State Univ.
paulius@drake.armstrong.edu

1. Introduction

Mixed Reality (MR), the landscape between the real, the virtual and the imagined, creates challenges in science, technology, systems integration, user interface design, experimental design and the application of artistic convention.

The primary problem in mixing realities (and not just augmenting reality with overlays) is that most media techniques and systems are designed to provide totally synthesized, disembodied experiences. In contrast, MR demands direct engagement and seamless interfacing with the user's sensory perceptions of the physical. To achieve this, we have developed an integrated system for MR that produces synthesized reality using specialized content engines (story, graphics, audio and SFX) and delivers content using hybrid multi-sensory systems that do not hinder the human perception of the real world.

Figure 1. Blend of virtual explosion on real window

The primary technical contributions we will discuss include a unique hybrid application of real-time compositing involving techniques of chroma-key mattes, image recognition, occlusion models, rendering and tracking systems. The primary integration contribution is a component-based approach to scenario delivery, the centerpiece of which is a story engine that integrates multimodal components provided by compelling 3-D audio, engaging special effects and the realistic visuals required of a multimodal MR entertainment experience. The primary user interface contributions are a set of novel interaction devices, including plush toys and a 3-DOF tabletop used for collaborative haptic navigation. Our work on human performance evaluation in multi-sensory MR environments is just now starting, with plans for extensive testing by cognitive psychologists, usability engineers and team performance researchers using data available from our delivery system. Our contributions to artistic convention include techniques derived from a convergence of media and improvisational theater.

2. Rendering and Registration

While most of our work transcends the specific technology used to capture the real scene and deliver the mixed reality, some of our work in rendering and registration is highly focused upon our use of a video see-through head-mounted display (HMD). The specific display/capture device used in our work is the Canon Coastar video see-through HMD [1]. Thus, this section should be read with such a context in mind.

In our MR Graphics Engine, the images displayed in the HMD are rendered in three stages. Registration is performed during the first and the third stages and includes properly sorting the virtual and real objects, as well as detecting when a user selects a real or virtual object.

2.1. Rendering

Rendering is done in three stages. In the first, images are captured by the HMD cameras and processed to correct for lens distortion and scaling. Since the cameras provide only RGB components for each pixel, an alpha channel is added by chroma-keying, as described in Section 2.3. Alpha-blending and multi-pass rendering are used to properly composite the virtual objects with the captured images. For example, a virtual character passing by a portal is properly clipped. Additionally, where possible, the depth of real objects is determined by pre-measurements, visual markings and/or embedded tracking.

In the second stage the virtual objects are rendered and combined with the output of the first stage. This often involves the rendering of invisible objects that serve as occluders; e.g., a model of a building can be aligned with the actual building and rendered using only the alpha channel, thus serving to totally or partially occlude virtual objects that are at greater depths.

In the third stage, the output of the second stage is used to determine which objects the user has selected (if selections have been made). This phase can detect the intersection of either real beams (produced by lasers) or virtual beams (produced by projecting rays from tracked selection devices into the rendered space).
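For illustration, the per-pixel logic behind this compositing could be sketched as follows. This is a minimal approximation and not the MR Graphics Engine's actual code: the function name, the array layout and the representation of invisible occluders as a depth image are assumptions made for the example.

```python
# Illustrative per-pixel compositing in the spirit of the three-stage
# pipeline above (not the MR Graphics Engine's actual implementation).
import numpy as np

def composite(captured_rgb, portal_alpha, virtual_rgb, virtual_mask,
              virtual_depth, occluder_depth):
    """captured_rgb   HxWx3  corrected camera image (stage 1)
       portal_alpha   HxW    1.0 where chroma-keying found a portal/screen
       virtual_rgb    HxWx3  rendered virtual objects (stage 2)
       virtual_mask   HxW    1.0 where a virtual fragment was rendered
       virtual_depth  HxW    depth of those virtual fragments
       occluder_depth HxW    depth of invisible occluder models aligned with
                             real objects (np.inf where none was rendered)"""
    # Known depth of the real scene: occluder models inside the room, and
    # keyed portal pixels treated as infinitely far away so virtual content
    # beyond a window or door shows through.
    real_depth = np.where(portal_alpha > 0.5, np.inf, occluder_depth)
    show = ((virtual_mask > 0.5) & (virtual_depth < real_depth))[..., None]
    return np.where(show, virtual_rgb, captured_rgb)
```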
2.2. Object selection

A number of Mixed Reality scenarios require that users have the ability to select virtual objects. A variation of the selection process is registering a hit on an object (real or virtual) by a virtual projectile emanating from a real device, e.g., a futuristic-looking laser blaster.

A number of systems use tracking devices on the selection mechanisms for this purpose. The limitations of this approach are that it is susceptible to tracking errors and is constrained by the maximum number of tracking elements in the system. As an alternative, a laser emitter can be combined with laser-dot detection in the images captured by the HMD. Laser detection is performed in real time by our system; in fact, for efficiency reasons, it is part of our chroma-keying process. The drawback of this scheme is that it requires that the targeted object be seen by the HMD.

The limitations noted above have led us to implement both schemes (tracking and vision-based). We have found that the story is the primary driver for emphasizing one technique over the other.

Figure 2. Tracked user and weapon

2.3. Retro-reflection chroma-key

As noted above, we often employ chroma-keying techniques to properly sort virtual objects with respect to real-world objects, especially when live-capture video is inserted into portals such as windows or doors in a real scene. One approach is to key off of a monochromatic screen. A blue or green screen has become a standard part of television and film studios. Unfortunately, lighting conditions greatly affect the color values recorded by digital cameras and thus must be tightly controlled. Uniform lighting of the screen is necessary to avoid hot spots. Such control is problematic for MR systems, as they often operate in environments with dynamic lighting. Furthermore, the lighting of a reflective or backlit surface spills light onto physical objects, degrading a precise matte.

Figure 3. Blue chroma key and SFX lights

As an alternative, we use a combination of a unidirectional retro-reflective screen and camera-mounted cold-cathode lights. Since a unidirectional retro-reflective screen reflects light only toward its source, the light emitted from the HMD is reflected back with negligible chromatic distortion or degradation from the surrounding environment. As a result, consistent lighting keys are achieved, dramatically reducing the computational complexity while allowing for dynamic venue and environmental lighting.

Figure 4. HMD rigged with cathode lights
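As a concrete illustration of folding laser-dot detection into the keying pass described above, the per-frame classification might look like the sketch below. The key color, tolerances and the red-dominance test are invented for the example and are not the values used by the actual system.

```python
# Illustrative keying pass that also reports a laser-dot hit, assuming the
# retro-reflective screen returns a consistent blue under the HMD-mounted
# lights and the laser dot appears as a small, red-saturated blob.
import numpy as np

def key_and_detect(frame_rgb, key_color=(0, 0, 255), key_tol=60, laser_min_red=240):
    rgb = frame_rgb.astype(np.int16)            # avoid uint8 overflow below
    # Matte: pixels close to the retro-reflected key color become "portal".
    alpha = np.linalg.norm(rgb - np.array(key_color), axis=-1) < key_tol
    # Laser dot: bright, strongly red-dominant pixels found in the same pass.
    laser = (rgb[..., 0] > laser_min_red) & (rgb[..., 0] > rgb[..., 2] + 50)
    hits = np.argwhere(laser)                   # (row, col) coordinates
    laser_xy = tuple(hits.mean(axis=0).astype(int)[::-1]) if len(hits) else None
    return alpha.astype(np.float32), laser_xy
```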
3. Audio

In designing the MR Sound Engine, we required a system that could provide real-time spatialization of sound and support acoustical simulation in real physical spaces. This required a hybrid approach using techniques from simulation, theme parks, video games and motion pictures, and led to the integration of novel techniques in both the audio engine and the environmental display. Real-time synthesis of sound was not a primary concern.

After studying many alternatives, from APIs to DSP libraries, we settled on EAX 2.0 in combination with OpenAL. We also chose to incorporate the ZoomFX feature from Sensaura to simulate the emitter size of sound sources.

There are three main properties that affect 3-D sound: directionality, emitter size and movement [2]. The first can be divided into omni-directional versus directional sounds, the second into point sources with emitter size zero versus voluminous sources, and the third into stationary versus moving sounds.

The MR Sound Engine combines these attributes to provide real-time spatialization of both moving and stationary sound sources. For stationary 3-D sounds, the source can be placed anywhere along the x, y, and z axes. Moving sounds must have a path and a timeline over which that movement takes place. In our case, the path is typically associated with a visible object, e.g., a person walking, a seagull flying or a fish swimming.

Simulation of emitter size, sound orientation, and rotating sound sources is supported. Emitter size can only be altered for sounds that are spatialized. Sounds that are not spatialized are referred to as ambient sounds and play at a non-directional level in surround-sound speakers.

Standard features such as simultaneous playback of multiple sounds (potentially hundreds, using software buffers), ramping, looping, and sound scaling are supported.
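The sketch below models the three properties just listed (directionality, emitter size, and movement along a path tied to a visible object). It is a schematic stand-in rather than the MR Sound Engine's API: in the real engine the resulting positions are handed to OpenAL/EAX and the emitter size to Sensaura's ZoomFX.

```python
# Schematic model of a spatialized sound source; the names and structure are
# illustrative, not the MR Sound Engine's actual interface.
import math
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SpatialSound:
    name: str
    directional: bool = False                       # omni-directional vs. directional
    emitter_size: float = 0.0                       # 0.0 = point source, >0 = voluminous
    position: Vec3 = (0.0, 0.0, 0.0)                # used when the sound is stationary
    path: Optional[Callable[[float], Vec3]] = None  # time -> position for moving sounds

    def position_at(self, t: float) -> Vec3:
        """Where the source is at time t; moving sounds follow their path."""
        return self.path(t) if self.path else self.position

# Usage: a seagull call that follows a circular flight path above the scene.
gull = SpatialSound(name="seagull", emitter_size=0.3,
                    path=lambda t: (5.0 * math.cos(t), 3.0, 5.0 * math.sin(t)))
print(gull.position_at(1.5))
```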
4. Special Effects

Special effects devices in a mixed reality allow for rich enhancement of the real space in response to actions by either real or virtual characters. When coupled with our story engine, this system creates the next level of mixed reality: one in which the real world knows about and is affected by items in the virtual, and vice versa.

4.1. Effects

We apply traditional special effects, the same that might enrich a theme park attraction, theater show, or museum showcase. The following is a breakdown of some of the types of effects we use.

We lace our experiences with lighting, both to illuminate the area of the experience and as localized effects. Area lighting can signify the start of a scenario, change the mood as it is dimmed, and reveal new parts of the set on demand. Effect lighting allows us to illuminate smaller areas, or simply to act as an interactive element. Heat lamps allow the user to feel the temperature of an explosion, and flickering LED lighting can simulate a fire.

In one of our simulations, users are instructed to shoot out lights. A motion-tracked shot allows us to flicker out the light (over a door or window). In another scenario, users navigate along the dirt paths of an island and through its underwater regions. Their speed of motion directly affects wind in their faces, or at their backs if they are moving in reverse. This effect is achieved simply by turning on rheostat-controlled fans. We thought about blowing mist in their faces when they were underwater, but we quickly realized that this would adversely affect the cameras of the HMD.

We also use compressed air for various special effects. Pneumatic actuators allow us to move pieces of sets, such as flinging a door open to reveal a virtual character or toppling a pile of rocks. Compressed air can be blown in a short burst to simulate an explosion or to scatter water, sawdust, or even smells. Smoke can be used to simulate an explosion or a rocket taking off, or can be combined with lighting to simulate an underwater scene. We use low-cost, commonly available smoke machines to great effect.

Our system allows anything that can be controlled electrically (or mechanically, with pneumatic actuators) to be connected. We have connected fans and vibrating motors to the system, and have even driven shakers with the bass output of our audio system.

The use of all this equipment creates a rich real-world environment that transcends the virtual and assaults the senses with the type of effects popularized by theme park attractions. The difference is that these effects are truly interactive, able to be controlled both by elements in the virtual environment and by the real participant.
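To show how such devices might be driven from the story side, here is a hedged sketch of a cue dispatcher. The cue names, channel assignments and the relay-board stand-in are assumptions made for the illustration, not our actual control hardware or protocol.

```python
# Illustrative dispatcher from story cues to electrically controlled effects.
class RelayBoard:
    """Stand-in for whatever drives the relays, rheostats and valves."""
    def set_channel(self, channel: int, level: float) -> None:
        print(f"channel {channel} -> {level:.2f}")

class EffectsController:
    def __init__(self, board: RelayBoard):
        self.board = board
        # Channel assignments are invented for the sketch.
        self.fans, self.smoke, self.heat_lamp, self.door_piston = 0, 1, 2, 3

    def on_cue(self, cue: str, **args) -> None:
        if cue == "explosion":                     # smoke burst plus heat lamp
            self.board.set_channel(self.smoke, 1.0)
            self.board.set_channel(self.heat_lamp, 1.0)
        elif cue == "speed_changed":               # wind scaled to travel speed
            self.board.set_channel(self.fans, min(1.0, abs(args["speed"]) / 5.0))
        elif cue == "reveal_character":            # pneumatically fling the door open
            self.board.set_channel(self.door_piston, 1.0)

ctrl = EffectsController(RelayBoard())
ctrl.on_cue("speed_changed", speed=2.5)
ctrl.on_cue("explosion")
```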
5. User interfaces

How one intuitively interacts with an MR experience can have a profound effect on how compelling the experience is. Using a gun device to disable evil alien creatures is reasonable, but using the same device to control a dolphin's actions is obscene and inappropriate, especially for children.

Inspired by the work of MIT's Tangible Media Group, we dissected a plush doll (a cuddly dolphin), placed wireless mouse parts and a 6-DOF sensor inside it (interchangeably using a Polhemus magnetic or an Intersense acoustical/inertial tracker), and stitched it back up. This allows players (children preferred) to aim the dolphin doll at a virtual dolphin and control the selected dolphin's actions. In our case, you press the right (left) flipper down to get the dolphin to move right (left); you press the doll's nose to cause your dolphin to flip a ball off its nose (provided you have already guided it to a ball); and you press its rear fin to get your virtual dolphin to send sonar signals in the direction of others. That latter action (sending sonar) can be used to scare off sharks or to communicate with other dolphins. The communication aspect might be used in informal science education, as well as in pure entertainment.

Recently we created a novel MR environment that performs like a CAVE but looks like a beach dressing cabana, greatly reducing the square footage, cost and infrastructure of projection displays. We call this the Mixed Simulation Demo Dome (MS DD); it consists of a unidirectional retro-reflective curtain that surrounds you when you enter this very portable setting. Inside the dome is a round table with two chairs and two HMDs. Users sit on each side of the table, which is itself covered with retro-reflective material.

Figure 5. MS DD concept drawing

When the users start the MR experience, each wearing an HMD, they see a 3-D map embedded in the table. The map is of an island with an underwater coral reef surrounding it. An avatar, representing both users, is on the island. The users have a God's-eye view of their position and movement, and must cooperate to move their character around the island and through the coral reef from opposite points of view. Tilting the 3-DOF tabletop left or right changes the avatar's orientation; tilting it forward or backward causes the avatar to move around the world, essentially in the direction of its current orientation. An underlying ant-based behavior model for the avatar prevents it from going up sharp slopes or through objects, such as coral reefs. (A simplified sketch of this mapping appears at the end of this section.)

In providing both a God's-eye point of view on the table and an immersive first-person perspective around the users, the level of interactive play dramatically increases, yet remains intuitive due to the tangible haptic feedback. The desired effect is achieved by creating a dual chroma mask on the retro-reflective material, produced by opposing green and blue lights on each of the users' HMDs. This first-person scene is a shared one, although the users are typically looking in different directions. Since your fellow user sees you as part of the MR experience, your pointing and gazing are obvious. To encourage such collaboration, we have added many enticing virtual objects: trees, seagulls, a shark, coral fish, butterflies, a sting ray and a giant tortoise.

We have also added a little humor by changing the head of your fellow user to a fish head whenever you two are underwater. Since the fish head is made up of one-sided polygons, you don't realize that you look as silly as your colleague.

Figure 6. MS Isle augmented vir...
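The tilt-to-navigation mapping referenced above can be sketched as follows. The gains, the slope threshold and the terrain query are invented for the illustration, and the simple slope test merely stands in for the ant-based behavior model.

```python
# Simplified tabletop navigation: left/right tilt turns the shared avatar,
# forward/back tilt moves it along its current heading, and a crude slope
# test stands in for the ant-based behavior model.
import math

def step_avatar(x, y, heading, roll, pitch, height_at,
                turn_gain=1.5, speed_gain=2.0, max_slope=0.4, dt=1.0 / 60.0):
    heading += turn_gain * roll * dt                 # tilt left/right: re-orient
    speed = speed_gain * pitch                       # tilt forward/back: translate
    nx = x + speed * math.cos(heading) * dt
    ny = y + speed * math.sin(heading) * dt
    dist = math.hypot(nx - x, ny - y)
    slope = abs(height_at(nx, ny) - height_at(x, y)) / max(dist, 1e-6)
    if slope <= max_slope:                           # refuse sharp slopes (e.g. reefs)
        x, y = nx, ny
    return x, y, heading

# Usage with a toy height field sloping gently along x.
x, y, heading = 0.0, 0.0, 0.0
x, y, heading = step_avatar(x, y, heading, roll=0.1, pitch=0.8,
                            height_at=lambda px, py: 0.02 * px)
print(round(x, 4), round(y, 4), round(heading, 4))
```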
