



Low-cost wearable low-vision aid using a handmade retinal light-scanning microdisplay

Ryland C. Bryant
Eric J. Seibel
Cameron M. Lee
Konrad E. Schroder

Abstract — The Wearable Low Vision Aid (WLVA) is a portable system that uses machine vision to track potential walking hazards for the visually impaired. A scanning fiber display couples a laser diode to a vibrating optical fiber that projects a virtual image onto the retina to display warning icons that the visually impaired can recognize. Initial low-vision subject testing has given promising results for this low-cost assistive device.

Keywords — Low vision, legally blind, microdisplay, machine vision, wearable computer.

1 Introduction

The visually impaired have great difficulty navigating and avoiding obstacles as they walk, even when using a cane or seeing-eye dog, and especially under low light levels. Both Massof1 and Peli2,3 have demonstrated the possibility of combining video cameras and head-mounted displays (HMDs) to create portable low-vision-enhancement devices. Massof developed a video see-through system capable of correcting for a number of vision disorders. A video see-through system projects an image from a head-mounted video camera to an HMD that occludes the surrounding view. Peli used brighter light sources for an optical see-through system that may provide lower-contrast overlaid images while not occluding the surrounding view.

Creating a portable, low-cost assistive device to aid the visually impaired is the goal of the Wearable Low-Vision Aid (WLVA) project. The prototype WLVA (Fig. 1) uses machine vision to identify walking hazards4,5 and a see-through head-mounted scanning fiber display6 to present icons indicating the location of potential hazards. The scanning fiber display7 projects laser light through a vibrating optical fiber in order to project an image onto the retina. In this paper we describe the engineering of a low-cost portable WLVA that incorporates infrared (IR) illumination and efficient machine-vision algorithms to identify potential walking hazards, and a scanning fiber display to present bright icons to warn the user.

2 Device development

The Wearable Low Vision Aid (WLVA) consists of three major components: a head-mounted display, backpack-mounted equipment, and software. The HMD (Fig. 2) incorporates the handmade scanning fiber display and optics mounted in a tube on one side of a spectacle frame, and a video camera with IR light-emitting diodes (LEDs) mounted on the other side. The backpack-mounted equipment consists of a laptop computer, an embedded processor, and hardware to drive the scanning fiber display.

Revised version of a paper presented at the 2004 SID International Symposium held May 25–27, 2004, in Seattle, Washington.

The authors are with the Human Interface Technology Laboratory, 215 Fluke Hall, UW Mailbox 352142, Seattle, WA 98195-2142; telephone 206/543-5075, e-mail: [email protected].

© Copyright 2004 Society for Information Display 1071-0922/04/1204-0397$1.00

FIGURE 1 — The Wearable Low Vision Aid system including laptop computer, fiber scanner control hardware, head-mounted scanning fiber display, camera, and IR LEDs.

FIGURE 2 — Camera with ring of IR LEDs (left) and head-mounted display (right).

Journal of the SID 12/4, 2004 397

The software includes a machine-vision program run on the laptop computer to identify potential collisions, an embedded program to control the scanning fiber display, and a graphical user interface (GUI) to facilitate setting parameters for the embedded processor and generating easily recognizable icons.

2.1 Spectacle-frame components

The main components described below (the scanning fiber display, the video camera with IR LEDs, and the brightness-control knob) were not designed for low weight; the spectacle frame with attachments shown in Fig. 2 weighs 470 g.

2.1.1 Scanning fiber display

The scanning fiber display consists of two bimorph piezoelectric actuators (American Piezo Ceramics), the optical fiber, and lenses. The small-core optical fiber is attached to the end of the fast-scan piezo, which is coupled orthogonally to the slow-scan piezo (Fig. 3). The slow-scan piezo is cut to vibrate at its first mode of resonant vibration. The fast-scan piezo and slow-scan piezo are driven independently at 3 kHz and 60 Hz, respectively, to create the raster scan pattern shown in Fig. 3. The other end of the optical fiber is connected to a laser diode, which modulates the light intensity synchronously with the fiber position in order to create multiple low-resolution icons.
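The two-axis drive described above can be sketched numerically. This is a minimal illustration, not the paper's control code: the fast axis is modeled as a resonant sine and the slow axis as a linear ramp (the actual slow-scan piezo is also driven at its first resonant mode), with amplitudes in arbitrary units.

```python
import numpy as np

# Illustrative model of the two-axis scan drive (assumed waveforms).
F_FAST = 3000.0   # fast-scan piezo drive frequency, Hz
F_SLOW = 60.0     # slow-scan piezo drive frequency, Hz

def scan_position(t):
    """Fiber-tip deflection (x, y) at time t, in arbitrary units."""
    x = np.sin(2 * np.pi * F_FAST * t)       # resonant fast axis
    y = 2.0 * ((t * F_SLOW) % 1.0) - 1.0     # slow-axis ramp from -1 to 1
    return x, y

# Each slow-scan period contains F_FAST / F_SLOW fast-axis cycles,
# which sets how many line sweeps make up one raster pass.
cycles_per_frame = F_FAST / F_SLOW
print(cycles_per_frame)  # 50.0
```

How these 50 fast-axis cycles map onto the stated 100 × 39 pixel format depends on drive details (e.g., whether both sweep directions draw pixels) that the paper does not spell out.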

The current display creates a scanned object plane of approximately 4 × 3 mm with 100 × 39 separated pixels, displayed at 30 frames per second. Figure 3 shows an example icon projected by the display. The dual-bimorph scanning fiber display is constructed by hand using components that cost less than one dollar.6 The bimorph piezos are cut to an appropriate length and width using a rotary cutter and glued together. The optical fiber (ThorLabs FS-FN-3224) is chemically etched to reduce the fiber diameter and improve scanning dynamics.8 The optical fiber is glued to the tip of the fast-scan piezo. The vibrating end of the fiber is trimmed with a carbon dioxide laser (Synrad Corp.) in order to achieve maximum deflection at the first mode of vibratory resonance for the optical-fiber cantilever.

The scanning fiber display creates an audible hum at the resonant frequency of the fast-scan piezo (3 kHz). Mounting a second piezo next to the fast-scan piezo and vibrating it at the same frequency, but out of phase, results in an interference pattern that significantly attenuates the hum. Preliminary tests have recorded a 14-dB decrease in noise using this method.
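The anti-phase cancellation idea can be illustrated with a short numerical sketch. The 80% amplitude match below is an illustrative assumption, not a measured value; it merely shows how an imperfect cancellation tone still yields a large attenuation of the residual hum.

```python
import numpy as np

# Illustrative anti-phase cancellation of a 3-kHz hum.
fs = 48_000                     # sample rate, Hz
t = np.arange(fs) / fs          # 1 s of samples
hum = np.sin(2 * np.pi * 3000 * t)
# Cancellation tone: 180 deg out of phase, 80% of the hum's amplitude
# (a perfect match would cancel completely; hardware never matches exactly).
anti = -0.8 * np.sin(2 * np.pi * 3000 * t)
residual = hum + anti

def rms(x):
    return np.sqrt(np.mean(x ** 2))

reduction_db = 20 * np.log10(rms(hum) / rms(residual))
print(round(reduction_db, 1))  # 14.0
```

An 80% amplitude match happens to give about the 14-dB reduction the paper reports, but the actual hardware mismatch behind that measurement is not stated.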

A 3.8-cm (1.5-in.) Delrin tube supports the fiber scanner and allows adjustment of the lenses. The scanned laser image is reflected onto the user's retina by a small mirror (or beam splitter for optical see-through mode) mounted to the end of the tube. Because the WLVA user can see the surroundings at all times, situational awareness is maintained. The small-mirror inset or see-through beam-splitter design allows the displayed icon to augment the user's vision. The high brightness of the laser diode, or of a future LED source, makes this display suitable for use in outdoor conditions with these various display modes.9

2.1.2 Video camera and IR LEDs

A color video camera with a ring of 24 IR LEDs is mounted on the right side of the HMD. An optical filter is mounted in front of the camera lens to block visible light. Custom circuitry synchronizes illumination of the IR LEDs with alternating video frames. The video is captured and processed in real time by the laptop computer to identify and locate potential hazards up to 12 ft away. The head-mounted video camera is angled down slightly to capture hazards from ground level up to head level.

2.1.3 Brightness control

A knob located behind the camera allows the user to reduce the brightness of the display by adjusting power to the light source for darker indoor or nighttime use. Currently, the display brightness is set for upcoming hazard-avoidance testing in bright indoor lighting conditions, according to our approved human-subjects testing protocol involving low-vision volunteers.

2.2 Backpack-mounted equipment

The backpack-mounted equipment includes a laptop computer and an aluminum case containing control hardware and batteries. The total weight of the backpack equipment is 4.5 kg.

FIGURE 3 — Diagram of scanning fiber, scanned pattern, and photograph of projected test icon.


2.2.1 Laptop computer

The laptop computer is a Dell Latitude with a 1.8-GHz processor. A video capture card (Dazzle Digital Video Creator 80) captures video at a rate of 30 frames per second. The laptop communicates with the embedded processor through the serial port.

2.2.2 Control hardware

Custom hardware was developed to control the scanning fiber display so that the laptop computer could be dedicated to time-critical machine-vision hazard-detection algorithms. The hardware used to control the scanning fiber display consists of an embedded processor, a first-in first-out (FIFO) frame buffer, and other discrete components. An Atmel ATMEGA128 generates the frequencies to drive the piezos and the synchronization signals to keep the frame aligned, and handles communication with the laptop and the FIFO frame buffer.

The scanning fiber display generates binary (uniform-brightness) images but could easily be adapted to display gray-scale images. Static icons (a single frame projected repeatedly) or dynamic icons (a sequence of several frames) can be displayed. The FIFO frame buffer (IDT 7208) facilitates storage and retransmission of the pixel data (icons or text). The pixel clock frequency (laser modulation frequency) is divided down by the ATMEGA128 programmable counters to generate the horizontal-scan frequency and the vertical-scan frequency. This method keeps all of the clock signals in phase and eliminates the phase-locked loop used in the original bench-top prototype.6 The ATMEGA128 programmable counters also generate horizontal and vertical synchronization pulses that are used to adjust the frame alignment.
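The divide-down scheme above can be sketched with simple arithmetic. The divisor values here are assumptions inferred from the stated 100-pixel lines, 3-kHz fast scan, and 60-Hz slow scan; the paper does not give the actual counter settings.

```python
# Illustrative divide-down clocking: all scan timing is derived from one
# pixel clock by integer division, so every signal stays in phase.
PIXELS_PER_LINE = 100
H_SCAN_HZ = 3000          # fast-scan (horizontal) frequency
V_SCAN_HZ = 60            # slow-scan (vertical) frequency

pixel_clock_hz = PIXELS_PER_LINE * H_SCAN_HZ      # laser modulation rate
h_divisor = pixel_clock_hz // H_SCAN_HZ           # counter value for H sync
v_divisor = H_SCAN_HZ // V_SCAN_HZ                # counter value for V sync

print(pixel_clock_hz, h_divisor, v_divisor)  # 300000 100 50
```

Because every frequency is an exact integer division of the pixel clock, no phase-locked loop is needed to keep the modulation and scan signals aligned.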

A ThorLabs (LPS-3224-635) 3-mW pigtailed red laser diode (633-nm wavelength) is the light source. For safety reasons the laser diode is driven at only 5% of its rated power during development, creating an intensity of 230 µW/cm². A reading-performance study in the see-through mode10 showed that blue (458-nm) light may be easier for low-vision subjects to see in the augmented display mode. The light source could be changed to a blue laser diode at additional cost. LED brightness has increased rapidly in recent years, making pigtailed LEDs the preferred light source in the future.

2.3 Software

The WLVA software includes machine-vision programs5 running on the laptop computer and a scanning fiber display control program running on the embedded processor, with a GUI that facilitates initial setup of the embedded processor.

FIGURE 4 — An IR-illuminated frame showing a brightly lit nearby obstacle and ambient light.

FIGURE 5 — Non-IR-illuminated frame showing bright ambient light sources from fluorescent lights and sun shining through windows.

FIGURE 6 — Differential frame with ambient lighting significantly reduced.


2.3.1 Machine-vision program

The machine-vision program processes the video captured from the video camera. An alternating IR-flash video-capture technique is used to discern between objects in the foreground and objects in the background. Two consecutive frames are captured; one illuminated by IR LEDs (Fig. 4) and the other not illuminated by IR LEDs (Fig. 5). The non-illuminated frame is subtracted from the illuminated frame to generate a differential measurement of luminance (Fig. 6). This differential frame represents the light returning to the camera from the IR LEDs. Objects in the foreground reflect more light than objects in the background; therefore, close objects appear brighter in the differential image.
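The differencing step can be sketched in a few lines. This is a minimal illustration assuming 8-bit grayscale frames; the subtraction saturates at zero so that regions brighter in the unlit frame cannot wrap around.

```python
import numpy as np

# Illustrative IR-flash frame differencing.
def differential_frame(lit, unlit):
    """Subtract the non-illuminated frame from the IR-illuminated one."""
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Toy 2 x 2 example: the top-left "near obstacle" pixel returns much more
# of the IR flash; ambient-only pixels mostly cancel out.
lit   = np.array([[200, 60], [55, 50]], dtype=np.uint8)
unlit = np.array([[ 40, 58], [60, 50]], dtype=np.uint8)
print(differential_frame(lit, unlit))
```

In the differential image, bright pixels correspond to strong IR returns, i.e., nearby objects, while steady ambient sources (fluorescent lights, sunlit windows) largely subtract away.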

Bright areas are identified and then analyzed by their intensity, size, and position. The characteristics of these bright areas are tracked over several frames and compared to a collision template. This collision template is generated from previously recorded collisions and can be tailored for different setups or users. The program generates an index describing how closely the data being analyzed match the collision template. This index is then used to rate the likelihood of a collision. If an object is determined to be on a collision course, the program sends a message to the display controller to display an icon in the general area of the object of interest.
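A template-matching index of this kind might look like the sketch below. The feature choices, the normalization, and the exponential scoring are all hypothetical; the paper does not describe its actual index computation.

```python
import numpy as np

# Hypothetical collision-index sketch: a tracked bright area contributes
# one (intensity, size, vertical position) sample per frame, and the
# trajectory is compared to a template recorded from known collisions.
def collision_index(track, template):
    """Return a 0..1 index; values near 1 mean the track matches."""
    track = np.asarray(track, dtype=float)
    template = np.asarray(template, dtype=float)
    # Mean per-feature deviation, normalized by the template's scale.
    scale = np.abs(template).mean(axis=0) + 1e-9
    deviation = np.abs(track - template).mean(axis=0) / scale
    return float(np.exp(-deviation.mean()))

# A growing, brightening area moving lower in the frame: an approach
# pattern similar to the template scores higher than a static area.
template = [(80, 10, 0.6), (120, 20, 0.5), (180, 40, 0.4)]
approach = [(85, 11, 0.6), (125, 19, 0.5), (175, 42, 0.4)]
static   = [(80, 10, 0.6), (80, 10, 0.6), (80, 10, 0.6)]

assert collision_index(approach, template) > collision_index(static, template)
```

Whatever the exact formula, thresholding such an index is what lets the system trade off false alarms against missed hazards, as discussed below for the false-negative requirement.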

This method is a simple yet efficient way of recognizing pertinent obstacles. In an ideal situation this software would have a low false-negative response to collision hazards. This would allow it to be used in conjunction with other image-recognition techniques to critically evaluate hazards without the overhead of processing entire video sequences.

2.3.2 Embedded program

The embedded software facilitates loading icons into the FIFO frame buffer, selects which icon to display, and controls the parameters for adjusting the scanning fiber display.

During development or user customization, icons are transferred from the laptop computer to the embedded processor through a serial link, and then to the FIFO frame buffer for projection. The embedded software adjusts the frame alignment by changing the timing of data output by the FIFO frame buffer. After the customization is complete, several sets of icons are stored in the non-volatile program memory of the embedded processor. Several different hazard icons can be customized to the user's preference and environment. The icons can be displayed in various locations in the visual field to warn the user of the location and proximity of potential walking hazards.

2.3.3 Graphical user interface

A Matlab GUI is used to communicate with the embedded processor during initial setup (Fig. 7). Icons can be developed quickly on the laptop computer and saved as bitmap files. The setup GUI can import the bitmap icons, group the icons, and transmit the icons to the embedded processor. The setup GUI permits changing of the embedded processor's parameters without the need to recompile and download the embedded software. The icons can be designed to flash or increase in size to indicate increasing probability of collision.
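The icon-preparation step might be sketched as follows. Since the display is binary, a bitmap only needs thresholding and bit packing before transfer; the 100 × 39 format is taken from the display specification, but the row-major bit-packing order and the `pack_icon` helper are assumptions, as the actual serial payload format is not described.

```python
import numpy as np

# Hypothetical icon preparation: threshold a grayscale bitmap to a
# binary icon and pack each row into bytes for the frame buffer.
ICON_W, ICON_H = 100, 39   # display pixel format from the paper

def pack_icon(gray, threshold=128):
    """Threshold a (39, 100) grayscale array and pack rows into bytes."""
    bits = (np.asarray(gray) >= threshold).astype(np.uint8)
    assert bits.shape == (ICON_H, ICON_W)
    return np.packbits(bits, axis=1).tobytes()

icon = np.zeros((ICON_H, ICON_W), dtype=np.uint8)
icon[10:30, 40:60] = 255            # a simple filled-box test icon
payload = pack_icon(icon)
print(len(payload))  # 39 rows x 13 bytes (ceil(100 / 8)) = 507
```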

FIGURE 7 — A graphical user interface facilitates icon development and parameter adjustment.

FIGURE 8 — Icon groups (numbered from 1 to 6 from the top) of eight icons each.


3 Test methods

All testing was conducted according to approved protocols from the University of Washington Human Subjects Division. Four legally blind subjects were brought in to test a bench-mounted scanning fiber display. Two subjects were unable to use the display, possibly due to reduced sensitivity to red light. Two subjects were able to distinguish between different icons projected in the display.

In the initial testing session, a number of icons were projected through a benchtop display to allow easy adjustment of the optics for each test subject. Subject 1 was shown each of the icons shown in Fig. 8 except for group 3. The subject was asked to identify the location of the highlighted rectangle in groups 1 and 2. Several letters of the alphabet (groups 4–6) were displayed, and the subject was asked to identify which letter was shown. Subject 2 was asked to identify letters first, followed by the highlighted rectangle location (in groups 1–3, Fig. 8).

After 1–2 weeks, the follow-up testing session required each subject to use the HMD and identify the group 3 icons. Each of these icons represents where in the video frame an obstacle was detected. The open-box or “all clear” icon was used to indicate that no obstacles were currently detected. The “all clear” icon was chosen in favor of a blank icon in order to give the subjects something to maintain focus on, permitting rapid recognition of the hazard icons. After the subjects were comfortable with quickly identifying each of the icons, the subjects put on the backpack equipment (Fig. 9) in order to test the complete system. As the subjects walked down an indoor hallway, several obstacles, such as a chair, a garbage pail, and a wooden board, were placed in their assumed path. Subjects were instructed to stop walking and indicate when they saw a hazard icon.

4 Test results

Table 1 summarizes the results from initial testing of both subjects, each having retinal scarring as their vision disability. The table lists the number of correctly identified icons vs. the number of icons projected for each group. Both subjects returned for a follow-up session that included testing the complete system.

4.1 Subject 1 initial test

The subject was shown icons from group 1 of Fig. 8. The subject easily identified the icons without being instructed what the icons looked like. The subject's recognition of the inverse icons from group 2 seemed to be faster. The faster recognition may be due to the subject having more practice identifying the icons rather than the icons being easier to see. The group 3 icons were not developed until after the initial test of Subject 1. The subject identified all of the letters in group 4 correctly, but had more difficulty identifying the inverse lettered icons; 12 of 16 letters in groups 5 and 6 were identified correctly.

4.2 Subject 2 initial test

The icons from group 4 were shown to Subject 2 first. The subject incorrectly identified the inverse B icon as an R and said that the icon was very difficult to see because it saturated her vision. All of the icons from group 5 were identified correctly. Three of five group 1 icons were incorrectly identified as letters. The subject may have been expecting to see letters displayed, since 12 of the first 13 icons were letters, leading her to misidentify the box icons from group 1. The subject was instructed to identify the location of the illuminated box in a 3 × 3 matrix of boxes, similar to a tic-tac-toe board with one square completely filled. Subsequently, the subject identified all 11 icons displayed from group 1 correctly. The subject independently identified the change to the 3 × 2 matrix of boxes in group 3, and correctly identified all seven icons shown.

4.3 Subject 1 follow-up test

The subject was fitted with the HMD and asked to identify the group 3 icons. After making minor adjustments to the optics, the subject was able to quickly and accurately identify all icons from group 3. The subject walked down a well-lit hallway while objects were placed in the subject's path approximately 10–15 ft ahead of her. The subject was instructed to stop when identifying a projected hazard icon or when encountering an obstacle. The subject was able to detect and stop before colliding with a chair placed in her path. In each trial, the subject received a hazard indication and was able to stop about an arm's length from the chair. The subject was instructed to walk around the chair and successfully navigated around it several times. Therefore, under indoor lighting conditions, the hazards were detected by the machine-vision system early enough to warn the user so that she could stop at arm's length from the potential hazard. This separation distance was ideal because the user could easily investigate the hazard by touch, then turn her head to find a clear pathway around it.

FIGURE 9 — A low-vision subject testing the Wearable Low Vision Aid as the author places obstacles in her path. (Photograph printed with subject's permission.)

TABLE 1 — Summary of initial testing results from two subjects. Number of correctly identified icons/number of icons displayed.

           Subject 1   Subject 2
Group 1    6/6         2/5, 11/11
Group 2    8/8         NA
Group 3    NA          7/7
Group 4    8/8         4/5
Group 5    6/8         8/8
Group 6    6/8         8/8

The garbage pail was not detected on any of the trials, most likely because the head-mounted camera was too high to detect it. A 1.9 cm × 22.9 cm × 1.5 m (0.75 in. × 9 in. × 5 ft) wooden board was placed against the wall at approximately a 45° angle. The subject was unable to detect the board when the narrow 1.9-cm (0.75-in.) edge faced the subject. The subject was able to detect and avoid the board after it was turned so that the 22.9-cm (9-in.) width faced the subject. The board was then held at waist, mid-chest, and head level with both the narrow and wide edges facing the subject. In each trial, the subject was able to detect and avoid the board.

The subject was permitted to explore the area freely under supervision and found a new method of using the WLVA. The subject walked up to the wall and stopped a few feet from it. The subject then walked along the wall and was able to locate corners and open doors when the display changed to an “all clear” icon.

4.4 Subject 2 follow-up test

The subject was fitted with the HMD and asked to identify group 3 icons. The subject had a little difficulty initially, but performed well after a slight optics adjustment. The subject could not easily distinguish between the center–center icon and either the upper- or lower-center icons. The subject suggested using a more distinctive icon for the center–center icon, so it was changed to the letter “O” from group 6. After the change, the subject correctly identified all 13 icons displayed.

The testing procedure was similar to that for Subject 1. Subject 2 was unable to detect a garbage pail or the chair placed in her path. A board held at mid-chest and eye level was detected on both the broad side and the edge. A board held at waist level was not detectable. The subject was successful at locating open doors and corners while walking along the wall.

5 Discussion

This WLVA prototype is the first display system to be built using a scanning fiber display, and these initial test results are the first published for such a wearable assistive device. During indoor testing, two pre-selected subjects were able to avoid hazards above waist height using the WLVA. The IR-flash illumination provides enhanced functionality at night, efficient hazard-detection capability, and early warning for a broad range of hazards. The scanning fiber display's high brightness, light weight, see-through design, and low cost make it an ideal choice for this application. The scanning fiber with laser-diode source makes it possible to augment a person's vision without obscuring their natural visual ability. Although the icons have a low pixel count, the user can pre-select the icons for specific warnings, making recognition rapid. This enhancement of the visual system is preferred to using an auditory warning that could obscure natural auditory cues that are critical to the visually impaired.

The WLVA system is designed for low cost and versatility. The monocular near-to-eye display can be added to a pair of spectacle lenses with fiber-scanning display components costing less than one dollar. Therefore, the augmented display can be considered disposable, eliminating the need for repair. The laptop computer is commercial off-the-shelf technology that can be used in fixed locations such as school, work, and home when not being used for machine-vision hazard detection.

It is not currently feasible to run both collision detection and optical character recognition (OCR) simultaneously on the type of notebook used in the prototype. The system could easily be switched between navigation mode (the current implementation) and text-examination mode, using the system as a traditional OCR device. The most computationally expensive components of the collision-detection software, however, are well suited to a hardware implementation, which would free the computer for other tasks. The OCR capability required for the text-to-speech function has likewise been implemented in hardware.

If the input data are sufficiently preprocessed by the individual hardware devices before being presented to the notebook computer, many such devices could be used simultaneously. One imagines various input devices, such as a GPS receiver, a handheld text-input device, a handheld range-finding device, and several head-, body-, and extremity-mounted or handheld collision-detection devices (such as cameras); various output devices, such as fiber-scanned optical displays, audio displays, and tactile displays; and all of these operating as thin peripherals to the central control device, which could be a notebook computer or even a lightweight device such as a PDA.

Initial tests of the ongoing study with low-vision persons have proven moderately successful. Subject 1 was quite successful at detecting and avoiding many obstacles. The system had difficulty detecting small obstacles near the ground, possibly due to the limited illumination range of the IR LEDs. Moving the camera to a hip mount or handheld tool and increasing the number of IR LEDs may improve the system's detection range and field. A shoe-mounted camera would be able to cover the most difficult machine-vision challenge of uneven terrain.11 The vision-recognition system operates poorly in bright outdoor conditions because considerable light saturates the camera image. A better optical notch IR filter may reduce the stray IR light entering the camera, allowing for a better differential illumination measurement and improving the obstacle-detection capability.

Unlike Subject 1, who could easily detect chairs in her path and discovered a new method for finding open doorways, Subject 2 had difficulty. Subject 2 tends to walk with her head turned slightly to the right, in order to take advantage of the better vision in her left eye. The machine-vision program assumes that the camera is moving in the same direction as the user. The slight head turn causes the camera to move at an angle to potential obstacles, possibly confusing the machine-vision software. Adjusting the camera angle for this user is expected to improve performance of the machine-vision portion of the WLVA.

Many low-vision people have reduced sensitivity to red light, and earlier work has shown that red is the least preferred display color for a low-vision aid.12 Recently, sufficient optical power was generated using LumiLeds green (LXHL-LM3C, 530 nm) and blue (LXHL-LR3C, 455 nm) LEDs coupled to a multimode optical fiber. The LXHL LED power consumption is about five times that of the red laser diode, but the increased selection of monochromatic light sources allows customizing the WLVA to the user's specific color sensitivity. LEDs also provide longer lifetime and increased durability at about one-tenth the cost of laser diodes. Future work will include coupling a blue LED to a small-core multimode optical fiber and a redesign of the optics to create a small exit pupil in order to improve reading performance through both the extended depth of field12 and the increased visibility of overlaid blue text in the augmented-reality mode.10

Another study by the Human Interface Technology Laboratory has developed a small (3-mm diameter × 30-mm long) spiral scanning fiber system capable of capturing a 250 × 250 pixel image.13 Recently, we have produced scanners half this size with twice the resolution at close to video frame rates, which can be mounted inconspicuously on a spectacle frame. Moving the camera to a hip-mounted or handheld unit would make the HMD nearly indistinguishable from a normal pair of glasses. The spiral scanners use much smaller piezoelectric tube actuators and do not generate an audible hum. Therefore, replacing the dual-bimorph scanner with the new spiral scanner would significantly reduce the weight, size, and noise while maintaining low cost and increasing display resolution by over 50 times.
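The "over 50 times" figure is consistent with the pixel counts given, under the assumption that "twice the resolution" of the 250 × 250 spiral scanner means 500 × 500 pixels:

```python
# Sanity check of the resolution-increase claim (assumed 500 x 500 for
# the newer spiral scanner, i.e. twice 250 per axis).
current_pixels = 100 * 39       # dual-bimorph display format
spiral_pixels = 500 * 500       # assumed newer spiral scanner format

ratio = spiral_pixels / current_pixels
print(round(ratio, 1))  # 64.1
```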

Table 2 shows the costs of the equipment used to build the prototype WLVA. Replacing the laser diode with high-brightness LEDs or using a less-expensive laptop computer would reduce costs further. If the user did not need a fully functioning laptop computer, it could be replaced by dedicated hardware, dramatically reducing cost and size.

Acknowledgments

The authors would like to thank the following people: Robert Burstein for electrical engineering oversight and noise-cancellation work; Mark Fauver for micromachining the optical fibers into resonant optical scanners; Janet Crossman-Bosworth for optics design; Jeff Magula for spectacle-frame design and manufacture; Kris Lawrence for initial usability design and testing; and Ross Melville for LED light-source design. This research was funded by an NSF Research Grant to Eric Seibel (#9978888, Research to Aid Persons with Disabilities), with Research Experience for Undergraduates (REU) supplements allowing undergraduate students to contribute significantly. For additional information, go to www.hitl.washington.edu/projects/wlva.

TABLE 2 — System component costs.

Laptop computer              $2000
Video capture device           $60
Pigtailed laser diode         $425
Video camera with IR LEDs     $150
Embedded processor             $20
Discrete chips and PCB        $150
Optical fiber                  $10
Piezos                          $2
Lenses and head mount         $300
TOTAL                        $2947

References

1 R W Massof, “Electro-optical head-mounted low vision enhancement,” Practical Optometry 9:6, 214–220 (1988).
2 E Peli, “Head-mounted display as a low vision aid,” Proc 2nd Ann Intl Conf on Virtual Reality and Persons with Disabilities, 115–122 (1999).
3 F Vargas Martin and E Peli, “Augmented view for tunnel vision: Device testing by patients in real environments,” SID Symposium Digest 32, 602–605 (2001).
4 J Andersen and E Seibel, “Real-time hazard detection via machine vision for wearable low vision aids,” 5th Intl Symp on Wearable Computers ISWC, 182–183 (2001).
5 C M Lee, K E Schroder, and E J Seibel, “Efficient image segmentation of walking hazards using IR illumination in wearable low vision aids,” 6th Intl Symp on Wearable Computers ISWC, 127–128 (2002).
6 E J Seibel, S S Frank, M Fauver, J Crossman-Bosworth, J R Senour, and R Burstein, “Optical fiber scanning as a microdisplay source for a low-cost, wearable low vision aid,” SID Symposium Digest 33, 338–341 (2002).
7 S S Frank and E J Seibel, “Design of a vibrating fiber display for a wearable low vision aid,” ASME International Mechanical Engineering Congress and Exposition BED-Vol. 51, 189–190 (2001).
8 M Fauver, J Crossman-Bosworth, and E J Seibel, “Microfabrication of fiber optic scanners,” Optical Scanning II, Proc SPIE 4773, 102–110 (2002).
9 S-K V Lin, E J Seibel, and T A Furness III, “Testing visual search performance using retinal light scanning as a future wearable low vision aid,” Intl J Human-Computer Interaction (two-volume special issue on Mediated Reality) 15(2), 247–265 (2003).
10 E J Seibel, C-C Gau, S McQuaide, S J Weghorst, J P Kelly, and T A Furness III, “Augmented retinal light scanning display for low vision: effect of text color and background on reading performance,” Opt Soc America's Topical Meeting on Vision Science and Its Applications, Technical Digest, 51–54 (2001).
11 P Fitzpatrick and C C Kemp, “Shoes as a platform for vision,” 7th Intl Symp on Wearable Computers ISWC, 231–234 (2003).
12 C P Kleweno, E J Seibel, E S Viirre, J P Kelly, and T A Furness III, “The virtual retinal display as a low vision computer interface: A pilot study,” J Rehabilitation Research and Development 38(4), 431–442 (2001).
13 Q Y J Smithwick, J Vagners, P G Reinhall, and E J Seibel, “Modeling and control of the resonant fiber scanner for laser scanning display or acquisition,” SID Symposium Digest 34, 1455–1457 (2003).

Ryland C. Bryant received his B.S. and M.S. degrees in mechanical engineering from the University of Washington in 2001 and 2003, respectively. He joined the Human Interface Technology Laboratory at the University of Washington as a graduate student and now works as a research engineer specializing in scanning fiber displays, scanning fiber endoscopes, embedded computing, and micromachining.

Eric J. Seibel received his B.S. degree from Cornell University, Ithaca, NY, in 1983, his M.S. degree in mechanical engineering from the University of California, Berkeley, CA, in 1984, and his Ph.D. degree in bioengineering in 1996 after working in the ophthalmic medical device industry for 4 years. Currently he is a Research Assistant Professor, Mechanical Engineering, and advisor for several graduate and undergraduate students as faculty at the University of Washington, Seattle, WA, in the fields of mechanical and biomedical engineering. His research projects encompass the development of novel techniques of optical scanning for image acquisition and display.

Cameron M. Lee received his B.S. in electrical engineering from the University of Washington, Seattle, in 2003. He is currently a research engineer at the Human Interface Technology Laboratory at the University of Washington, specializing in hardware and software development.

Konrad E. Schroder received his B.S. degree with distinction in mathematics and his B.A. in Latin from the University of Washington, Seattle, in 1995. Currently he works as an information-technology specialist at the Human Interface Technology Laboratory at the University of Washington.
