computer methods and programs in biomedicine 96 (2009) 226–233
journal homepage: www.intl.elsevierhealth.com/journals/cmpb

CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses

Wolfgang Fink*, Mark A. Tarbell
Visual and Autonomous Exploration Systems Research Laboratory, Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA

Article history: Received 22 February 2009; received in revised form 10 June 2009; accepted 26 June 2009.

Keywords: Artificial vision prostheses; Retinal implants; Image processing; Autonomous navigation; Robotics; Tele-commanding; Self-commanding; Cloud computing; Worldwide accessibility

Abstract

While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an AWD, remote-controllable, mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded to visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing to additional sensors. Its Internet connectivity renders CYCLOPS a worldwide accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers.

© 2009 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.cmpb.2009.06.009

* Corresponding author at: Visual and Autonomous Exploration Systems Research Laboratory, California Institute of Technology, 1200 East California Blvd, Mail Code 103-33, Pasadena, CA 91125, USA. Tel.: +1 626 395 4587. E-mail address: [email protected] (W. Fink). URL: http://autonomy.caltech.edu (W. Fink).

1. Introduction

While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a realistic functional approximation of a blind subject. Commonly, the process of "experiencing" the visual perception of a blind person with a vision implant is emulated by having normal subjects with a healthy retina look at a low-resolution (pixelated) image on a computer monitor or head-mounted display.
This is a rather inadequate emulation, as a healthy retina with approximately 10^9 photoreceptors can glean more information from a pixelated image (e.g., edges, edge transitions, grayscale information, and spatial frequencies) than an impaired retina in which the photoreceptor layer is dysfunctional due to diseases such as retinitis pigmentosa and age-related macular degeneration.

A more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. Such a mobile robotic platform, described in the following, represents not only a constantly available testbed for real-time image processing systems, but even more so provides a subject-independent means for testing and validating the efficiency and utility of real-time image processing and autonomous navigation algorithms for enhanced visual perception and independent mobility for the blind and visually impaired using artificial vision prostheses.

The current state-of-the-art and near-future artificial vision implants, such as epi-retinal and sub-retinal implants [1–9] (Fig. 1), provide only tens of stimulating electrodes, thereby allowing only for limited visual perception (pixelation). Usually these implants are driven by extraocular [8,9] or intraocular [10] high-resolution digital cameras that ultimately result in orders of magnitude smaller numbers of pixels that are relayed to the respective implant in use. Hence, real-time image processing and enhancement will afford a critical opportunity to improve on the limited vision afforded by these implants for the benefit of blind subjects.

Since tens of pixels/electrodes allow only for a very crude approximation of the roughly 10,000 times higher optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, become very important, as opposed to picture details such as object texture. Image processing systems (Fig. 2), such as the Artificial Vision Simulator (AVS) [11,12], perform real-time (i.e., 30 fps) image processing and enhancement of camera image streams before they enter the visual prosthesis (Fig. 3). Moreover, such image processing systems must provide the flexibility of repeated application of image manipulation and processing modules in a user-defined order. Thus, current and future artificial vision implant carriers can customize the individual visual perception generated by their visual prostheses by actively manipulating parameters of individual image processing filters or altering the sequence of these filters.

Fig. 1 – One instantiation of an artificial vision prosthesis. An intraocular retinal prosthesis using an external microelectronic system to capture and process image data and transmit the information to an implanted microelectronic system. The implanted system decodes the data and stimulates the retina via an electrode array with a pattern of electrical impulses to generate a visual perception.

2. Hardware description

For the purpose of creating a subject-independent mobile testbed for image processing and autonomous navigation algorithms for artificial vision prostheses, we have created CYCLOPS, an All-Wheel Drive (AWD) remote-controllable robotic platform testbed with wireless Internet connectivity and a fully articulated digital camera with wireless video link (Fig. 4) [13]. For the basic robotic hardware we utilized a WiFiBoT [14]. The WiFiBoT has a 4G Access Cube, which serves as the central onboard processor, controlling four electric motors.

We custom-built CYCLOPS by equipping it with:

• Bi-level metal chassis and sensor trays.
• General-purpose, high-performance mini Unix workstation (i.e., Mac mini).
• Rechargeable battery for the Unix workstation.
• Two rechargeable batteries for the wheel motors.
• Gimbaled IP camera that is user-controllable.
• IEEE 1394 navigation camera with wide-angle field of view.
• Two forward-looking IR proximity sensors.
• Real-time voice synthesizer interface.
• Wireless Internet capability.


Fig. 2 – Schematic diagram of a real-time image processing system for artificial vision prostheses (e.g., AVS [11,12]) that are driven by extraocular [8,9] or intraocular [10] camera systems.


Fig. 3 – Typical palette of image processing modules (e.g., employed by AVS [11,12]) that can be applied in real time to a video camera stream driving an artificial visual prosthesis.

3. Testbed implementation

CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and battery life (i.e., 4.5 h for the onboard mini Unix workstation and 2 h for the electric motors), CYCLOPS can perform complex and numerically intensive calculations, such as:

• Testing and validation of image processing systems, such as AVS [11,12], to further the experience of visual prosthesis users.
• Testing of navigation algorithms and strategies to improve the degree of unaided mobility.
• Testing of additional sensors (e.g., infrared) to further the utility of visual prostheses.

3.1. Cloud computing

To enable testing of real-time image processing modules, individually or in sequence, and to enable the transmission of the resulting remote-control navigation sequences, the mobile platform must establish a connection between itself and the computer hosting the control software. A standard, direct one-to-one connection could be established, but this is fragile, as either system may not be reachable or known at the moment the connection attempt is made. Instead, a "cloud computing" concept is utilized [15,16], wherein the mobile platform connects to one or more known "Com Servers" (Fig. 5). The Com Servers are known, established Internet entities to which both the mobile platform and the controlling computer system connect, acting as a go-between and buffer. In this way, neither end need know the actual IP address of the other, yet an Internet connection is still established between them, with auto-reconnect in case of connection dropouts.

The cloud computing approach affords a great deal of architectural flexibility; many operational modes are supported, from fully synchronous to fully autonomous. In the case of joystick operation of CYCLOPS, the mobile platform is in synchronous connection to the Com Server. However, this is not strictly required for other modes of operation. Minimally, constant connectivity is not required, as a temporary connection would be sufficient to upload from the mobile platform the video and sensor data resulting from prior command sets, and to download to the mobile platform further operational commands (e.g., navigation commands), which would then be executed autonomously. A fully "offline" autonomous operational capability, i.e., independent real-time onboard processing, is currently in development for CYCLOPS.
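The Com Server protocol itself is not specified in this paper; the sketch below, in Python, illustrates only the relay idea under stated assumptions: both endpoints dial out to a known relay host and reconnect automatically after dropouts. The host name, port, and retry delay are hypothetical placeholders, not CYCLOPS parameters.

```python
import socket
import time

COM_SERVER = ("comserver.example.org", 9000)  # hypothetical relay host and port
RECONNECT_DELAY_S = 2.0                       # assumed retry interval


def connect_to_com_server() -> socket.socket:
    """Dial out to the Com Server, retrying until a connection is made.

    Because both the robot and the control computer connect outbound to
    the relay, neither side needs to know the other's IP address.
    """
    while True:
        try:
            return socket.create_connection(COM_SERVER, timeout=10)
        except OSError:
            time.sleep(RECONNECT_DELAY_S)  # auto-reconnect after dropouts


def relay_loop(handle_message) -> None:
    """Maintain a session with the Com Server, reconnecting as needed."""
    while True:
        sock = connect_to_com_server()
        try:
            while True:
                data = sock.recv(4096)
                if not data:              # connection dropped by the relay
                    break
                handle_message(data)      # e.g., an uploaded command set
        except OSError:
            pass                          # fall through and reconnect
        finally:
            sock.close()
```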

3.2. Interconnectivity

An Internet TCP/IP connection is established between the CPU aboard the mobile platform testbed (via its wireless LAN) and the computer hosting the image processing and front-end control software via a Com Server. For the purpose of reliability, this connection is instantiated by the creation of a temporary AF_INET stream socket at the transport layer, utilizing a three-way handshake. Once this full-duplex transport layer is established, the mobile platform is able to transmit video frames and other sensor data in a packetized and compressed format over the layer. The mobile platform also transmits its housekeeping data (battery level, hardware sensor data, etc.) and awaits sensor and drive commands from the front-end software that are triggered by the results of the real-time image processing of the video frames (e.g., via AVS).
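As a rough, non-authoritative illustration of such a transport layer (not the actual CYCLOPS implementation), the sketch below opens an AF_INET stream socket and length-prefixes each compressed payload so that video frames, sensor readings, and housekeeping records can share one full-duplex connection; the 4-byte length prefix and zlib compression are assumptions made for the example.

```python
import socket
import struct
import zlib


def open_transport(host: str, port: int) -> socket.socket:
    """Create an AF_INET (TCP) stream socket; the three-way handshake
    happens inside connect()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    return sock


def send_packetized(sock: socket.socket, payload: bytes) -> None:
    """Compress a payload and prefix it with its length so the receiver
    can recover packet boundaries on the byte stream."""
    blob = zlib.compress(payload)
    sock.sendall(struct.pack("!I", len(blob)) + blob)


def recv_packetized(sock: socket.socket) -> bytes:
    """Read one length-prefixed, compressed packet and return the payload."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return zlib.decompress(_recv_exact(sock, length))


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("transport closed")
        buf += chunk
    return buf
```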

Fig. 4 – CYCLOPS, an AWD remote-controllable robotic platform testbed with wireless Internet connectivity and a fully articulated (user-controllable) digital camera with wireless video link, as well as an IEEE 1394 navigation camera with wide-angle field of view. It is equipped with a general-purpose mini Unix workstation. CYCLOPS is powered by rechargeable batteries. Furthermore, CYCLOPS supports a sensor bay for additional sensors (e.g., infrared).

Fig. 5 – Principle of "cloud computing" [15,16]. A mobile platform connects to one or more known "Com Servers" on the Internet in lieu of a direct end-to-end connection with the controlling computer system. In this way, neither end need know the actual IP address of the other.

3.3. Video and sensor data processing

The video and sensor data are treated similarly; however, the video data are first preprocessed into a suitable data format. This is accomplished by packetizing the video. Each non-interlaced stream frame of video data is compressed and inserted into the payload portion of a header packet, tagging the data as to type, length, timestamp, and sequence. This has the advantage over time-division multiplexing of allowing real-time synchronization to occur on the receiving end with minimal reconstruction processing. The network connection is thus used as a virtual N-receiver broadcast channel, each channel being a Q-ary data channel, providing the same mechanism for video, sensor, and hardware housekeeping data (e.g., battery charge levels).
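To make the header layout concrete, one possible encoding of such a packet is sketched below; the field order, the one-byte type codes, and the use of zlib for frame compression are illustrative assumptions rather than the documented CYCLOPS wire format.

```python
import struct
import time
import zlib

# Hypothetical one-byte type codes for the shared channel mechanism.
TYPE_VIDEO, TYPE_SENSOR, TYPE_HOUSEKEEPING = 1, 2, 3

# Header fields: type (uint8), payload length (uint32), timestamp (float64),
# sequence number (uint32), all big-endian.
HEADER = struct.Struct("!BIdI")


def pack_frame(frame_bytes: bytes, seq: int, ptype: int = TYPE_VIDEO) -> bytes:
    """Compress one non-interlaced video frame and prepend the tag header."""
    payload = zlib.compress(frame_bytes)
    return HEADER.pack(ptype, len(payload), time.time(), seq) + payload


def unpack_packet(packet: bytes):
    """Split a received packet back into (type, timestamp, sequence, payload)."""
    ptype, length, timestamp, seq = HEADER.unpack_from(packet)
    payload = zlib.decompress(packet[HEADER.size:HEADER.size + length])
    return ptype, timestamp, seq, payload
```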

3.4. Navigation commanding

Navigation and camera command transmittal to the mobile platform testbed is accomplished as follows: the platform utilizes two independent pairs of CPU-controlled electric motors, each using a 3-gear reduction module, providing a 4:1 reduction ratio for the increase of applied torque to the wheels and reduced power consumption [14]. Each pair of wheel motors is individually addressable via a command packet that specifies motor speed, direction, duration, and speed limiter. Transmittal of the command packet to the mobile platform is analogous to the preceding, as the existing transport layer is utilized in a full-duplex mode, albeit strictly sequenced for simplicity of processing and performing the commands in order. Commanding the onboard camera follows a similar procedure.
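A wheel-motor command packet of this kind might be encoded as follows; the byte layout, the motor-pair addressing, and the value ranges are not given in the paper, so everything below is a hypothetical illustration.

```python
import struct
from dataclasses import dataclass

# Assumed layout: pair id (uint8), speed (int16), direction (int8),
# duration in seconds (float32), speed limiter (uint16), big-endian.
MOTOR_CMD = struct.Struct("!BhbfH")


@dataclass
class MotorCommand:
    pair_id: int       # 0 = left wheel pair, 1 = right wheel pair (assumed)
    speed: int         # signed speed setting
    direction: int     # +1 forward, -1 reverse (assumed convention)
    duration_s: float  # how long to apply the command
    speed_limit: int   # cap on the speed actually applied

    def encode(self) -> bytes:
        return MOTOR_CMD.pack(self.pair_id, self.speed, self.direction,
                              self.duration_s, self.speed_limit)


# Example: drive the left wheel pair forward for 1.5 s at speed 200, capped at 255.
packet = MotorCommand(0, 200, 1, 1.5, 255).encode()
```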

It should be pointed out that the current input command set (e.g., a set of navigation commands) for the mobile platform differs in form and function from the video and sensor data, which are received. Input commands require comparatively few bytes for expression; thus a simpler, more expedient architecture is utilized. The input command set for the mobile platform forms a Strictly Ordered Command Pipeline (SOCP) set. Such sets form conditional pipeline branch maps, with sequencing precluding the need for individual command prioritization. For example, the mobile platform may be instructed to perform a certain overall movement, e.g., move to the other side of the room. This is translated into a SOCP set resembling a binary tree; it comprises individual robotic movements (turns, motor commands, etc.) to accomplish the overall goal of moving to the other side of the room. If any individual command in the SOCP set cannot be executed, that particular SOCP set is invalidated at that point, causing dependent command sets in the pipeline to be invalidated de facto, thus returning the mobile platform to a "known good" state.
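The invalidation behavior described above can be captured in a few lines; the command representation and the execute callback below are placeholders, since the SOCP data structures themselves are not published here.

```python
def run_socp(commands, execute):
    """Execute a Strictly Ordered Command Pipeline (SOCP) set.

    `commands` is an ordered sequence of primitive robot commands (turns,
    motor commands, etc.); `execute` returns True on success. If any command
    fails, the remainder of the set is invalidated so that dependent commands
    never run and the platform remains in a "known good" state.
    """
    for index, command in enumerate(commands):
        if not execute(command):
            return commands[index:]  # invalidated tail of the pipeline
    return []                        # entire set completed successfully
```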

Fig. 6 – CYCLOPS Commanding Interface, controlling the AWD remote robotic platform testbed in near real-time. The interface displays the current status of CYCLOPS, including battery charge levels, heading, velocity, obstacle proximity, and the high-resolution gimbaled camera view.

3.5. User interactivity

For the purpose of commanding CYCLOPS interactively (and later autonomously), the front-end software has an integrated video panel (Fig. 6) for displaying the transmitted video frames from the mobile platform's on-board camera (Fig. 4); it is also outfitted with a USB-based joystick device. The user, controlling the joystick, is in the loop for the purpose of developing automated software and algorithms to control CYCLOPS. Once developed, such automated software can be "plugged in" in lieu of the user for automatic control, with manual user override always available. The user's movements of the joystick are translated into camera orientation and wheel rotation commands, which are sent to the mobile platform. As the mobile platform begins to move, it also sends back video, sensor, and housekeeping data, which are displayed on the front-end. With this feedback information, a user (or automated control software for self-commanding) is able to control CYCLOPS interactively (or automatically) from anywhere in the world, in near real-time.
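As one way to picture this translation step (the actual front-end mapping is not described in detail), a differential-drive style conversion from two joystick axes to per-side wheel speeds could look like the following; the axis conventions and the 0-255 speed range are assumptions for illustration.

```python
def joystick_to_wheel_speeds(x: float, y: float, max_speed: int = 255):
    """Map joystick axes in [-1, 1] (x = turn, y = forward/back) to
    left/right wheel-pair speed commands for a skid-steered AWD platform."""
    left = max(-1.0, min(1.0, y + x))
    right = max(-1.0, min(1.0, y - x))
    return int(left * max_speed), int(right * max_speed)


# Example: stick pushed forward and slightly to the right.
left_speed, right_speed = joystick_to_wheel_speeds(x=0.3, y=0.8)
```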

CYCLOPS uses only the pixelated camera images to move about an environment (e.g., a room or corridor with obstacles), thus more realistically emulating the visual perception of a blind subject. It processes and enhances the pixelated imagery to result in new motion and navigation commands, such as navigating a corridor while avoiding obstacles, and guideline following (Fig. 7).


Fig. 7 – Navigation camera view of CYCLOPS at different visual resolutions (i.e., degrees of pixelation), mimicking the view afforded by artificial vision prostheses. Each column from top to bottom: 64 × 64, 32 × 32, 16 × 16, 8 × 8. Left column shows navigating a corridor while avoiding an obstacle (i.e., a chair). Right column shows following a high-contrast guideline on the floor of a corridor.

4. Conclusion

CYCLOPS enables subject-independent testing, evaluation, and validation of image processing and autonomous navigation systems (Figs. 2 and 3) with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers.

It is difficult to predict exactly what a blind subject with a camera-driven visual prosthesis may be able to perceive. Therefore, it is advantageous to offer a wide variety of image processing modules and the capability and flexibility for repeated application of these modules in a user-defined order. AVS [11,12], in particular, comprises numerous efficient image processing modules, such as pixelation, contrast/brightness enhancement, grayscale equalization for luminance control under severe contrast/brightness conditions, grayscale levels for reduction of the data volume transmitted to the visual prosthesis, blur algorithms, and edge detection (e.g., [17,18]). With the development of CYCLOPS it is now possible to determine empirically, in the absence of a blind subject, which particular sequences of image processing modules may work best for a blind subject in real-world scenarios (e.g., Fig. 7).
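AVS itself is not reproduced here, but the idea of applying image processing modules repeatedly in a user-defined order can be sketched with NumPy; the specific filters below (block-average pixelation and a simple gradient edge map) are generic stand-ins for the AVS modules, not their actual implementations.

```python
import numpy as np


def pixelate(img: np.ndarray, grid: int = 16) -> np.ndarray:
    """Block-average a grayscale image to a grid x grid 'electrode' view
    and blow it back up for display, mimicking the prosthesis resolution."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    img = img[:bh * grid, :bw * grid]                  # crop to a grid multiple
    small = img.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    return np.kron(small, np.ones((bh, bw)))


def edge_map(img: np.ndarray) -> np.ndarray:
    """Very simple gradient-magnitude edge detector (stand-in for [17,18])."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)


def apply_chain(img: np.ndarray, modules) -> np.ndarray:
    """Apply image processing modules in a user-defined order."""
    for module in modules:
        img = module(img)
    return img


# Example chain: edge detection followed by 16 x 16 pixelation.
# processed = apply_chain(frame, [edge_map, lambda im: pixelate(im, 16)])
```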

One of the goals is to get CYCLOPS to "behave" similarly to a blind subject (especially motion-wise) by developing, implementing, testing, and fine-tuning/customizing onboard algorithms for image processing and analysis as well as auto-navigation. Once a certain degree of similarity in a behavioral pattern is achieved, such as navigating safely through a corridor with obstacles or guideline following (e.g., Fig. 7), the underlying image processing and analysis algorithms, as well as the sequences of image processing modules that enabled this successful behavior, may be used to establish a practical initial configuration for blind subjects when implemented in their respective visual prosthesis systems. Furthermore, testing with CYCLOPS may contribute to improving the design of environments that provide suitable access for the blind (e.g., rooms, corridors, entrances) by choosing those in which CYCLOPS performed best.

Its Internet connectivity renders CYCLOPS a worldwide accessible testbed for researchers in the field of artificial vision systems and machine vision. We have provided a commanding interface that allows the research community to easily interface their respective image processing and autonomous navigation software packages to CYCLOPS by merely using high-level commands, such as "turn right by 25 degrees" or "move forward one meter". Additionally, we have provided numerous interfaces for onboard cameras (Ethernet, IEEE 1394, USB). The direction and orientation of the gimbaled camera can be user-controlled, allowing for the emulation of head/eye motion of a blind subject wearing an artificial vision prosthesis. The onboard real-time voice synthesizer can be used as a means to communicate audio cues (e.g., "Door 2 meters ahead.") resulting from autonomous navigation and obstacle recognition/avoidance systems (e.g., [19]).

Researchers can interface their software packages either by remotely issuing high-level commands over the Internet, or by integrating and running their software packages locally on the onboard Unix workstation, thereby bypassing the Internet for command transmittal. Regardless, researchers will be able to remotely monitor the actions and camera views of CYCLOPS via its commanding interface (Fig. 6).
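A researcher-facing wrapper around such high-level commands might look like the sketch below; the command strings mirror the examples quoted above, but the class name and the send transport hook are assumptions made for illustration, not a published CYCLOPS API.

```python
class CyclopsCommander:
    """Thin client that turns high-level requests into command strings and
    hands them to a transport (e.g., the Com Server connection)."""

    def __init__(self, send):
        self.send = send  # callable taking one command string

    def turn_right(self, degrees: float) -> None:
        self.send(f"turn right by {degrees} degrees")

    def move_forward(self, meters: float) -> None:
        self.send(f"move forward {meters} meter" + ("" if meters == 1 else "s"))


# Example: log commands locally instead of sending them over the Internet.
commander = CyclopsCommander(send=print)
commander.turn_right(25)
commander.move_forward(1)
```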
CYCLOPS is directly and immediately applicable to any (artificial) vision-providing system that is based on an imaging modality (e.g., video cameras, infrared sensors, sound, ultrasound, microwave, radar, etc.) as the first step in the generation of visual perception.

Conflict of interest statement

Fink and Tarbell may have proprietary interest in the technology presented here, as a provisional patent has been filed on behalf of the California Institute of Technology.

Acknowledgment

The work described in this publication was carried out at the California Institute of Technology under support of the National Science Foundation Grant EEC-0310723.

References

[1] W. Liu, M.S. Humayun, Retinal prosthesis, in: IEEE International Solid-State Circuits Conference Digest of Technical Papers, 2004, pp. 218–219.

[2] E. Zrenner, K.-D. Miliczek, V.P. Gabel, H.G. Graf, E. Guenther, H. Haemmerle, B. Hoefflinger, K. Kohler, W. Nisch, M. Schubert, A. Stett, S. Weiss, The development of subretinal microphotodiodes for replacement of degenerated photoreceptors, Ophthalmic Res. 29 (1997) 269–328.

[3] J.F. Rizzo, J.L. Wyatt, Prospects for a visual prosthesis, Neuroscientist 3 (1997) 251–262.

[4] E. Zrenner, Will retinal implants restore vision? Science 295 (2002) 1022–1025.

[5] M.S. Humayun, J. Weiland, G. Fujii, R.J. Greenberg, R. Williamson, J. Little, B. Mech, V. Cimmarusti, G. van Boemel, G. Dagnelie, E. de Juan Jr., Visual perception in a blind subject with a chronic microelectronic retinal prosthesis, Vision Res. 43 (2003) 2573–2581.

[6] S.C. DeMarco, The architecture, design, and electromagnetic and thermal modeling of a retinal prosthesis to benefit the visually impaired, PhD Thesis, North Carolina State University, 2001.

[7] P.R. Singh, W. Liu, M. Sivaprakasam, M.S. Humayun, J.D. Weiland, A matched biphasic microstimulator for an implantable retinal prosthetic device, in: Proceedings of the IEEE International Symposium on Circuits and Systems, vol. 4, 2004.

[8] J.D. Weiland, W. Fink, M. Humayun, W. Liu, D.C. Rodger, Y.C. Tai, M. Tarbell, Progress towards a high-resolution retinal prosthesis, Conf. Proc. IEEE Eng. Med. Biol. Soc. 7 (2005) 7373–7375.

[9] J.D. Weiland, W. Fink, M.S. Humayun, W. Liu, W. Li, M. Sivaprakasam, Y.C. Tai, M.A. Tarbell, System design of a high resolution retinal prosthesis, Conf. Proc. IEEE IEDM (2008), doi:10.1109/IEDM.2008.4796682.

[10] C.-Q. Zhou, X.-Y. Chai, K.-J. Wu, C. Tao, Q. Ren, In vivo evaluation of implantable micro-camera for visual prosthesis, Invest. Ophthalmol. Vis. Sci. 48 (2007) 668 (E-Abstract).

[11] W. Fink, M. Tarbell, Artificial vision simulator (AVS) for enhancing and optimizing visual perception of retinal implant carriers, Invest. Ophthalmol. Vis. Sci. 46 (2005) 1145 (E-Abstract).

[12] W. Liu, W. Fink, M. Tarbell, M. Sivaprakasam, Image processing and interface for retinal visual prostheses, in: ISCAS 2005 Conference Proceedings, vol. 3, 2005, pp. 2927–2930.

[13] M.A. Tarbell, W. Fink, CYCLOPS: A mobile robotic platform for testing and validating image processing algorithms in support of visual prostheses, Invest. Ophthalmol. Vis. Sci. 50 (2009) 4218 (E-Abstract).

[14] Robosoft, http://www.robosoft.fr/eng/.

[15] R. Chellappa, Cloud computing: emerging paradigm for computing, INFORMS, 1997.

[16] B. Hayes, Cloud computing, Commun. ACM 51 (2008).

[17] J.C. Russ, The Image Processing Handbook, CRC Press, 2002.

[18] H.R. Myler, A.R. Weeks, The Pocket Handbook of Image Processing Algorithms in C, Prentice Hall PTR, 1993.

[19] W. Fink, M. Tarbell, J. Weiland, M. Humayun, DORA: digital object recognition audio-assistant for the visually impaired, Invest. Ophthalmol. Vis. Sci. 45 (2004) 4201 (E-Abstract).