

2013 13th International Conference on Control, Automation and Systems (ICCAS 2013) Oct. 20-23, 2013 in Kimdaejung Convention Center, Gwangju, Korea

    Vision Based Guide-dog Robot System for

    Visually Impaired in Urban System

Xiangxin Kou¹, Yuanlang Wei² and Mincheal Lee³

¹Department of Mechanical Engineering, Pusan National University, Busan, South Korea (Tel : +82-51-510-3081; E-mail:[email protected])

²Department of Mechanical Engineering, Pusan National University, Busan, South Korea (Tel : +82-51-510-3081; E-mail:[email protected])

³Department of Mechanical Engineering, Pusan National University, Busan, South Korea (Tel : +82-51-510-3081; E-mail:[email protected])

Abstract: This paper presents a vision-based guide-dog robot system for the visually impaired that uses the camera of a smart phone. In this system, the camera takes the place of a blind or visually impaired person's eyes in analyzing the traffic situation. Traffic-light recognition is proposed through a combined approach of the adaptive boosting (Adaboost) algorithm and a template matching algorithm: first, the Adaboost method is applied to train the pedestrian light signal detector, and second, template matching is applied to distinguish green traffic lights from red ones. A vanishing point is applied for zebra-crossing detection, and a histogram is applied to prevent misrecognition and to separate a real zebra crossing from a staircase.

    Keywords: visually impaired, mobile robot, adaptive boosting (Adaboost), template matching, vanishing point

    1. INTRODUCTION

As of 2012, an estimated 314 million visually impaired people exist worldwide, according to statistics from the WHO (World Health Organization) [1]. Visually impaired people must carry out social activities and go outside with aid tools. If they use a white cane or a guide dog [2], these tools alone cannot keep the visually impaired safe and capable enough to reach a destination while walking in an urban environment. Moreover, before using such tools, the visually impaired must spend a long time in training: learning to use the white cane, or teaching the guide dog how to lead its owner to places such as a coffee shop, a friend's house, or a restaurant. A guide dog serves as an assistance dog for the visually impaired, so it must be trained for a long time to learn its master's regular hours and lifestyle [3]. Guide dogs are regarded as the best partners in the lives of blind and visually impaired people; they are trained to lead them around obstacles and to alert their masters to emergencies. However, training a guide dog to help blind and visually impaired people takes 18 months and costs $25,000 to $30,000 in total, and a guide dog can only serve as a guide for 8 to 10 years, so it has the problem of a short service life. According to the statistics, about 90% of the world's visually impaired people live in developing countries [1]. China, for example, has 18% of the world's visually impaired people, but there are not enough guide dogs in China for each visually impaired person to have one. Considering all this, it is necessary to develop a low-cost guide-dog robot that can take the place of a real guide dog and that provides the basic functions to help visually impaired people reach their destinations.

This research aims to develop a guide-dog system for the visually impaired. The guide-dog robot system has multiple functions to assist the visually impaired in reaching a target place safely in an urban environment.

978-89-93215-05-2 95560/13/$15 ⓒICROS

With this system, the visually impaired can avoid moving crowds and static obstacles via ultrasonic sensors. Traffic lights and zebra crossings are also handled by the system, and testing these functions examines the flexibility of the guide-dog robot. Sighted people can use a smart phone or GPS to determine a path when walking in a complex environment, but the visually impaired need other ways of sensing to reach a destination. The guide-dog robot system can assist its master in accomplishing missions safely and accurately in an outdoor urban environment.

In previous research, several methods [4]-[7] have been proposed for developing assistive devices for the visually impaired. For example, from 1977 to 1985 a Japanese research group developed an integrated guide-dog robot named "MELDOG" [4]. This robot could detect and avoid obstacles using multiple sensors, communicate with its master through speech output, and navigate a route using a landmark map. In a real urban environment, however, the master must respond within a short period, while MELDOG needed some time to react. In 2006, the Smith-Kettlewell Eye Research Institute introduced a technique to help the visually impaired find a destination by cell phone [5]. This method can only be used in indoor environments, because colored markers must first be posted so that the cell phone carried by the visually impaired user can detect them. In 2007, a French group investigated outdoor and indoor wayfinding assistance for the visually impaired [6]. Using a body-mounted vision system, the system computes instantaneous, accurate localization and heading estimates of the person from images captured as the trip progresses along a memorized path. But this equipment may be uncomfortable for users, and it also increases the user's load during

walking time. In 2000, a device called the GuideCane was developed to help visually impaired or blind users navigate safely and quickly among obstacles and other hazards [7]. Its functions were nearly complete, but it can only be used in environments without zebra crossings and traffic lights.

Fig. 1 compares the human eye with the smart-phone camera used in this work.

Human eye: (1) a static target can be seen 50 meters away; (2) a dynamic target can be seen 8 to 25 meters away; (3) the visual angle is 50 to 70 degrees upward, 30 to 60 degrees downward, and 100 to 125 degrees for the left and right eyes; (4) the brightness of a color is used to distinguish a traffic light.

iPhone camera: (1) 5-megapixel iSight camera; (2) advanced optics with IR filter; (3) autofocus, white balance, and face detection; (4) usable for traffic light and zebra crossing detection.

Fig. 1 Camera equipment

In this paper, considering the traffic environment, a vision-system-based mobile robot is designed that includes a vision system for detecting the traffic situation. The vision system consists of a smart phone and its connected equipment, as shown in Fig. 1. The smart phone gives the guide-dog robot the ability to detect traffic information. Therefore, the vision-based guide-dog robot system can help the visually impaired recognize traffic lights and zebra crossings accurately and cross the road safely.

This paper describes the system design, the interaction method, and the testing of the vision-based guide-dog robot system. Section 2 covers the system design and the vision algorithms. The implementation of the proposed vision system is given in Section 3, and Section 4 presents the results of the experiment in an urban environment. Finally, Section 5 concludes the paper with a summary of contributions and future work.

    2. PROPOSED METHOD

2.1 Pedestrian light signal recognition

In a real urban scene including traffic lights, recognition proceeds in three steps. First, the Adaboost learning algorithm is applied to find the traffic-light area; next, the video-sequence image is binarized to separate the green and red colors; finally, the template matching method is applied to recognize the color of the traffic lights. The scheme of the proposed method is shown in Fig. 2.

Adaboost (adaptive boosting) is a machine learning algorithm formulated by Yoav Freund and Robert Schapire [8]. Feature values are calculated with the HOG algorithm [9]. First, the positive and negative images are converted to gray scale; each image is treated as a 3-dimensional image, and a gamma-correction method transfers the color image to a normalized image. This reduces noise and lessens the effect of light and shadow. Second, the gradient of each pixel is calculated as in Eq. (1), which captures the profile information.

Gx(x, y) = I(x, y) − I(x + 1, y)
Gy(x, y) = I(x, y) − I(x, y + 1)

θ(x, y) = arctan( Gy(x, y) / Gx(x, y) ),        if arctan( Gy(x, y) / Gx(x, y) ) ≥ 0
θ(x, y) = arctan( Gy(x, y) / Gx(x, y) ) + π,    if arctan( Gy(x, y) / Gx(x, y) ) < 0     (1)
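The gradient and orientation computation of Eq. (1) can be sketched in NumPy as follows. This is a hedged illustration, not the paper's implementation: the mapping of array axes to (x, y), the zero-valued gradients at border pixels, and the function name are our assumptions.

```python
import numpy as np

def gradient_orientation(img):
    """Per-pixel forward-difference gradients and edge orientation, Eq. (1).

    Axis 0 of the array plays the role of x and axis 1 the role of y here;
    border pixels without a forward neighbour are given zero gradient.
    """
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # Gx(x, y) = I(x, y) - I(x+1, y);  Gy(x, y) = I(x, y) - I(x, y+1)
    gx[:-1, :] = img[:-1, :] - img[1:, :]
    gy[:, :-1] = img[:, :-1] - img[:, 1:]
    with np.errstate(divide="ignore", invalid="ignore"):
        theta = np.arctan(gy / gx)
    theta = np.nan_to_num(theta)   # 0/0 pixels get orientation 0
    theta[theta < 0] += np.pi      # Eq. (1): fold negatives into [0, pi)
    return theta
```

The fold by +π maps every orientation into [0, π), which is what a HOG-style orientation histogram expects.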


Fig. 3 The scheme of the Adaboost algorithm (repeated training and re-weighting stages)

The Adaboost algorithm is often combined with other algorithms to improve performance, so a template matching algorithm is used here to recognize the state of the pedestrian light signal. Before template matching, the pedestrian-light-signal image is converted to binary; before that, the red and green colors are separated from the other colors in RGB color space, so the red and green color ranges can be selected from the RGB color space shown in Fig. 4 [10]. Because sunlight is an important factor in a real-time computer vision experiment, and it is hard to predict how strongly the lighting will affect the colors, image binarization is used to address this problem.

    Fig. 4 RGB Color Space

After binarization, the template matching algorithm is used to recognize the state of the pedestrian light signal, and the pixel error value between the template image and the pedestrian-light-signal image is computed through function (3).

The template image is defined as T, and (x, y) denotes a pixel position in the template image. Each frame image is defined as S, and likewise (m, n) denotes a pixel position in the search image. The pedestrian-light-signal image and the template image are then compared by pixel error: when a region is matched by template matching, the error value is as small as possible.


Function (3) is transferred to a normalized form, giving function (4):

R = [ Σ_{x=1..X} Σ_{y=1..Y} S(m + x, n + y) · T(x, y) ] /
    [ √( Σ_{x=1..X} Σ_{y=1..Y} |S(m + x, n + y)|² ) · √( Σ_{x=1..X} Σ_{y=1..Y} |T(x, y)|² ) ]    (4)

The green-light and red-light binary images are processed at the same time, and two correlation values are computed between each light's template image and the video-sequence image. Of these two result values, the one closer to 1 identifies the recognized light.
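A minimal sketch of this color-classification step is given below. It is our own illustration, not the paper's code: the RGB threshold ranges, the function names, and the pure-NumPy correlation are assumptions (the masks are compared at a fixed candidate window rather than slid over the frame), and OpenCV's BGR channel order is assumed for the input.

```python
import numpy as np

def binarize_color(bgr, lo, hi):
    """Binary mask of pixels whose (B, G, R) values all fall in [lo, hi]."""
    mask = np.all((bgr >= np.array(lo)) & (bgr <= np.array(hi)), axis=-1)
    return mask.astype(np.float64)

def ncc(search, template):
    """Normalized correlation of Eq. (4); 1.0 means a perfect match."""
    s = search.ravel().astype(np.float64)
    t = template.ravel().astype(np.float64)
    denom = np.sqrt((s * s).sum()) * np.sqrt((t * t).sum())
    if denom == 0:
        return 0.0
    return float((s * t).sum() / denom)

def classify(roi_bgr, tmpl_green, tmpl_red):
    """Binarize the detected signal region twice, match both templates,
    and pick the state whose score is closer to 1 (ties favour green)."""
    # Illustrative ranges; real thresholds must be tuned per camera
    green_mask = binarize_color(roi_bgr, (0, 128, 0), (100, 255, 100))
    red_mask = binarize_color(roi_bgr, (0, 0, 128), (100, 100, 255))
    g = ncc(green_mask, tmpl_green)
    r = ncc(red_mask, tmpl_red)
    return "green" if g >= r else "red"
```

For a full frame, OpenCV's `cv2.matchTemplate` with `cv2.TM_CCORR_NORMED` computes the same normalized correlation over every window position.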

2.2 Zebra crossing recognition

Another important traffic signal that pedestrians must pay attention to in a real urban environment is the zebra crossing, shown in Fig. 5. To improve the zebra-crossing recognition performance, the Hough Transform is used twice to find vanishing points after binarizing the zebra-crossing image, and a histogram is used as an auxiliary method to confirm that the zebra crossing is real.

First, all horizontal lines in the video-sequence images from the iPhone camera are found through the line detection of the Hough Transform. Second, another Hough Transform pass detects the vertical lines in the video-sequence image [11], [12]. The intersection points of these lines, taken as characteristic points, are the vanishing points.
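The line-intersection step can be sketched as follows. This is a hedged illustration: `intersect` and `vanishing_point` are hypothetical helpers operating on (ρ, θ) line parameters such as those returned by OpenCV's `cv2.HoughLines`, and averaging all pairwise intersections is a simplification of the paper's characteristic-point selection.

```python
import numpy as np

def intersect(line1, line2):
    """Intersect two lines given in Hough (rho, theta) form:
    x*cos(theta) + y*sin(theta) = rho. Returns (x, y) or None if parallel."""
    r1, t1 = line1
    r2, t2 = line2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # (near-)parallel lines have no intersection
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return float(x), float(y)

def vanishing_point(lines):
    """Average the pairwise intersections of the detected stripe edges.
    `lines` is a list of (rho, theta) pairs, e.g. from cv2.HoughLines."""
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersect(lines[i], lines[j])
            if p is not None:
                pts.append(p)
    if not pts:
        return None
    return tuple(np.mean(pts, axis=0))
```

With clean stripe edges, the pairwise intersections cluster tightly, so their mean is a serviceable vanishing-point estimate.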

Fig. 5 Zebra crossing in a real urban system

Finally, the histogram is applied as an auxiliary check to confirm the zebra crossing. We assume that a zebra crossing satisfies the vanishing-point condition, and that its histogram bar chart must also look like that of a zebra crossing. If both conditions are satisfied, the zebra-crossing recognition is confirmed.

    3. IMPLEMENTATION OF PROPOSED SYSTEM

The schematic diagram of the proposed guide-dog robot system can be seen in Fig. 6. It is built on a mobile robot; the basic robot hardware for movement is the HBE-RoboCar, designed by HANBACK ELECTRONICS Co., Ltd. It communicates with the laptop over Bluetooth, so the robot's motion and trajectory can also be recorded synchronously. The "smart rope" consists of a Hall-sensor joystick, an AVR board, a Bluetooth serial adaptor (Parani SD100, Sena Technologies, Inc.), and two ultrasonic sensors. The Hall sensor's data and the ultrasonic sensors' data are transferred to the laptop by Bluetooth. The ultrasonic sensors chosen for this research are SRF-05 sensors from Devantech (England), capable of detecting obstacles between 3 cm and 3 m away. An AVR board containing an Atmega128 embedded AVR MCU processes the three sensors' signals before Bluetooth transmission. The joystick and ultrasonic sensors will be presented in another paper. In this paper, the robot's vision part consists of an iPhone 4 connected to an external battery; it is locked into a holder to prevent vibration during movement. The vision system communicates with the laptop through Wi-Fi using the Camera Wi-Fi LiveStream v1.2 software: the smart phone streams video to the laptop in real time for signal processing. The input data obtained over Wi-Fi provides the traffic information used to control the encoders of the guide-dog robot's wheels.

    Fig. 6 Constitution of guide-dog robot's hardware

    4. EXPERIMENT RESULTS

After setting up the experimental hardware, the guide-dog robot was taken outside into a real urban environment for an experiment. The visually impaired user followed it along a path of about 50 m. The guide-dog robot had to avoid two obstacles using the ultrasonic sensors, and the master followed the guide-dog robot across a zebra crossing detected by the iPhone 4 camera. Before walking across the zebra crossing, the guide-dog robot also had to detect the traffic lights to ensure safety and confirm the performance. The experiment path can be seen in Fig. 7.

    Fig. 7 A path of experiment

4.1 Pedestrian light signal recognition

This experiment was run on a laptop with an i7 2.9 GHz CPU and an NVIDIA Quadro NVS 5400M GPU. In a real traffic environment, the pedestrian light signal appears in 7 kinds of situations, as shown in Fig. 8, where the pedestrian light signal is represented by a circle. The pedestrian light signal flashes with a period of 1 second; from the flashing time and the recognition time, the camera's sampling rate was set to 25 frames/sec.


Fig. 8 Some situations of the pedestrian light signal

Fig. 9 shows some of the negative and positive sample images. To improve accuracy, the iPhone camera was used to take 1300 pictures in total at different crossroads, by day and by night, at close range and at long range. These pedestrian-signal pictures were scaled to the same size, 16 × 16, for training.
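The resizing of the training samples to 16 × 16 might be sketched as below. The paper does not state the resampling method, so the nearest-neighbour selection and the function name are our assumptions.

```python
import numpy as np

def to_training_size(img, size=16):
    """Nearest-neighbour resize of a gray or color image to size x size,
    matching the fixed 16 x 16 resolution used for detector training."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each target row
    cols = np.arange(size) * w // size   # source column for each target column
    return img[rows][:, cols]
```

In practice a library call such as `cv2.resize(img, (16, 16))` would be used; the sketch only makes the sampling explicit.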

Fig. 9 One part of the positive and negative example images

Through the Adaboost method, the error rate is obtained as shown in Fig. 10. The false acceptance rate (FAR) is the probability that the system accepts an incorrect input as a positive image, and the false rejection rate (FRR) is the probability that the system incorrectly rejects an input as a negative image.
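These two error rates can be computed from labeled test outcomes as follows. This is an illustrative helper written for this description, not part of the paper's toolchain.

```python
def far_frr(labels, predictions):
    """FAR: fraction of true negatives the detector accepted.
    FRR: fraction of true positives the detector rejected.
    labels/predictions use 1 = positive (signal present), 0 = negative."""
    neg = [p for l, p in zip(labels, predictions) if l == 0]
    pos = [p for l, p in zip(labels, predictions) if l == 1]
    far = sum(neg) / len(neg) if neg else 0.0
    frr = sum(1 - p for p in pos) / len(pos) if pos else 0.0
    return far, frr
```

Sweeping the detector's decision threshold and plotting FRR against FAR produces a curve like the one in Fig. 10.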

Fig. 10 The error rate of pedestrian light signal detection after training (false rejection rate versus false acceptance rate)

From Fig. 11, the matching region is identified by the result value closest to 1; values far from 1 are not matched.


Fig. 11 Template matching algorithm result (green light)

Therefore, by combining the Adaboost method with the template matching method, the performance shown in Table 1 is achieved. Table 1 also compares the approach applied in this paper against template matching alone.

Table 1. Comparison of template matching alone with the proposed method

                          Red Light              Green Light
                          Time      Accuracy     Time      Accuracy
Template Matching         0.23 s    89.7%        0.22 s    90.2%
Method in this paper      0.28 s    92.4%        0.26 s    94.3%

The experiments were carried out 6 times at a crossroad, and the pedestrian light signal and zebra-crossing detection were examined at each of the four corners.

    4.2 Zebra crossing recognition

Fig. 12 Zebra crossing detection

From Fig. 12, the vanishing points can be found by using the Hough Transform twice. However, other objects that are similar to a zebra crossing could be taken as erroneous recognitions.

  • 3000 .

    2500 -

    2000 -

    1500 -

    1000 -

    500 -

    IIIIIII1 O 50 100 150 200 250

    Fig. 13 Histogram of zebra crossing

From the histogram in Fig. 13, the zebra crossing can be confirmed, so the visually impaired can cross safely. The figure also shows that after histogram analysis the image contains only about 2 kinds of colors; therefore the zebra crossing is confirmed, and it is safe for the visually impaired to walk alone.
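The "about 2 kinds of colors" criterion can be read as a histogram-concentration test, sketched below. The bin count and the 90% coverage threshold are our assumptions, not values from the paper.

```python
import numpy as np

def looks_bimodal(gray, coverage=0.9, bins=16):
    """True if two histogram bins together hold most of the pixels,
    matching the two-color (dark road vs. white paint) appearance
    of a zebra crossing."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    top_two = np.sort(hist)[-2:].sum()
    return top_two / gray.size >= coverage
```

A staircase seen at an angle tends to spread its gray levels across many bins (shading varies step by step), which is how this check helps separate it from a real zebra crossing.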

    5. CONCLUSION

This paper presented the development of a computer-vision-based guide-dog robot system that detects the traffic situation through the iPhone 4 camera, which the visually impaired can use to walk by themselves in a real urban environment. Through the smart phone's camera, traffic lights are detected by combining the Adaboost algorithm with a template matching algorithm, while a vanishing point is applied to detect the zebra crossing and a histogram helps confirm it. Through the experiment, these vision algorithms were tested and verified in a real-time outdoor setting.

The guide-dog robot is an integrated mobile robot system built from low-cost equipment, with several functions to help the visually impaired walk alone. These functions detect the traffic situation and keep the visually impaired safe when they walk alone in an urban environment.

In the future, more functions, such as stairs detection, will be researched and added to this guide-dog robot. The robot's structure will also be redesigned to make it suitable for more environments.

    Acknowledgment

This research was supported by the MOTIE (The Ministry of Trade, Industry and Energy), Korea, under the Human Resources Development Program for the Special Environment Navigation/Localization National Robotics Research Center support program supervised by the NIPA (National IT Industry Promotion Agency) (H1502-13-1001).


    REFERENCES

[1] World Health Organization, Action Plan for the Prevention of Avoidable Blindness and Visual Impairment, 2009-2013. http://www.who.int/blindness/ACTION_PLAN_WHA62-1-English.pdf

[2] Y. Wang and K. J. Kuchenbecker, "HALO: Haptic Alerts for Low-hanging Obstacles in white cane navigation," IEEE Haptics Symposium, pp. 527-532, March 2012.

[3] http://en.wikipedia.org/wiki/Guide_dog

[4] J. Coughlan, R. Manduchi, and H. Shen, "Cell Phone-based Wayfinding for the Visually Impaired," 1st International Workshop on Mobile Vision, in conjunction with ECCV 2006, Graz, Austria, May 2006.

[5] J. Coughlan, R. Manduchi, and H. Shen, "Cell phone-based wayfinding for the visually impaired," 1st Int. Workshop on Mobile Vision, Graz, Austria, May 2006.

[6] S. Treuillet, E. Royer, T. Chateau, M. Dhome, and J.-M. Lavest, "Body mounted vision system for visually impaired outdoor and indoor wayfinding assistance," Conference and Workshop on Assistive Technologies for People with Vision and Hearing Impairments, 2007.

[7] I. Ulrich and J. Borenstein, "The GuideCane - applying mobile robot technologies to assist the visually impaired," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 31, no. 2, pp. 131-136, 2001.

[8] http://en.wikipedia.org/wiki/AdaBoost

[9] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1, IEEE, 2005.

[10] http://en.wikipedia.org/wiki/HSL_and_HSV

[11] S. Se, "Zebra-crossing detection for the partially sighted," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), vol. 2, IEEE, 2000.

[12] M. Hödlmoser, B. Micusik, and M. Kampel, "Camera auto-calibration using pedestrians and zebra-crossings," IEEE International Conference on Computer Vision Workshops (ICCV Workshops), IEEE, 2011.