
Driver's Eye State Identification Based on Robust Iris Pair Localization

*Tauseef Ali, **Khalil Ullah

*Myongji Univ., Dept. of Electronics and Communication Engg. (TEL: 010-5814-1333; E-mail: [email protected])
**Myongji Univ., Dept. of Electronics and Communication Engg. (TEL: 010-8691-8402; E-mail: [email protected])

Abstract: In this paper, we propose a novel and robust approach to determine eye state. The method is based on robust iris pair localization. After the iris pair is detected in an image, it is analyzed by comparing its openness with normal images of the person. Our approach has five steps: 1) face detection, 2) eye candidate detection, 3) tuning candidate points, 4) iris pair selection, and 5) eye analysis. Experimental results for iris pair localization and eye state identification are shown separately. For testing, three public image databases, Yale, BioID, and Bern, are used. Extensive experiments have shown the effectiveness and robustness of the proposed method.

Keywords: Iris pairs, Eye candidate, Eye analysis, Eye state

1. Introduction

Monitoring a driver's visual attention is very important for detecting fatigue, lack of sleep, and drowsiness. By automatically detecting the eye state and drowsiness level of the driver, an alarm can be activated to inform the driver or another authority, which can prevent a large number of road accidents. Robust eye detection is a crucial step for this kind of application. After robust eye detection, information about gaze, eye blinking, and drowsiness can be determined. Some work has been done on this subject, but the problem is still far from being fully solved. Some algorithms for eye detection have obtained good results, such as [1], but they cannot point out the exact center of the iris, which can further be used for drowsiness detection. Generally, eye detection is achieved using active or passive techniques. Active techniques are based on the spectral characteristics of the eye under IR illumination [2]. These techniques are very simple and give good results, but the success of such systems requires stable lighting conditions and the person being close to the camera. Passive techniques locate the eyes based on their shape and appearance, which differ from the rest of the face. In these techniques, generally, the face is first detected to extract the eye regions, and then the eyes are localized using eye windows. Much work has been done on face detection, and robust algorithms are available [3]. However, robust and precise eye detection is still an open problem. After eye detection, several measures can be used to determine the eye state and detect drowsiness. Many efforts have been made to detect drowsiness among drivers [4, 5]. Eye blinking is a good measure for detecting the level of drowsiness. PERCLOS (the percentage of time that the eyes are closed in a given period) is one of the best methods to measure eye blinking, as high PERCLOS scores are strongly related to drowsiness [6]. However, in this paper, we use a still image: we first detect the centers of the eyes and then determine the eye state by comparing the eyes' openness in the test image with that of an original image taken when the subject is normal or alert. This kind of system can be used with a camera that periodically inputs a still image to the system; the system determines the eye state in real time, and if the subject's eyes are closed for more than a few input samples, an alert can be activated to indicate that the driver is drowsy.
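To make the sampling scheme concrete, here is a minimal sketch of such a monitoring loop together with the PERCLOS ratio mentioned above; classify_frame, the sample period, and the consecutive-sample limit are hypothetical placeholders, not part of the paper's method.

```python
def perclos(closed_flags):
    """PERCLOS: fraction of sampled frames in a period with the eyes closed."""
    return sum(closed_flags) / max(len(closed_flags), 1)

def monitor(frames, classify_frame, consecutive_limit=3):
    """frames: still images input periodically; classify_frame -> 'open'/'closed'."""
    closed_run = 0
    for i, frame in enumerate(frames):
        closed_run = closed_run + 1 if classify_frame(frame) == "closed" else 0
        if closed_run > consecutive_limit:  # closed for more than a few samples
            print(f"sample {i}: ALERT - driver appears drowsy")
```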

2. Outline of Proposed Method

The algorithm detects the face in an input image using the AdaBoost algorithm. By changing the training data and increasing the false positive rate of the AdaBoost algorithm, we detect candidate points for the irises. The candidate points produced by AdaBoost are tuned such that two of the candidate points lie exactly at the centers of the irises. A mean crossing function and a convolution template are used to select the irises of both eyes from the tuned candidate points. After locating the iris pair, the eyes are analyzed to determine their state. Fig. 1 shows the steps in localizing the iris centers of the eyes.

Fig. 1: Steps of the proposed algorithm using a test image.


3. Face Detection

We first detect the face in the input image. The problem is then simplified, since the search is restricted to the face region rather than the whole background; this saves search time and improves accuracy. For face detection, Viola's method [3] is used. A robust face classifier is obtained by supervised AdaBoost learning. Given a sample set of training data {x_i, y_i}, the AdaBoost algorithm selects a set of weak classifiers {h_j(x)} from a set of Haar-like rectangle features and combines them into a strong classifier. The strong classifier g(x) is defined as follows:

$$g(x) = \begin{cases} 1, & \text{if } \sum_{k=1}^{k_{\max}} \alpha_k h_k(x) \ge \theta \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$

where $\theta$ is the threshold that is adjusted to meet the detection rate goal. The Haar-like rectangle features are easily computed using the integral image representation. The cascade method quickly filters out non-face image areas. More details can be found in [3].
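As a concrete stand-in for this step, the sketch below uses OpenCV's stock Haar cascade, which implements the same boosted-cascade detector; the cascade file and the detection parameters are assumptions, not the authors' trained classifier.

```python
import cv2

def detect_face(gray):
    """Return the largest detected face region of a grayscale image, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return gray[y:y + h, x:x + w]
```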

4. Eye Candidate Detection

By changing the training data and increasing the false-positive rate of the algorithm of section 3, we build an eye candidate detector. The training data of [7] is used to detect several eye candidate points in the face region. A total of 7000 eye samples are used, each with the eye center at the center of the image and resized to 16 × 8 pixels. Because the face region is already detected in this step, the negative samples are taken only from face images. We set a low threshold and accept more false positives. On average, we get 15 eye candidates from the detector.

5. Tuning Candidate Points

We shift the candidate points within a small neighborhood so that two of the candidate points lie exactly at the centers of the irises. The separability filter proposed by Fukui and Yamaguchi [8] is utilized in an efficient way to shift the candidate points within a small neighborhood. Using the template in Fig. 2, the separability value $\eta$ is computed for each point in the neighborhood by the following equation:

$$\eta = \frac{B}{A}, \qquad A = \sum_{i=1}^{N} \bigl( I(x_i, y_i) - P_m \bigr)^2, \qquad B = n_1 (P_1 - P_m)^2 + n_2 (P_2 - P_m)^2 \qquad (2)$$

where $n_k$ ($k = 1, 2$) is the number of pixels in $R_k$; $N = n_1 + n_2$; $P_k$ ($k = 1, 2$) is the average intensity in $R_k$; $P_m$ is the average intensity in the union of $R_1$ and $R_2$; and $I(x_i, y_i)$ is the intensity value of pixel $(x_i, y_i)$ in the union of $R_1$ and $R_2$.

Separability values for each point in the neighborhood are determined by varying the radius over a range $\{R_L, \ldots, R_U\}$. The point in the neighborhood which gives the maximum separability is taken as the new candidate point. We also record the separability value of each new candidate point and its corresponding optimal radius R among $\{R_L, \ldots, R_U\}$ [9]. These separability and radius values for the new candidate points are used later. Fig. 1(d) shows the tuned candidate points.

Fig. 2: An eye template (R1 is the region inside the smaller circle and R2 is the region between the two concentric circles).
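A minimal sketch of the separability value of (2) and the neighborhood search described above; the outer radius of the template (taken as 2r) and the search ranges are assumptions, since the text does not fix them.

```python
import numpy as np

def separability(gray, cx, cy, r, outer_scale=2.0):
    """Separability eta = B/A of eq. (2) for a template centered at (cx, cy)."""
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    r1 = gray[d2 <= r ** 2].astype(np.float64)                    # inner disk R1
    r2 = gray[(d2 > r ** 2) & (d2 <= (outer_scale * r) ** 2)].astype(np.float64)
    if r1.size == 0 or r2.size == 0:
        return 0.0
    union = np.concatenate([r1, r2])
    pm = union.mean()
    a = float(((union - pm) ** 2).sum())                          # A in eq. (2)
    b = r1.size * (r1.mean() - pm) ** 2 + r2.size * (r2.mean() - pm) ** 2  # B
    return b / a if a > 0 else 0.0

def tune_candidate(gray, cx, cy, radii=range(3, 8), shift=2):
    """Shift a candidate within a small neighborhood, keeping the best radius."""
    return max(((separability(gray, x, y, r), x, y, r)
                for x in range(cx - shift, cx + shift + 1)
                for y in range(cy - shift, cy + shift + 1)
                for r in radii),
               key=lambda t: t[0])  # -> (eta, x, y, R), reused in section 6
```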

6. Iris Pair Selection

We combine three metrics to measure the fitness of each candidate point as an iris and select the iris pair.

Mean crossing function

A rectangular subregion is formed around each iris candidate. The size of the subregion is depicted in Fig. 3, where R is the radius of the candidate determined in section 5.

Fig. 3: Subregion for mean crossing function

The subregion is scanned horizontally and the mean crossing function [10] for pixel (i, j) is computed as follows:

$$C(i,j) = \begin{cases} 1, & \text{if } I(i,j) \ge A \text{ and } I(i,j+1) < A \\ 1, & \text{if } I(i,j) < A \text{ and } I(i,j+1) \ge A \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$

where A is a constant. The horizontal mean crossing value for the subregion is determined as

$$C_{\text{subregion}} = \sum_{i=1}^{M} \sum_{j=1}^{N} C(i,j) \qquad (4)$$

In a similar way, the vertical mean crossing value is evaluated by scanning the subregion vertically. To find the final mean crossing value for the subregion, we linearly add both mean crossing numbers.
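As a concrete illustration, here is a minimal NumPy sketch of the combined mean crossing count of (3) and (4); defaulting A to the mean intensity of the subregion is an assumption, since the text only calls it a constant.

```python
import numpy as np

def mean_crossing_count(subregion, A=None):
    """Count crossings of the level A along rows and columns of the subregion."""
    s = subregion.astype(np.float64)
    if A is None:
        A = s.mean()                      # assumed choice of the constant A
    above = s >= A
    horizontal = np.count_nonzero(above[:, 1:] != above[:, :-1])  # eq. (3)-(4), row scan
    vertical = np.count_nonzero(above[1:, :] != above[:-1, :])    # same, column scan
    return horizontal + vertical          # linear addition of both counts
```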

Convolution with edge image subregion

First we find the edge image of the subregion around the candidate point. The size of the subregion is the same as the mask in Fig. 4. The edge image of the subregion is convolved with the convolution kernel shown in Fig. 4. The radius of the template is equal to the radius of the candidate determined in section 5. The center of the template is placed on the candidate point and the value of the convolution is determined. The process is repeated for each candidate. The resultant signal from the convolution is summed up and a single value is obtained.
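A sketch of this metric under stated assumptions: Canny is used to obtain the edge image, and a one-pixel ring stands in for the Fig. 4 kernel, which is not reproduced in this text.

```python
import cv2
import numpy as np

def ring_convolution_score(gray, cx, cy, r, pad=2):
    """Sum of edge pixels of the subregion lying on a circle of radius r."""
    size = 2 * (r + pad) + 1
    kernel = np.zeros((size, size), np.float32)
    cv2.circle(kernel, (r + pad, r + pad), r, 1.0, 1)   # ring of radius r
    x0, y0 = cx - (r + pad), cy - (r + pad)
    if x0 < 0 or y0 < 0:
        return 0.0                                      # too close to the border
    patch = gray[y0:y0 + size, x0:x0 + size]
    if patch.shape != kernel.shape:
        return 0.0
    edges = cv2.Canny(patch, 50, 150).astype(np.float32) / 255.0
    return float((edges * kernel).sum())                # template centered on candidate
```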


8.2 Eye State Identification

Figs. 9 and 10 show images of the same subject with a varying amount of eye closure. The detected radius and the state classified by the algorithm are shown. For both subjects, the face image determined in section 3 is in the range of 140 × 140 to 160 × 160 pixels, and the threshold chosen is 3.

Fig. 9: (a) Detected iris radius by algorithm = 4, classified as Open Eyes. (b) Detected iris radius by algorithm = 3, classified as Open Eyes. (c) Detected iris radius by algorithm = 2, classified as Closed Eyes.

Fig. 10: (a) Detected iris radius by algorithm = 4, classified as Open Eyes. (b) Detected iris radius by algorithm = 2, classified as Closed Eyes. (c) Iris pair not detected in section 6, classified as Closed Eyes.

The leftmost images show the normal state of the eyes. For both subjects the normal iris radius = 4. For these images, iris radius = 3 can be taken as a good threshold to classify subjects as drowsy.
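The decision rule illustrated by Figs. 9 and 10 reduces to a threshold on the detected radius; a minimal sketch, assuming the normal-state radius of 4 and threshold of 3 reported above:

```python
def classify_eye_state(detected_radius, threshold=3):
    """detected_radius: iris radius from section 6, or None if no pair was found."""
    if detected_radius is None or detected_radius < threshold:
        return "closed"
    return "open"

print(classify_eye_state(4))     # open  (Fig. 9a / 10a)
print(classify_eye_state(3))     # open  (Fig. 9b)
print(classify_eye_state(2))     # closed (Fig. 9c / 10b)
print(classify_eye_state(None))  # closed (Fig. 10c, no iris pair detected)
```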

9. Conclusion and Future Work

The paper attempts to determine eye state by first detecting the iris pair and then analyzing it. We achieve eye state identification in five steps: (1) face detection, (2) eye candidate detection, (3) tuning candidate points, (4) iris pair selection, and (5) eye analysis. The contribution of this paper mainly starts from step 3, after eye candidate points are found by AdaBoost. In step 3, candidate points are shifted in a way that greatly improves the accuracy of iris localization, and radius values are found for the candidate points, which include the irises of both eyes. Step 4 utilizes three metrics and can robustly filter out candidate points. For testing purposes, three popular databases, Bern, Yale and BioID, are used. We will further work on the algorithm to make iris pair localization more robust and to add metrics to the eye analysis step which can determine the level of drowsiness and the eye state more precisely, automatically, and robustly.

References

[1] T. D'Orazio, M. Leo, A. Distante, "Eye detection in face images for a driver vigilance system", 2004 IEEE Intelligent Vehicles Symposium.
[2] C.H. Morimoto, D. Koons, A. Amir, M. Flickner, "Pupil detection and tracking using multiple light sources", Image and Vision Computing 18 (2000) 331-335.
[3] P. Viola, M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features", in Proc. Computer Vision and Pattern Recognition Conference 2001, Vol. 1 (2001) 511-518.
[4] T. Hamada, T. Ito, K. Adachi, T. Nakano, S. Yamamoto, "Detecting method for drivers' drowsiness applicable to individual features", in Proc. Intelligent Transportation Systems, Vol. 2, 2003, pp. 1405-1410.
[5] Qiang Ji, Xiaojie Yang, "Real-time eye, gaze, and face pose tracking for monitoring driver vigilance", Real-Time Imaging, Vol. 8, Issue 5, 2002, pp. 357-377.
[6] D. Dinges, R. Grace, "PERCLOS: A Valid Psychophysiological Measure of Alertness as Assessed by Psychomotor Vigilance", 1998, TechBrief FHWA-MCRT-98-006.
[7] M. Castrillon-Santana, J. Lorenzo-Navarro, O. Deniz-Suarez, J. Isern-Gonzalez, A. Falcon-Martel, "Multiple Face Detection at Different Resolutions for Perceptual User Interfaces", 2nd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 3522, pp. 445-452, 2005.
[8] K. Fukui, O. Yamaguchi, "Facial feature point extraction method based on combination of shape extraction and pattern matching", Trans. IEICE Japan J80-D-II (8) (1997) 2170-2177 (in Japanese).
[9] T. Kawaguchi, M. Rizon, "Iris detection using intensity and edge information", Pattern Recognition 36 (2003) 549-562.
[10] Chun-Hung Lin, Ja-Ling Wu, "Automatic facial feature extraction by genetic algorithms", IEEE Trans. Image Process. 8 (6) (1999).
[11] http://www.bioid.com/downloads/facedb/index.php
[12] http://cvc.yale.edu/projects/yalefaces/yalefaces.html
[13] http://iamwww.unibe.ch/~kiwww/staff/achermann.html
