Smart White Board Applications Using LCD Screen (2011)


Smart White Board Applications Using LCD Screen: Touchy Screen

    2011

    Team Members:

    Ahmad Ahmad Abdallah Computer Science

    Ahmed Mohammed Naguib Computer Science

    Ahmed Talaat Bakr Computer Science

    Islam Ibrahim Mohammed Computer Science

    Khaled Mohamed Badr Computer Science

    Supervisors

    Dr. Taymoor Nazmy

T.A. Menna Mostafa


    Acknowledgment

Give credit to whom credit is due. For this reason, we would like to thank those who lent us a hand and helped us accomplish this project, without whose support and effort this project would not be in your hands right now. We would like to thank our supervisors, who supervised us day and night and managed everything so that the project could be completed in the best way: Dr. Taymoor Nazmy, who gave us his precious time, provided us with valuable knowledge, and fed us with his great experience, which professionally helped our project; and T.A. Mennat-Allah Mostafa, for her help.


    Abstract

Touch screens have become familiar in everyday life. They became popular at the start of the 21st century, have developed tremendously, and are now portable, as in today's mobile phones. Although touch screens are a highly desirable user interface, this technology has not been widely available to PC end users because of the high costs incurred with large screens. Until recently, touch screens could only sense one point of contact at a time, and few had the capability to sense multiple touches at once. This is starting to change with the commercialization of multi-touch technology.

This project is a human-computer interaction system aimed at turning an LCD screen into a multi-touch screen. It extends the Interactive Wall 2010 project: we set out to handle the future work of Interactive Wall 2010 and to add the following to the system:

- Migrating to an LCD screen
- Enhancing the previous project's segmentation module
- Building a new smart physical environment to overcome the previous project's limitations

By implementing all of the above points, we were able to build a smart whiteboard application that, we hope, can be used by our teaching staff.


Contents

Acknowledgment
Abstract
Problem Definition
Challenges
Methodology
1 - Introduction
Motivation
Similar Applications
The disadvantages of the similar applications
Objectives
Framework overview
2 - Physical Environment
Physical Components
Hardware Specifications
Camera Position
System Adjustment
System Advantages
System Cost
3 - Calibration Module
Overview
Heuristic Algorithm
Challenges
4 - Segmentation Module
Overview
Skin Detection Algorithm
Conclusion
Challenges
5 - Tracker Module
Overview
Haar-Like Features
Optical Flow Method
Hand Tracking Algorithm Steps
Module result
Challenges
6 - Gesture Recognition Module
Overview
Algorithm
Experimental Results
Challenges
7 - Touch Module
Algorithm
Explanation
Experimental Result
Challenges
8 - Events Module
Overview
Explanation
9 - System Controller Module
Overview
Explanation
System Flow Diagram
Applications
Photo Application
Simple Paint Application
Conclusion
Future Work
Appendices
Appendix A: User Manual
Appendix B: Methods for determining optical flow
References


[Touchy Screen]

    [Chapter 1]


    Problem Definition

Humans normally interact with one another using motions; it can be annoying and impractical to use hardware equipment to interact with someone or something. It would be more comfortable, effective, and user friendly if the user could interact directly with the display device without any hardware equipment. The project goal is therefore to deliver a large interactive multi-touch screen with appropriate cost, efficiency, and ease of use in real-life applications.

The interactive screen uses an LCD display and multiple cameras to detect hand gestures and Z-depth information, allowing multi-touch sensing on the screen. The user can interact with the computer using predefined gestures that map to certain events; the events can be drag, move, resize, zoom, etc.


    Challenges

Our team faced many challenges during our work on the project, starting from finding a suitable new physical environment and suitable camera positions, through to the challenges we faced in each module of our system (e.g., hand segmentation and hand tracking). Each module's challenges are discussed in detail in its chapter.

    Methodology

Many methodologies were studied and implemented to accomplish our objectives. Although they are discussed in detail in the upcoming chapters, we give an abstract idea about them in the following points:

    1. Haar-Like Features

2. Optical Flow (Lucas-Kanade)

    3. Convex Hull Using Active Contours

    4. Heuristic Skin Detection in RGB Mode


[Touchy Screen]

    [Chapter 2]


    Introduction

    Motivation

As we live in the technology age, one can see the incredible growth of applications that use the touch screen feature. Concerning HCI (Human-Computer Interaction) interfaces, the future demands usability that makes HCI very friendly to most end users. Current multi-touch screens are expensive, and the bigger the touch screen, the more expensive it becomes; our system therefore provides a large touch screen at an appropriate cost. One of our biggest motivations was to continue the future work of the Interactive Wall 2010 project; the results of last year's project were stunning, which made us very enthusiastic, so we decided to improve on that work and add more features to the system.

[Image: an interactive whiteboard for primary classrooms]


    Similar Applications

Similar applications were found during the research phase. One of them is a projector system, but it uses an infrared camera and lets the user interact with the screen using an infrared pen (see Figure 1-2). Other similar applications use a projector and a single camera but make the user wear gloves to mark the hand, so that the user can interact with the screen using hand gestures (see Figure 1-1).


Figure 1-2 - Interactive projector system that uses infrared pens

    The disadvantages of the similar applications

- They require special equipment, namely the infrared pen.

- They use hand gloves to mark the hand, which is impractical for the user.

Our system overcomes all of the above limitations: the user does not need to hold any special equipment and only needs his or her hand to interact.


    Objectives

1. Flexibility: the new framework of the interactive wall should provide more flexibility in use.

2. Automatic hand detection: the new framework should be able to detect the hand from any point on the screen, unlike Interactive Wall 2009, which restricted the user to a predefined entry point.

3. Touch detection: the framework should be able to detect any touch that occurs on the screen.

4. Performance: the framework should be faster than the previous frameworks.


Framework overview

The framework consists of the following modules:

- Interface Module
- System Controller Module
- Events Module
- Gesture Recognition Module
- Segmentation Module
- Tracker Module
- Touch Module
- Calibration Module
- Input Module


Input Module

Captures images from the three cameras.
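A minimal sketch of this step, assuming the Emgu CV wrapper listed in Appendix A (its VideoCapture class) and assuming the three cameras enumerate as devices 0, 1, and 2:

using Emgu.CV;

// Sketch of the input module: grab one frame from each of the three
// webcams. The device indices are assumptions and depend on how the
// cameras enumerate on the machine.
class InputModule
{
    private readonly VideoCapture[] cameras =
    {
        new VideoCapture(0), // front camera (gestures)
        new VideoCapture(1), // side camera (touch distance)
        new VideoCapture(2)  // upper camera (touch mapping)
    };

    public Mat[] CaptureFrames()
    {
        var frames = new Mat[cameras.Length];
        for (int i = 0; i < cameras.Length; i++)
            frames[i] = cameras[i].QueryFrame(); // grab and decode the next frame
        return frames;
    }
}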

Calibration Module

Puts four green points around the screen to get the height, width, and position of the screen.

Touch Module

This module is responsible for detecting whether there is a touch, and for firing the event when a touch occurs.

Tracker Module

This module is responsible for hand tracking. The algorithm used is a combination of Haar-like feature classifiers and the pyramidal Lucas-Kanade optical flow algorithm.

Segmentation Module

This module is responsible for segmenting the captured frame to generate a binary image that represents the hand as white pixels only. Initially it segments the whole captured frame from the first camera, but in subsequent frames it segments only the search window to find the hand; the segmented image is then passed to the gesture recognition module.

Gesture Recognition Module

Takes the binary image resulting from the segmentation module, applies active contours, and returns the number of fingers.


Events Module

This module is responsible for mapping the incoming gesture type to an actual computer event.

System Controller Module

The main task of this module is to coordinate the independent modules in our system; it controls the data flow and the interactions between the other independent modules.

Interface Module

For good design, this module has been implemented to separate the graphical user interfaces from the logic modules; it simply consists of some views/Windows Forms for user interfacing.


    [Touchy Screen]

    [Chapter 3]


Physical Environment

The physical environment for the Smart White Board interactive screen consists of three web cameras and an LCD screen. The second and third cameras were added to detect whether the user has touched the screen.

Physical Components

LCD Screen

Three Webcams

HDMI Cable


Hardware Specifications

Web Camera: Eye Toy
Still image: 1.3 megapixels
Video resolution: 1.3 megapixels
Video quality: Medium
Frame rate: 30 fps


Web Camera: Microsoft LifeCam Cinema
Still image: 5 megapixels, interpolated (2880 x 1620)
Video resolution: 1280 x 720 pixels (HD)
Video quality: Very high
Frame rate: up to 30 fps

Machine (Computer)
Processor: Intel Core 2 Duo (Centrino) 2.53 GHz
System type: 64-bit operating system
RAM: 4.00 GB

Camera Position

The first camera is on top of the screen, facing forward.
The second camera is on the side of the screen.
The third camera is above the screen, looking down onto the screen.


System Adjustment

[Diagram: camera arrangement; the first camera faces the user to capture gestures, while the second and third cameras watch the screen surface to detect touch]


    System Advantages

1. No shadow effect.

2. Minimized blind areas.

3. The user's hand gesture is clear.

4. Flexibility to keep the lights on.

5. It acts like a large touch screen at low cost.

6. There is no projector, so no projector light falls on the user's hand, which makes segmentation by skin detection an easy task.

    System Cost

Primary camera: 450 L.E.
Second and third cameras: 200 L.E.
LCD screen (46 inch): 6666 L.E.
HDMI cable: 50 L.E.
Total: 7366 L.E.


    [Touchy Screen]

    [Chapter 4]


    Calibration Module

    Overview

It is responsible for acquiring the settings needed by the touch module in order to work in the user's environment; it detects the four corners using the green points seen by the side camera, and then gets the height, width, and position of the screen.

    Heuristic Algorithm

For each input image:

1. Check whether the current point is a green point.
2. If both the first and second points have been found, calculate the difference between the positions of the two points, then stop.
3. Otherwise, go to step 1.
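A hedged sketch of the green-point check, assuming Emgu CV image types; the RGB thresholds that define "green" are illustrative assumptions, since the report does not state the exact values:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

static class Calibration
{
    // Scan the side-camera frame for the first pixel that qualifies as a
    // green marker. Thresholds below are assumed, not taken from the report.
    public static Point? FindGreenPoint(Image<Bgr, byte> frame)
    {
        for (int y = 0; y < frame.Height; y++)
            for (int x = 0; x < frame.Width; x++)
            {
                Bgr px = frame[y, x];
                if (px.Green > 150 && px.Red < 100 && px.Blue < 100)
                    return new Point(x, y);
            }
        return null; // no marker in this frame; try the next one
    }
}

Once the first and second markers are found, the difference between their positions gives the screen's extent in camera coordinates, as step 2 describes.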


    Challenges

- The algorithm depends on the environment, so every time the camera position changes, the calibration must be restarted.


    [Touchy Screen]

    [Chapter 5]


    Segmentation Module

    Overview

The segmentation module is responsible for one main task: generating a binary image from the captured image that represents the foreground, which in our system is the user. A method is then needed to detect the hand in the generated binary image. This module initially segments the whole captured image for the first frame; afterwards it segments only a part of each captured frame called the search window, the window that predicts the position of the hand in each frame.

    Skin Detection Algorithm

We tried five algorithms in order to reach the most satisfying one for the segmentation module. These algorithms are discussed in the following sections:


1 - A Robust Method for Skin Detection and Segmentation

Algorithm Description Diagram

[Diagram: Image → Color-corrected image → Image in YCbCr mode → Segmented image]


    Experimental Result

[Image: hand detection result]


2 - Improved Automatic Skin Detection in Color Images

Algorithm Description Diagram

[Diagram: Image → Segmented image]


    Experimental Results


3 - Heuristic Skin Detection in RGB Mode

Algorithm Description Diagram

[Diagram: Image → Segmented image]
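The report does not list the thresholds of this heuristic; as an illustration, a widely cited explicit RGB skin rule from the literature looks like this (the constants are assumed, not the project's):

using System;

static class RgbSkin
{
    // Classify a pixel as skin using a common daylight RGB rule.
    // These thresholds come from the literature, not from this report.
    public static bool IsSkin(byte r, byte g, byte b)
    {
        int max = Math.Max(r, Math.Max(g, b));
        int min = Math.Min(r, Math.Min(g, b));
        return r > 95 && g > 40 && b > 20
            && (max - min) > 15     // enough spread between channels
            && Math.Abs(r - g) > 15 // red clearly dominates green
            && r > g && r > b;      // red is the strongest channel
    }
}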


    Experimental Result


4 - Explicit Skin Color Classifier Based on RGB Components

Algorithm Description Diagram

[Diagram: Image → Normalized image → Segmented image based on skin region]


    Experimental Result


5 - Skin Detection Using HSV Color Space

Algorithm Description Diagram

[Diagram: Image → Image in HSV mode → Segmented image]


    Experimental Result


    Conclusion

We implemented all of these algorithms and compared their results. We used the YCbCr algorithm in the touch and recognition modules as the segmentation algorithm, since it had the most stable results under different lighting.
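As a minimal sketch, such a YCbCr segmentation pass might look like the following with Emgu CV; the Cb/Cr ranges (77-127 and 133-173) are common literature values and are assumptions, not values taken from this report:

using Emgu.CV;
using Emgu.CV.Structure;

static class YCbCrSegmentation
{
    // Convert the frame to YCbCr and threshold the chrominance channels;
    // skin pixels become white in the returned binary mask.
    public static Image<Gray, byte> Segment(Image<Bgr, byte> frame)
    {
        Image<Ycc, byte> ycc = frame.Convert<Ycc, byte>();
        var mask = new Image<Gray, byte>(frame.Size);
        for (int y = 0; y < ycc.Height; y++)
            for (int x = 0; x < ycc.Width; x++)
            {
                Ycc px = ycc[y, x];
                bool skin = px.Cb >= 77 && px.Cb <= 127
                         && px.Cr >= 133 && px.Cr <= 173;
                mask.Data[y, x, 0] = skin ? (byte)255 : (byte)0;
            }
        return mask;
    }
}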

    Challenges

- Skin-based segmentation techniques are very sensitive to lighting and to changes in the environment.

- Any object with a color similar to skin color might be incorrectly classified as skin.


    [Touchy Screen]

    [Chapter 6]


    Tracker Module

    Overview

The implemented algorithm is a combination of Haar-like feature classifiers and an optical flow tracking method, used together to detect the hand reliably. First we obtain the region of interest, the hand, by detecting it with a Haar classifier; then we compute the features that will be tracked with the optical flow method.

    Haar-Like Features

    Overview

In this algorithm we use Haar-like features. A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. This is the case for Haar-like features, which encode the existence of oriented contrasts between regions in the image. A set of these features can be used to encode the contrasts exhibited by a human face and their spatial relationships. Haar-like features are so called because they are computed similarly to the coefficients in Haar wavelet transforms.

The object detector of OpenCV was initially proposed by Paul Viola and improved by Rainer Lienhart. First, a classifier (namely a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and with negative examples, arbitrary images of the same size.


    After a classifier is trained, it can be applied to a region of interest (of the same

    size as used during the training) in an input image. The classifier outputs a "1" if

    the region is likely to show the object (i.e., face/car), and "0" otherwise. To search

    for the object in the whole image one can move the search window across the

    image and check every location using the classifier. The classifier is designed so

    that it can be easily "resized" in order to be able to find the objects of interest at

    different sizes, which is more efficient than resizing the image itself. So, to find an

    object of an unknown size in the image the scan procedure should be done

    several times at different scales.

    The word "cascade" in the classifier name means that the resultant classifier

    consists of several simpler classifiers (stages) that are applied subsequently to a

    region of interest until at some stage the candidate is rejected or all the stages

    are passed. The word "boosted" means that the classifiers at every stage of the

    cascade are complex themselves and they are built out of basic classifiers using

    one of four different boosting techniques (weighted voting). Currently Discrete

    Adaboost, Real Adaboost, Gentle Adaboost and Logitboost are supported. The

basic classifiers are decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic classifiers. The feature used in a particular classifier is specified by its shape, its position within the region of interest, and its scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied).
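As an illustrative sketch, the detection step with Emgu CV's cascade wrapper might look like this; the file name "hand.xml", the scale factor 1.1, and the minimum of 3 neighbors are assumed values:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

static class HandDetector
{
    // Detect hand candidates by scanning the frame at multiple scales,
    // as the text above describes for the cascade classifier.
    public static Rectangle[] Detect(Mat frame, CascadeClassifier cascade)
    {
        using (var gray = new Mat())
        {
            CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);
            CvInvoke.EqualizeHist(gray, gray); // normalize contrast first
            // scaleFactor 1.1 and minNeighbors 3 are typical assumed values
            return cascade.DetectMultiScale(gray, 1.1, 3, new Size(20, 20));
        }
    }
}

The classifier would be loaded once, e.g. new CascadeClassifier("hand.xml"), where "hand.xml" stands in for the hand cascade the team trained.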


    Optical Flow Method

    Overview

Optical flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between the observer (an eye or a camera) and the scene. It is used in many fields: motion detection, object segmentation, time-to-collision and focus-of-expansion calculations, motion-compensated encoding, and stereo disparity measurement all utilize this motion of the objects' surfaces and edges. Optical flow can also be used for video compression and for estimating the three-dimensional nature and structure of the scene, as well as the 3D motion of objects and of the observer relative to the scene. It has been used by robotics researchers in many areas, such as object detection and tracking, image dominant-plane extraction, movement detection, robot navigation, and visual odometry. Optical flow information has also been recognized as useful for controlling micro air vehicles.

Estimation of Optical Flow

Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements. Optical flow methods try to calculate the motion between two image frames taken at times t and t + Δt at every voxel position. These methods are called differential since they are based on local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates.

For a 2D+t dimensional case (3D or n-D cases are similar), a voxel at location (x, y, t) with intensity I(x, y, t) will have moved by Δx, Δy, and Δt between the two image frames, and the following image constraint equation can be given:

I(x, y, t) = I(x + Δx, y + Δy, t + Δt)

Assuming the movement to be small, the image constraint at I(x, y, t) can be developed with a Taylor series to get:

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + H.O.T.

From these equations it follows that:

(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0

or, dividing by Δt:

(∂I/∂x)Vx + (∂I/∂y)Vy + ∂I/∂t = 0

where Vx and Vy are the x and y components of the velocity or optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y, and ∂I/∂t are the derivatives of the image at (x, y, t) in the corresponding directions, written Ix, Iy, and It in the following. Thus:

IxVx + IyVy = -It

This is one equation in two unknowns and cannot be solved as such. This is known as the aperture problem of optical flow algorithms. To find the optical flow, another set of equations is needed, given by some additional constraint. All optical flow methods introduce additional conditions for estimating the actual flow.

    Methods for determining optical flow

There are many algorithms that implement the optical flow method; in our system we used the pyramidal Lucas-Kanade algorithm. Other methods are discussed in Appendix B.
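A hedged sketch of one pyramidal Lucas-Kanade tracking step through Emgu CV; the window size, pyramid depth, and termination criteria are assumed typical values:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

static class LkTracker
{
    // Track feature points from the previous frame into the current one
    // using the pyramidal Lucas-Kanade method described above.
    public static PointF[] Track(Image<Gray, byte> prev, Image<Gray, byte> curr,
                                 PointF[] prevFeatures)
    {
        PointF[] currFeatures;
        byte[] status;      // 1 where the feature was successfully tracked
        float[] trackError; // per-feature tracking error
        CvInvoke.CalcOpticalFlowPyrLK(
            prev, curr, prevFeatures,
            new Size(15, 15),               // per-level search window
            3,                              // pyramid levels
            new MCvTermCriteria(20, 0.03),  // iteration/epsilon stopping rule
            out currFeatures, out status, out trackError);
        return currFeatures;
    }
}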


    Hand Tracking Algorithm Steps

1. Create samples: a few hundred sample views of a hand, called positive examples, scaled to the same size (say, 20x20), and negative examples, arbitrary images of the same size.

2. Training process: the resultant classifier consists of several simpler classifiers (stages) that are applied subsequently to a region of interest until at some stage the candidate is rejected or all the stages are passed.

3. Detection: after the classifier is trained, it is applied to a region of interest (of the same size as used during training) in an input image. The classifier outputs a "1" if the region is likely to show the hand and "0" otherwise. To search for the hand in the whole image, the search window is moved across the image and every location is checked with the classifier; since the classifier can be "resized" to find hands of different sizes, the scan procedure is repeated at several scales.

4. From the tracking area determined by the Haar classifier, we get the hand features to track.

5. Then, using optical flow, we capture the current frame and the next frame and pass them to a function that computes the next features to track.

6. The previous step returns the feature positions, and we then draw at those positions.


    Module result


    Challenges

- Training a hand Haar classifier requires a large dataset of hand pictures and a large number of negative pictures.

- The process of training the Haar-like classifier is very time consuming and takes a lot of effort.

- Overtraining the Haar-like classifier makes it too slow.

- A Haar classifier might not detect the hand against a highly complex background.

- The optical flow method might have problems when the hands overlap the face.


    [Touchy Screen]

    [Chapter 7]


    Gesture Recognition Module

    Overview

This module is responsible for recognizing gestures using active contours. It recognizes the gesture from the given binary image and returns the event associated with the recognized gesture; we use the gesture name, and according to the gesture name we fire the desired event.

At the beginning, it takes the picture from the segmentation module as a binary image containing the hand, then computes a convex hull on the hand using active contours to count how many fingers are visible in the picture.

    Algorithm

1. Calculate the blobs from the binary image.
2. Get the biggest blob.
3. Calculate the active contour.
4. Count the number of fingers.
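The report counts fingers from a convex hull obtained with active contours; as an illustrative stand-in, a common way to implement the counting step with Emgu CV is to count deep convexity defects (the valleys between fingers) on the largest contour. The depth threshold below is an assumption:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

static class FingerCounter
{
    public static int Count(Mat binaryHand)
    {
        using (var contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.FindContours(binaryHand, contours, null,
                RetrType.External, ChainApproxMethod.ChainApproxSimple);
            if (contours.Size == 0) return 0;

            // Steps 1-2: the contour with the largest area is the hand blob.
            int biggest = 0;
            double maxArea = 0;
            for (int i = 0; i < contours.Size; i++)
            {
                double area = CvInvoke.ContourArea(contours[i]);
                if (area > maxArea) { maxArea = area; biggest = i; }
            }

            // Steps 3-4: hull indices, then convexity defects on the contour.
            using (var hull = new VectorOfInt())
            using (var defects = new Mat())
            {
                CvInvoke.ConvexHull(contours[biggest], hull, false, false);
                CvInvoke.ConvexityDefects(contours[biggest], hull, defects);
                if (defects.IsEmpty) return 0;

                // Each row stores (start index, end index, farthest point
                // index, depth * 256); copy into a matrix we can read.
                var m = new Matrix<int>(defects.Rows, defects.Cols,
                                        defects.NumberOfChannels);
                defects.CopyTo(m);

                int valleys = 0;
                for (int i = 0; i < m.Rows; i++)
                    if (m.Data[i, 3] / 256.0 > 20) // assumed depth threshold
                        valleys++;

                // n deep valleys correspond to roughly n + 1 raised fingers.
                return valleys + 1;
            }
        }
    }
}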


    Experimental Results

[Images: recognition results for one, two, three, and five fingers]


    Challenges

- The algorithm used requires the input images to be correctly segmented.

- Some results are not accurate due to the unclarity of the gesture made.


    [Touchy Screen]

    [Chapter 8]


    Touch Module

We capture an image from the side camera and segment it using our segmentation algorithm, then get the biggest blob, which is the hand. We get its position and compute the difference between that position and the screen position; if the difference is within an acceptable range (for example, 10 pixels), there is a touch. We then capture an image from the upper camera, segment it, get the biggest blob and its position, map that position to the coordinates of the screen, and fire the event.

    Algorithm

1. Capture each image from the stream.
2. Segment the frame using YCbCr.
3. Find the biggest blob in the picture (the hand) with suitable dimensions.
4. Measure the distance of the biggest blob from a certain virtual line (the vertical line that represents the screen position).
5. Capture from the third camera for mapping.

    Explanation

1. Capture each image from the stream: capture images from the side camera and the upper camera.

2. Segment the hand from the whole image: in this step we segment the object of interest, in our case the user's hand, to process it and find its position, simply by using the YCbCr segmentation algorithm.

3. Find the biggest blob in the picture (the hand): this step recognizes the object of interest, here the user's hand. Using the biggest-blob technique from the AForge library, we can easily get the biggest blob in the image, which is the user's hand, and discard any remaining noise; this gives us the ability to measure the exact distance between the hand and the screen.

4. Measure the distance of the biggest blob from the screen: in this step we calculate the difference between the blob position and the screen position and compare it to a certain value; if it is less than that value, there is a touch.

5. Capture from the third camera for mapping: we capture an image from above the screen, segment it, get the biggest blob and its position, map that position to the coordinates of the screen, and fire the event.
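A hedged sketch of the distance test in steps 3 and 4, using the AForge blob counter that the explanation mentions; the name screenLineX and the 10-pixel threshold follow the example in the text and are otherwise assumptions:

using System;
using System.Drawing;
using AForge.Imaging;

static class TouchModule
{
    // Decide whether the hand touches the screen: find the biggest blob in
    // the segmented side-camera image and compare its distance to the
    // calibrated screen line against a small threshold.
    public static bool IsTouching(Bitmap segmentedSideImage, int screenLineX)
    {
        var counter = new BlobCounter();
        counter.ProcessImage(segmentedSideImage);
        Blob[] blobs = counter.GetObjectsInformation();
        if (blobs.Length == 0) return false;

        Blob hand = blobs[0];      // assume the biggest blob is the hand
        foreach (Blob b in blobs)
            if (b.Area > hand.Area) hand = b;

        int distance = Math.Abs(hand.Rectangle.X - screenLineX);
        return distance <= 10;     // threshold from the example above
    }
}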

Experimental Result

[Image: touch detection result showing the detected X position and Y position]

Challenges

- The cameras used for touch must be placed far enough from the screen to view the entire width and height of the screen.



    [Touchy Screen]

    [Chapter 9]


    Events Module

    Overview

This module is responsible for handling and firing events. When the static shape recognition module recognizes the gesture shape, it sends the events module the gesture arguments; the events module checks the gesture name, the hand position, and the gesture type (one hand, two hands, or dynamic), and fires the event for this gesture under some conditions.

Explanation

For each recognized image, the number of fingers is returned; each number has an event associated with it, for example:

Left click

Double click

Right click

Move mouse
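A hedged sketch of such a mapping using the Win32 mouse_event call; the specific count-to-event assignment is an assumption, since the report lists the events but not which finger count fires which:

using System;
using System.Runtime.InteropServices;

static class EventsModule
{
    [DllImport("user32.dll")]
    static extern void mouse_event(uint flags, uint dx, uint dy, uint data, UIntPtr extraInfo);

    const uint LEFTDOWN = 0x0002, LEFTUP = 0x0004;
    const uint RIGHTDOWN = 0x0008, RIGHTUP = 0x0010;

    // Map a recognized finger count to a mouse event at the current cursor
    // position; cursor movement itself is driven by the tracker's updates.
    public static void Fire(int fingers)
    {
        switch (fingers)
        {
            case 1: // left click
                mouse_event(LEFTDOWN | LEFTUP, 0, 0, 0, UIntPtr.Zero);
                break;
            case 2: // double click: two left clicks in quick succession
                Fire(1);
                Fire(1);
                break;
            case 3: // right click
                mouse_event(RIGHTDOWN | RIGHTUP, 0, 0, 0, UIntPtr.Zero);
                break;
        }
    }
}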


    System Controller Module

    Overview

The controller module manages the interaction and data flow between all the other modules in the framework; in other words, it handles all the function calls to the other modules.

Explanation

First, the two cameras responsible for touch send two images to the calibration module to extract information that will be helpful in touch detection, for instance the width and height of the screen in the images, using the green marks.

Second, the system enters a loop until it is closed, as follows:

- The segmentation module receives three images from the three cameras.

- The system detects which mode should be active.

- In gesture mode, the front camera sends images to the segmentation module to extract the hand from the image.

- The hand image is then sent to the recognition module, which returns the number of fingers.

- Each number is passed to the events module to fire its specific event, and the loop repeats.

- If touch is detected, the upper and side cameras send two images, together with the information calculated in the calibration module, to the touch module, which calculates the X and Y position and maps it to actual pixels to get the actual X and Y point on the real LCD screen; then the loop repeats.


    System Flow Diagram

[Diagram: the three cameras feed the system; calibration extracts touch information from the images, each frame is segmented, and the system detects its mode: in touch mode it fires the touch module, while in gesture mode it tracks the hand, recognizes the gesture, and fires the corresponding event]


    [Touchy Screen]

    [Chapter 10]

    Applications


    Photo Application

    Overview

A program that enables the user to:

- open images;
- move opened images around the screen;
- resize the opened images by increasing or decreasing their size.

Experimental Result


    Simple Paint Application

    Overview

A program that enables the user to:

- draw freely on the screen;
- choose the size of the brush and the color of the pen.

    Experimental Result


    [Touchy Screen]

    [Chapter 11]


    Conclusion

1. We managed to build an interactive smart board based on webcams and an LCD screen.

2. The system is capable of recognizing four types of gestures and detecting touch on the screen.

3. We conclude that a segmentation algorithm that does not depend on skin color would be better, because it would not be affected by lighting conditions.

4. The system can be implemented easily in any classroom.

5. The system can work with any laptop camera.


    Future Work

- Using Kinect cameras for gesture recognition.

- Using infrared for touch detection.

- Enhancing the segmentation algorithms.

- Enhancing the tracking algorithms.

- Building more interactive applications.


    [Touchy Screen]

    [Chapter 12]


    Appendices

Appendix A: User Manual

Before operating the system, check that:

- all webcams are attached to the machine, and the software that operates each camera is installed on the system;

- the LCD is connected to the machine;

- the touch cameras are placed in a proper position that views the entire width and height of the screen;

- Emgu CV and .NET Framework 4 are installed on the machine.


    Operating The System

    Steps for operating the system

    1. Run the program

    2. Enter your profile name and screen size and then click Done

    3. Click Run

4. The green-markers form should appear


5. Click on the corners of the screen to place the green markers, then press OK


6. Repeat this action for the second green-markers form

7. Now the system will run. You can start by placing your hand in front of the gesture camera

8. Make sure your hand is closed at first

9. Start moving and trying different gestures to interact with the screen and use the touch feature


Appendix B: Methods for determining optical flow

    Phase correlation

    Block Based Methods

    Differential methods of estimating optical flow

    Lucas-Kanade Method

Horn-Schunck method

Buxton-Buxton method

Black-Jepson method

Lucas-Kanade Method

The Lucas-Kanade method is a widely used differential method for optical flow estimation, developed by Bruce D. Lucas and Takeo Kanade. It assumes that the flow is essentially constant in a local neighborhood of the pixel under consideration, and solves the basic optical flow equations for all the pixels in that neighborhood by the least squares criterion.

    By combining information from several nearby pixels, the Lucas-Kanade method

    can often resolve the inherent ambiguity of the optical flow equation. It is also

    less sensitive to image noise than point-wise methods. On the other hand, since it

    is a purely local method, it cannot provide flow information in the interior of

    uniform regions of the image.

    Concept

    The Lucas-Kanade method assumes that the displacement of the image contents between two nearby instants (frames) is small and approximately constant within a neighborhood of the point p under consideration. Thus the optical flow equation can be assumed to hold for all pixels within a window centered at p. Namely, the local image flow (velocity) vector (Vx,Vy) must satisfy

Ix(q1)Vx + Iy(q1)Vy = -It(q1)
Ix(q2)Vx + Iy(q2)Vy = -It(q2)
...
Ix(qn)Vx + Iy(qn)Vy = -It(qn)

where q1, ..., qn are the pixels inside the window, and Ix(qi), Iy(qi), It(qi) are the partial derivatives of the image I with respect to position x, y and time t, evaluated at the point qi and at the current time.

These equations can be written in the matrix form A v = b, where

A = [ Ix(q1) Iy(q1)
      Ix(q2) Iy(q2)
      ...
      Ix(qn) Iy(qn) ],   v = [ Vx
                               Vy ],   b = [ -It(q1)
                                             -It(q2)
                                             ...
                                             -It(qn) ]

This system has more equations than unknowns and thus is usually over-determined. The Lucas-Kanade method obtains a compromise solution by the least squares principle. Namely, it solves the 2x2 system

A^T A v = A^T b   or   v = (A^T A)^(-1) A^T b

where A^T is the transpose of matrix A. That is, it computes

[ Vx ]   [ Σ Ix(qi)^2      Σ Ix(qi)Iy(qi) ]^(-1)  [ -Σ Ix(qi)It(qi) ]
[ Vy ] = [ Σ Ix(qi)Iy(qi)  Σ Iy(qi)^2     ]       [ -Σ Iy(qi)It(qi) ]

with the sums running from i = 1 to n.

The matrix A^T A is often called the structure tensor of the image at the point p.

    Weighted window

The plain least squares solution above gives the same importance to all n pixels qi in the window. In practice it is usually better to give more weight to the pixels that are closer to the central pixel p. For that, one uses the weighted version of the least squares equation

A^T W A v = A^T W b   or   v = (A^T W A)^(-1) A^T W b

where W is an n x n diagonal matrix containing the weights Wii = wi to be assigned to the equation of pixel qi. The weight wi is usually set to a Gaussian function of the distance between qi and p.


    References

[1] Paul Viola and Michael Jones, Rapid Object Detection using a Boosted Cascade of Simple Features, Cambridge, USA, 2001.

[2] Filipe Tomaz, Tiago Candeias and Hamid Shahbazkia, Improved Automatic Skin Detection in Color Images, Universidade do Algarve, Campus de Gambelas, Portugal, 2000.

[3] Baozhu Wang, Xiuying Chang and Cuixiang Liu, A Robust Method for Skin Detection and Segmentation, Hebei University of Technology, Tianjin, China, 2009.

[4] Kai Briechle and Uwe D. Hanebeck, Template Matching using Fast Normalized Cross Correlation, Institute of Automatic Control Engineering, Technische Universität München, 80290 München, Germany.

[5] J.P. Lewis, Fast Normalized Cross-Correlation.

[6] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, third edition.

[7] Gary Bradski and Adrian Kaehler, Learning OpenCV.

[8] Alan M. McIvor, Background Subtraction Techniques.

[9] Francesca Gasparini and Raimondo Schettini, Skin Segmentation using Multiple Thresholding, Milano, Italy.

[10] Sushmita Mitra and Tinku Acharya, Gesture Recognition: A Survey, May 2007.

[11] Mennat Allah Mostafa, Nada Sherif, Rana Mohamed and Sarah Ismail, Interactive Wall, Ain Shams University, 2009.

[12] Abubakr Taha, Hadeel Mahmoud, Hager AbdelMotaal, Mahmoud Fayz and Yasmeen Abdelnaby, Interactive Screen, Ain Shams University, 2010.