tracking.js | Documentation

This documentation will introduce you to most of the key concepts in working with tracking.js. Don't worry if you don't understand everything; each of the concepts presented here is described in detail in the source code of the examples (https://github.com/eduardolundgren/tracking.js/tree/master/examples).

    Before you start: getting the project

To get started, download the project (https://github.com/eduardolundgren/tracking.js/archive/master.zip). This project includes all of the tracking.js examples, the source code and the dependencies you'll need to get started.

Unzip the project somewhere on your local drive. The package includes an initial version of the project you'll be working with. While you're working, you'll need a basic HTTP server to serve your pages. Test out the web server by loading the finished version of the project. For example: http://localhost:8000/tracking.js/
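If you don't already have a static server handy, the sketch below is one option using only Node.js built-ins; the file name serve.js is made up, and any other static HTTP server works just as well. Run it with node serve.js from the directory that contains the unzipped tracking.js folder.

// serve.js - a minimal static file server using only Node.js built-ins.
var http = require('http');
var fs = require('fs');
var path = require('path');

var types = {
  '.html': 'text/html',
  '.js': 'application/javascript',
  '.css': 'text/css'
};

http.createServer(function(req, res) {
  var urlPath = decodeURIComponent(req.url.split('?')[0]);
  var filePath = path.join(__dirname, urlPath);
  // Serve index.html for directory requests.
  if (urlPath.slice(-1) === '/') {
    filePath = path.join(filePath, 'index.html');
  }
  fs.readFile(filePath, function(err, content) {
    if (err) {
      res.writeHead(404);
      res.end('Not found');
      return;
    }
    res.writeHead(200, {
      'Content-Type': types[path.extname(filePath)] || 'application/octet-stream'
    });
    res.end(content);
  });
}).listen(8000, function() {
  console.log('Serving on http://localhost:8000/');
});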

    Step 1: Creating an example file

In this step, you'll create an example file under the examples/ folder of the directory where you unzipped the project. Go to this directory and create a file called first_tracking.html in your favorite editor. The starting file is a bare HTML page, titled "tracking.js - first tracking", that loads the tracking.js build and contains an empty script block marked with the comment // Start tracking here...
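A minimal sketch of that starting file; the script path ../build/tracking-min.js is an assumption based on the layout of the downloaded package:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>tracking.js - first tracking</title>
  <script src="../build/tracking-min.js"></script>
</head>
<body>
  <script>
  // Start tracking here...
  </script>
</body>
</html>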

    Step 2: Choose what you want to play with

Now that you have the first_tracking.html example file created, it's time to choose which technique you want to see in action. There are several examples available on this page; the first one, Tracker, is just an abstract class that the other tracking techniques build on, and it cannot be instantiated. One good option to start with is the ColorTracker. Copy the snippets available in this section and paste them into your example file; in the end it should look something like this:

var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);

colors.on('track', function(event) {
  if (event.data.length === 0) {
    // No colors were detected in this frame.
  } else {
    event.data.forEach(function(rect) {
      console.log(rect.x, rect.y, rect.height, rect.width, rect.color);
    });
  }
});

    http://trackingjs.com/docs.html#trackershttp://trackingjs.com/docs.html#trackers

tracking.track('#myVideo', colors);
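Wrapped in the HTML page from Step 1, with a video element added as the tracking source, the complete file would look roughly like the sketch below; the video element's attributes and the build path are assumptions modeled on the bundled examples:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>tracking.js - first tracking</title>
  <script src="../build/tracking-min.js"></script>
</head>
<body>
  <video id="myVideo" width="400" height="300" preload autoplay loop muted></video>
  <script>
  var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);

  colors.on('track', function(event) {
    if (event.data.length === 0) {
      // No colors were detected in this frame.
    } else {
      event.data.forEach(function(rect) {
        console.log(rect.x, rect.y, rect.height, rect.width, rect.color);
      });
    }
  });

  tracking.track('#myVideo', colors);
  </script>
</body>
</html>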

This example will request your camera and track the magenta, cyan and yellow colors that appear in front of it. Look around, grab any object that matches one of those colors, and watch your browser's console; it should display the coordinates of all found objects.

    Trackers

In order to understand how the tracker API works, first you need to instantiate the constructor, passing the targets you want to detect. Note that tracking.Tracker is an abstract class used only to explain how to use the API.

var myTracker = new tracking.Tracker('target');

Once you have the tracker instance, you need to know when something happens; that's why you need to listen for track events:

myTracker.on('track', function(event) {
  if (event.data.length === 0) {
    // No targets were detected in this frame.
  } else {
    event.data.forEach(function(data) {
      // Plots the detected targets here.
    });
  }
});

Now that you have the tracker instance listening for the track event, you are ready to start tracking by invoking the track implementation, myTracker.track(pixels, width, height). This method handles all

the internal logic that processes the pixels and extracts the targets from them.

But don't worry, you don't need to read the <video>, <canvas> or <img> pixels manually; tracking.js provides a utility that handles that for you:

var trackerTask = tracking.track('#myVideo', myTracker);

It's also possible to plug the tracker instance into other elements. When tracking a <canvas> or an <img>, the utility tracking.track('#image', myTracker) invokes myTracker.track(pixels, width, height) only once. All the required arguments are filled in automatically, e.g. the array of pixels, the width and the height. When used with a <video> node it is a little bit different: the internal track implementation is executed for each video frame.
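If you ever need to drive a tracker manually on pixels you already have, the sketch below reads a canvas's pixel data and feeds it to the tracker directly; the canvas id and its contents are made up for illustration:

var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');

// getImageData returns RGBA pixels, which is the format the trackers expect.
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
myTracker.track(imageData.data, canvas.width, canvas.height);

// Results arrive through the same 'track' listener registered above.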

If you want full control of the tracking task you plugged in the previous example, keep reading this section. Let's assume you need to stop the tracking of a long-running video:

    trackerTask.stop(); // Stops the tracking

    trackerTask.run(); // Runs it again anytime

The previous example was an abstract overview of the available tracker API. Now let's dig into practical usage of some of the available trackers.

    Color Tracker

Colors are everywhere, in every single object. Being able to use colored objects to control your browser through the camera is very appealing. For that reason, tracking.js implements a basic color tracking algorithm that achieves a real-time frame rate through a simple and intuitive API. It offers several significant advantages over geometric cues, such as computational simplicity and robustness under partial occlusion and under illumination, rotation, scale and resolution changes.

    In order to use a color tracker, you need to instantiate the constructor passing the

    colors to detect:

var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);


Once you have the color tracker instance, you need to know when something happens; that's why you need to listen for track events:

colors.on('track', function(event) {
  if (event.data.length === 0) {
    // No colors were detected in this frame.
  } else {
    event.data.forEach(function(rect) {
      // rect.x, rect.y, rect.height, rect.width, rect.color
    });
  }
});

Now that you have the tracker instance listening for the track event, you are ready to start tracking:

tracking.track('#myVideo', colors);

How do I register my own color? By default, the tracking.js color tracker provides three colors out of the box: magenta, cyan and yellow. In addition to those, you can register any custom color you want to track. It's very simple; let's assume the color you want to track is green. In the RGB color space, green is some value close to (r, g, b) = (0, 255, 0), where (r, g, b) stands for red, green and blue, respectively. Once you understand the color to track in the RGB color space, it's time to register your color using tracking.ColorTracker.registerColor:

tracking.ColorTracker.registerColor('green', function(r, g, b) {
  if (r < 50 && g > 200 && b < 50) {
    return true;
  }
  return false;
});

Note that the custom color function returns true for any value whose g component is close to 255. To make sure we exclude other colors that could fit the green


RGB pattern, it also checks that the r and b values are below 50, hence close to (r, g, b) = (0, 255, 0).
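Once registered, the custom color can be used by name like the built-in ones; a minimal usage sketch:

var greenTracker = new tracking.ColorTracker(['green']);
tracking.track('#myVideo', greenTracker);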

Live demo: http://trackingjs.com/examples/color_hello_world.html

    Object Tracker

Having rapid object detection as part of the library resulted in interesting examples for web applications, such as detecting faces, mouths, eyes and any other training data that could be added to the library later.

In addition to the tracking.js core script, there are several training classifiers; they teach the tracking.js core how to recognize the object you want to track. Make sure to include only the ones you need, since each of them has an average size of ~60 KB:
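The script includes would look something like the sketch below; the paths (build/tracking-min.js, build/data/face-min.js and so on) are assumptions based on the layout of the downloaded package:

<script src="../build/tracking-min.js"></script>
<script src="../build/data/face-min.js"></script>
<script src="../build/data/eye-min.js"></script>
<script src="../build/data/mouth-min.js"></script>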

In order to use the object tracker, you need to instantiate the constructor, passing the classifier data to detect:

var objects = new tracking.ObjectTracker(['face', 'eye', 'mouth']);

Once you have the object tracker instance, you need to know when something happens; that's why you need to listen for track events:

objects.on('track', function(event) {
  if (event.data.length === 0) {
    // No objects were detected in this frame.
  } else {
    event.data.forEach(function(rect) {
      // rect.x, rect.y, rect.height, rect.width
    });
  }
});


Now that you have the tracker instance listening for the track event, you are ready to start tracking:

tracking.track('#myVideo', objects);
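Detection can also be tuned for speed versus accuracy. The sketch below uses the tuning setters that appear in the camera example bundled with the library; the specific values are only a starting point, not a recommendation:

var faces = new tracking.ObjectTracker(['face']);
faces.setInitialScale(4);
faces.setStepSize(2);
faces.setEdgesDensity(0.1);
tracking.track('#myVideo', faces);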

Live demo: http://trackingjs.com/examples/face_hello_world.html

    Custom Tracker

    It's easy to create your own tracker whenever you need one.

Let's say, for example, that for some reason you need to build an application that finds shadows in images. Our trackers don't support this use case yet, so you'll need to implement the algorithm yourself.

    Don't walk away yet though! You have the option of building your feature on top of

    tracking.js and, if you do so, you'll be able to take advantage of all the abstractions

    it provides, like accessing the camera and getting the pixel matrix through the

    canvas on every frame.

It's simple! First, you just need to create a constructor for your new tracker (let's call it MyTracker) and have it inherit from tracking.Tracker:

var MyTracker = function() {
  MyTracker.base(this, 'constructor');
};

tracking.inherits(MyTracker, tracking.Tracker);

Then, you need to implement the track method for your tracker. It receives the pixel matrix for the current image (or video frame) and should hold the actual tracking algorithm. When the tracking is done, the code should call the emit method to send the results through the track event:

MyTracker.prototype.track = function(pixels, width, height) {
  // Your code here

  this.emit('track', {

    http://trackingjs.com/examples/face_hello_world.htmlhttp://trackingjs.com/api/tracking.ObjectTracker.html

    // Your results here
  });
};

That's it! You can now use your tracker in the same way the other existing trackers are used. First, create an instance of it:

var myTracker = new MyTracker();

Then, listen to its track events:

myTracker.on('track', function(event) {
  // Results are inside the event
});

    And finally, start tracking:

tracking.track('#myVideo', myTracker);
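To make the whole flow concrete, here is a toy end-to-end sketch in the spirit of the shadow-finding idea above; ShadowTracker, the darkness threshold and the shape of the emitted data are all invented for illustration:

var ShadowTracker = function() {
  ShadowTracker.base(this, 'constructor');
};

tracking.inherits(ShadowTracker, tracking.Tracker);

ShadowTracker.prototype.track = function(pixels, width, height) {
  // pixels is an RGBA array, four entries per pixel.
  var dark = 0;
  var total = width * height;
  for (var i = 0; i < pixels.length; i += 4) {
    var luminance = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
    if (luminance < 60) {
      dark++;
    }
  }
  this.emit('track', {
    data: [{ darkRatio: dark / total }]
  });
};

var shadows = new ShadowTracker();

shadows.on('track', function(event) {
  console.log('Dark pixel ratio:', event.data[0].darkRatio);
});

tracking.track('#myVideo', shadows);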

    Utilities

For a better understanding of the library architecture, the implementation is divided into several utilities, and it also includes several computer vision algorithms to help you implement your custom solutions. Developing computer vision applications using only raw JavaScript APIs can be too verbose and complex, e.g. capturing the user's camera and reading its array of pixels.

The large number of steps required for a simple task makes web developers' lives hard when the goal is a complex implementation. Some level of encapsulation is needed in order to simplify development. The library provides that encapsulation for common tasks on the web platform.


    Feature Detection (Fast)

Provides an implementation of Features from Accelerated Segment Test (http://en.wikipedia.org/wiki/Features_from_accelerated_segment_test) for feature detection. In other words, it finds corners in the image. Fast is faster than many other well-known feature extraction methods.

    To find corners, tracking.js provides the following utility:

var corners = tracking.Fast.findCorners(pixels, width, height);
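In the bundled Fast example the input is first converted to grayscale, and the returned array holds the corner coordinates as alternating x and y values; a sketch of that flow (how pixels were obtained is assumed):

var gray = tracking.Image.grayscale(pixels, width, height);
var corners = tracking.Fast.findCorners(gray, width, height);

// corners is a flat array: [x0, y0, x1, y1, ...]
for (var i = 0; i < corners.length; i += 2) {
  console.log('corner at', corners[i], corners[i + 1]);
}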

Live demo: http://trackingjs.com/examples/fast.html

    Feature Descriptor (Brief)

Provides an implementation of Binary Robust Independent Elementary Features (http://cvlabwww.epfl.ch/~lepetit/papers/calonder_eccv10.pdf). It uses binary strings as an efficient feature point descriptor. As a result, Brief is very fast both to build and to match, which is perfect for the web.

Once you have extracted image features (in our previous example the features were the image corners), you can describe each of them:

var descriptors1 = tracking.Brief.getDescriptors(pixels, width, corners1);
var descriptors2 = tracking.Brief.getDescriptors(pixels, width, corners2);

Brief also provides a method with which you can match the features described in descriptors1 and descriptors2:

var matches = tracking.Brief.reciprocalMatch(corners1, descriptors1, corners2, descriptors2);
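Each entry in matches pairs a corner from the first set with one from the second. In the bundled Brief example the matched coordinates are read through keypoint1 and keypoint2, which is treated here as an assumption about the result shape:

matches.forEach(function(match) {
  // match.keypoint1 and match.keypoint2 hold the matched [x, y] coordinates.
  console.log(match.keypoint1, 'matches', match.keypoint2);
});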

Live demo: http://trackingjs.com/examples/brief.html

    Convolution


Convolution filters are very useful generic filters for image processing. The basic idea is that you take the weighted sum of a rectangle of pixels from the source image and use that as the output value. Convolution filters can be used for blurring, sharpening, embossing, edge detection and a whole bunch of other things.

    In order to horizontally convolve image pixels you can do:

tracking.Image.horizontalConvolve(pixels, width, height, weightsVector, opaque);

    In order to vertically convolve image pixels you can do:

tracking.Image.verticalConvolve(pixels, width, height, weightsVector, opaque);

    Or, if you need to do a separable convolve:

tracking.Image.separableConvolve(pixels, width, height, horizWeights, vertWeights, opaque);
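As a concrete example, a small separable blur could be expressed with a normalized 5-tap binomial kernel; the kernel values below and the assumption that the call returns the convolved pixel buffer are mine, not part of the library's documentation:

var kernel = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16];
var blurred = tracking.Image.separableConvolve(pixels, width, height, kernel, kernel, true);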

    Gray Scale

Converts a color from a colorspace based on an RGB color model to a grayscale representation of its luminance. The coefficients represent the measured intensity perception of typical trichromat humans; in particular, human vision is most sensitive to green and least sensitive to blue.

To convert the image's pixels to grayscale:

tracking.Image.grayscale(pixels, width, height, fillRGBA);

    Image Blur

    A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an


    image by a Gaussian function. It is a widely used effect in graphics software,

    typically to reduce image noise and reduce detail. Gaussian smoothing is also used

    as a pre-processing stage in computer vision algorithms in order to enhance image

    structures at different scales.

To blur the image's pixels using tracking.js you can do:

tracking.Image.blur(pixels, width, height, diameter);

    Integral Image

A summed area table is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid. In the image processing domain, it is also known as an integral image.

To compute the integral image of the image's pixels using tracking.js you can do:

tracking.Image.computeIntegralImage(
    pixels, width, height, opt_integralImage, opt_integralImageSquare,
    opt_tiltedIntegralImage, opt_integralImageSobel);

    Sobel

    Computes the vertical and horizontal gradients of the image and combines the

    computed images to find edges in the image. The way we implement the Sobel

    filter here is by first grayscaling the image, then taking the horizontal and vertical

    gradients and finally combining the gradient images to make up the final image.

    To compute the edges of the image pixels using tracking.js you can do:

tracking.Image.sobel(pixels, width, height);


    Viola Jones

The Viola-Jones object detection framework (http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework) is the first object detection framework to provide competitive object detection rates in real time. This technique is used inside the tracking.ObjectTracker implementation.

To use Viola-Jones to detect objects in an image's pixels using tracking.js you can do:

tracking.ViolaJones.detect(pixels, width, height, initialScale, scaleFactor, stepSize, edgesDensity, classifier);
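A sketch with concrete values; the numbers mirror the ObjectTracker defaults as I understand them, and the classifier reference assumes the face training data has been loaded (both are assumptions, not prescriptions):

var faces = tracking.ViolaJones.detect(
    pixels, width, height,
    1.0,   // initialScale
    1.25,  // scaleFactor
    1.5,   // stepSize
    0.2,   // edgesDensity
    tracking.ViolaJones.classifiers.face);

// faces is an array of {x, y, width, height} rectangles.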

    Web Components

Many of the existing computer vision frameworks are not available on the web; in addition, they are too complex to learn and to use. The main goal of tracking.js is to provide those complex techniques in a simple and intuitive way on the web. We believe computer vision is important to improving people's lives, and bringing it to the web will make that future a reality a lot faster.

We also believe that Web Components (http://webcomponents.org/) are the future of encapsulation on the web; therefore, tracking.js features are available as custom elements in the tracking-elements repository (https://github.com/eduardolundgren/tracking-elements).

Can you imagine tagging your friend's face in a picture with one line of HTML? Or tracking a user's face with the same API? This section will show how you can do that. It requires Bower (http://bower.io/), a front-end package manager. Once you have Bower installed, install tracking-elements:

$ bower install tracking-elements --save

After installing tracking-elements, a few custom elements become available. They extend the native <img>, <canvas> and <video> elements with tracking functionality.


    Color Element

As a first step in using the tracking.js web components, you need to learn how to extend a native DOM element with tracking functionality using the is="" attribute. The tracking target is set through the target="" attribute and accepts different values depending on the tracker you are using, e.g. colors or objects.

Elements extending <video> can request the user's camera through the camera="true" attribute. Note that when it is passed, the browser will ask the user for permission to share their camera.
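A color-tracking video element could be declared like this; the element name video-color-tracking and the attribute values are assumptions modeled on the tracking-elements examples:

<video is="video-color-tracking" target="magenta cyan yellow" camera="true" width="400" height="300"></video>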

The custom elements expose the events and methods from Tracker; for more information, see the API docs (http://trackingjs.com/api/). The next example shows how to tag your friends' faces in a picture using ObjectTracker.

Live demo: http://eduardolundgren.github.io/tracking-elements/examples/color.html

    Object Element

Let's create an example in which you place an image with your friends' faces and mark each of them with a rectangle. In this step, you'll create an example file under the examples/ folder of the directory where you unzipped the project. Go to this directory and create a file called tracking_element.html in your favorite editor. The starting file looks like this:


    // Plots rectangles here.
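A minimal sketch of such a starting file; apart from the // Plots rectangles here. comment, everything else, including the image-object-tracking element name, the import paths and the image file name, is an assumption modeled on the tracking-elements examples:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>tracking.js - tracking element</title>
  <!-- Adjust these paths to wherever Bower placed the packages. -->
  <script src="bower_components/webcomponentsjs/webcomponents.js"></script>
  <link rel="import" href="bower_components/tracking-elements/src/image-object-tracking.html">
</head>
<body>
  <img is="image-object-tracking" target="face" src="friends.jpg" />
  <script>
  // Plots rectangles here.
  </script>
</body>
</html>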

The next step will teach you how to plot rectangles on your friends' faces. You can listen for track events directly on your DOM element, e.g. img.addEventListener('track', doSomething). The event fires when all faces are found in the image. The event payload (event.detail.data) is an array of objects containing the coordinates of all the faces. Now just pass each of them to the helper function plotRectangle to plot each face.

var img = document.querySelector('img');

// Fires when faces are found on the image.
img.addEventListener('track', function(event) {
  event.detail.data.forEach(function(rect) {
    plotRectangle(img, rect);
  });
});

function plotRectangle(el, rect) {
  var div = document.createElement('div');


  div.style.position = 'absolute';
  div.style.border = '2px solid ' + (rect.color || 'magenta');
  div.style.width = rect.width + 'px';
  div.style.height = rect.height + 'px';
  div.style.left = el.offsetLeft + rect.x + 'px';
  div.style.top = el.offsetTop + rect.y + 'px';
  document.body.appendChild(div);

  return div;
}