
Design and Implementation of SmartLab Infrastructure (sci.tamucc.edu/~cams/projects/521.pdf)



Design and Implementation of SmartLab Infrastructure

GRADUATE PROJECT REPORT

Submitted to the Faculty of
the Department of Computing Sciences
Texas A&M University-Corpus Christi

Corpus Christi, Texas

In Partial Fulfillment of the Requirements for the Degree of
Master of Science in Computer Science

By

Vinay Datta Pinnaka
Summer 2017

Committee Members

Dr. Scott A. King, Committee Chairperson

Dr. Ajay K. Katangur, Committee Member


ABSTRACT

Human efforts to interact with the digital world have been continuously increasing ever since the invention of the first computer. Advancements in embedded devices, sensor technologies and distributed computing have driven forward research on smart environments. Recently, smart environments have become popular as a way to make everyday living more comfortable and to improve quality of life. This work is concerned with constructing low-cost infrastructure for a smart environment, proposed for implementation in the Pixel Island Lab over time. The SmartLab system collects depth, RGB and infrared data from a Kinect sensor using Single Board Computers and streams reduced data over the network. A novel client-server model based on the TCP protocol suite is proposed for transferring 3D data from the Single Board Computer server to the client. The complete software and hardware infrastructure designed and implemented in this project will help application designers build intelligent services and applications on top of SmartLab more easily.


TABLE OF CONTENTS

CHAPTER Page

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

TABLE OF CONTENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi

1 BACKGROUND AND RATIONALE . . . . . . . . . . . . . . . . . . . . 1

1.1 Single Board Computers(SBC) . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Microsoft Kinect 360 . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.3 Computer Vision for SmartLab . . . . . . . . . . . . . . . . . . . . . 4

1.4 Network Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.5 Pixel Island lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.6 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 PROJECT OVERVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.1 Project objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.2 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3 SYSTEM DESIGN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3.1 Hardware and Software requirements . . . . . . . . . . . . . . . . . . 11

3.2 Data collection phase . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3.3 Network phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

3.4 Centralized client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

4 IMPLEMENTATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4.1 Server Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4.1.1 Kinect Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

4.1.2 TCP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

4.1.3 Server Data structure . . . . . . . . . . . . . . . . . . . . . . . 21

4.1.4 Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . 22


4.2 Client Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

4.2.1 TCP Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

4.2.2 Client Data structure . . . . . . . . . . . . . . . . . . . . . . . 25

4.3 Background Subtraction . . . . . . . . . . . . . . . . . . . . . . . . . 25

4.4 Data Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

5 RESULTS AND EVALUATIONS . . . . . . . . . . . . . . . . . . . . . . 27

5.1 Smartlab Hardware setup . . . . . . . . . . . . . . . . . . . . . . . . 27

5.2 Smartlab system usage . . . . . . . . . . . . . . . . . . . . . . . . . . 28

5.3 Smartlab software results . . . . . . . . . . . . . . . . . . . . . . . . . 29

5.3.1 Depth data stream . . . . . . . . . . . . . . . . . . . . . . . . . 30

5.3.2 Color data stream . . . . . . . . . . . . . . . . . . . . . . . . . 33

5.3.3 Depth and Color data combination . . . . . . . . . . . . . . . . 36

5.3.4 Factors influencing SmartLab software performance . . . . . . . 38

6 CONCLUSION AND FUTURE WORK . . . . . . . . . . . . . . . . . . . 40

6.1 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

APPENDIX A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

7.1 Environment for SmartLab server (Linux Environment- ODROID XU4) 46

7.1.1 Ubuntu Image . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

7.1.2 Installing OpenCV . . . . . . . . . . . . . . . . . . . . . . . . . 47

7.1.3 Installing OPENNI . . . . . . . . . . . . . . . . . . . . . . . . 48

7.1.4 Installing Libfreenect . . . . . . . . . . . . . . . . . . . . . . . 49

7.1.5 Installing Boost . . . . . . . . . . . . . . . . . . . . . . . . . . 50

7.2 Environment for Smartlab server (Windows Environment-Lattepanda) 50

7.3 Building SmartLab server in Linux . . . . . . . . . . . . . . . . . . . 50

7.4 Building SmartLab server in Windows . . . . . . . . . . . . . . . . . 51

7.5 Running SmartLab server in Linux . . . . . . . . . . . . . . . . . . . 52

7.6 Running SmartLab server in Windows . . . . . . . . . . . . . . . . . 52

7.7 SmartLab client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

7.8 Running SmartLab client . . . . . . . . . . . . . . . . . . . . . . . . . 53


LIST OF TABLES

TABLE Page

1 Benchmark analysis of three Kinect libraries . . . . . . . . . . . . . . 18

2 Data capture and receiving rate without compression on Odroid and Lattepanda . . . . . . 38

3 Network throughput without compression . . . . . . . . . . . . . . . 39

4 Data capture and receiving rate with compression on Odroid and Lattepanda . . . . . . 39

5 Network throughput with compression . . . . . . . . . . . . . . . . . 39


LIST OF FIGURES

FIGURE Page

1 Kinect V1 hardware configuration and images from RGB and depth camera. . . . . . 4

2 SmartLab abstract architecture. . . . . . . . . . . . . . . . . . . . . . 10

3 Detailed architecture of data collection phase. . . . . . . . . . . . . . 13

4 Network interconnection for SmartLab. . . . . . . . . . . . . . . . . . 14

5 Centralized client module. . . . . . . . . . . . . . . . . . . . . . . . . 15

6 Class diagram of server module . . . . . . . . . . . . . . . . . . . . . 16

7 Class diagram of client module . . . . . . . . . . . . . . . . . . . . . 24

8 Arrangement of Microsoft Kinects in Pixel Island . . . . . . . . . . . 27

9 Microsoft Kinect connected to SBC. . . . . . . . . . . . . . . . . . . 28

10 Face detection application using SmartLab . . . . . . . . . . . . . . . 29

11 Depth stream output without data compression . . . . . . . . . . . 32

12 Depth stream output with data compression . . . . . . . . . . . . . 33

13 Color stream output without data compression . . . . . . . . . . . . 34

14 Color stream output with data compression . . . . . . . . . . . . . 35

15 Depth and color image streams output without compression . . . . . 36

16 Network statistics for depth and color streams collected from two Odroids . . . . . . 37

17 Network statistics for depth and color streams collected from a single Lattepanda . . . . . . 37


CHAPTER 1

BACKGROUND AND RATIONALE

A smart environment is a physical world that is equipped with sensors, actuators and embedded devices, interconnected seamlessly through a network. Recently, smart environments have become a prominent research topic since they play a key role in predicting the health, lifestyle and wellbeing of humans. Owing to its applications in various fields, this research has undergone several transformations in terms of hardware utilization as well as software practices. According to Mark Weiser, the concept of smart environments evolved from the term ubiquitous computing. Ubiquitous computing can occur on any device and relies on supporting technologies such as embedded hardware, Internet protocols and sensors. The focus of SmartLab is to develop a smart environment with low-cost infrastructure.

SmartLab is an interactive lab environment that can identify individuals and monitor activities. The SmartLab Infrastructure Project (SLIP) is focused on building a low-cost infrastructure to acquire a variety of data, including RGB, depth and infrared, from motion sensing devices like the Kinect [1], placed on the ceiling and walls of the Pixel Island Lab at Texas A&M University-Corpus Christi, using Single Board Computers (SBCs). The availability of low-cost motion sensing devices such as the Microsoft Kinect 360 [1] reinforced the SmartLab research. Human-computer interaction became more natural with the advent of multi-sensor devices like the Kinect. However, interaction with the Kinect to obtain data is still in a nascent stage. In particular, interacting with multiple sensors simultaneously is a complex hardware limitation, as mentioned by Mario Mart and Miguel Pedraza [2]. A few researchers have addressed this problem and developed systems for it. One such system that we came across is constructed as a distributed system that uses the TCP protocol to stream data from the server to a client [2]. But that system still uses powerful desktop computers, which are expensive and contradict the idea of inexpensive infrastructure for SmartLab.

In this work, the focus is on developing a distributed system that is economical and easy to set up. We have chosen cost-effective yet powerful Single Board Computers (SBCs) for SmartLab. Later sections say more about SBCs and their applications. In addition to these tiny computers, we have chosen a well-known consumer camera, the Microsoft Kinect for Xbox 360 [1]. These two in combination provide a reasonable system for SmartLab.

1.1 Single Board Computers(SBC)

A single-board computer (SBC) is a complete computer built on a single circuit board, with microprocessor(s), memory, input/output (I/O) and the other features required for a functional computer. Compared with desktop computers, SBCs are compact and consume less power. Moreover, SBCs are significantly cheaper than PCs while remaining powerful enough to perform solitary tasks, which makes them a good replacement for PCs where multitasking is not a priority. For example, the Odroid-XU4, a $70 Single Board Computer, has a processor speed of 2 GHz, which is almost comparable to the 2.16 GHz of a first-generation Pentium quad-core desktop. Typical applications of SBCs include home automation, robot control, media players and personal cloud storage servers. Among the SBCs available on the market, the Raspberry Pi is famous for its low cost and ease of use. In addition to the Raspberry Pi, other boards such as the Odroid, Banana Pi, Orange Pi, Cubieboard and the recent Windows 10 board Lattepanda are much more powerful and have diverse applications. The choice of SBC for SmartLab was made by considering three features: cost, Ethernet specifications and processing power. For this project, the abilities of different SBCs like the Raspberry Pi, Odroid-XU4 and Lattepanda were tested, and it was found that all three boards perform satisfactorily, but with the Raspberry Pi we have to accept a lower video frame rate.

1.2 Microsoft Kinect 360

To acquire multiple data streams such as RGB, depth and infrared, a number of devices are available, including stereo cameras, time-of-flight cameras and consumer motion sensing cameras such as the Asus Xtion [3], Microsoft Kinect [1] and Structure [4] sensors. Among the motion sensing devices on the market, the Microsoft Kinect stands out for its reasonable cost and satisfactory frame rates and resolution. The Kinect 360, also called the Kinect for Xbox 360, is a combination of Microsoft-built software and hardware. The Kinect 360 hardware includes a range of chipset technology, consisting of an infrared projector, a camera and a special microchip that generates a grid from which the location of a nearby object in three dimensions can be ascertained. Though the Kinect was released for games, the usability of the sensor in the computer vision field has made it popular among research groups. Presently, several software tools are available to interact with the Kinect; the popular ones are the official Microsoft SDK, OpenNI and Libfreenect. The Microsoft SDK is Windows-only, and the other two tools are multi-platform. A detailed comparison of these tools is presented by Han and Shao [5]. These tools provide access to the RGB, depth and infrared sensors of the Kinect; Figure 1 [5] shows the hardware configuration of the Microsoft Kinect 360 and sample images from the RGB and depth sensors. OpenNI works along with a middleware called NiTE to provide high-level features, namely hand tracking and human skeletal joints. The wide applications of the Kinect in computer vision and its ability to track the human skeleton are the primary reasons to choose it as the data collection sensor.

Figure 1: Kinect V1 hardware configuration and images from the RGB and depth camera.

1.3 Computer Vision for SmartLab

Computer vision is an interdisciplinary field that deals with how computers can be used to gain high-level understanding from digital images or videos. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images and, in general, deal with the extraction of high-dimensional data from the real world in order to produce numerical or symbolic information. When streaming data from a static camera, we face the problem of sending the background data many times even when it is not changing. To avoid this problem and to extract only meaningful data, background subtraction is used. In this technique the image's foreground is extracted for further processing [6]. Background subtraction is a widely used approach for detecting moving objects in videos from static cameras [7]. The rationale for the approach is to detect moving objects from the difference between the current frame and a reference frame, often called the "background image" or "background model". Each server application employs this technique and sends only essential data to the client. By employing this technique, the network bandwidth utilized by each server is reduced.
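The project itself uses the MOG2 subtractor (see Chapter 3); purely as a sketch of the underlying idea, a single-reference-frame version of background subtraction can be written with NumPy. The threshold value and the tiny 4x4 "image" are arbitrary choices for illustration:

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Mark pixels that differ from the background model by more
    than `threshold` intensity levels as foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Static background with one bright "moving object" in the current frame.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # the object occupies a 2x2 block

mask = foreground_mask(frame, background)
print(int(mask.sum()))         # 4 foreground pixels detected
```

Only the pixels under the mask need to be sent, which is what reduces each server's bandwidth.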

1.4 Network Interface

A novel TCP/IP based client-server paradigm is developed using low-level sockets to devise a sophisticated network interface for SmartLab. Sockets allow network software applications to communicate using standard mechanisms built into network hardware and operating systems. TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic Internet communication protocol. TCP/IP is a four-layer protocol suite: the higher layer, the Transmission Control Protocol, manages the assembly of a message into smaller packets that are transmitted over the Internet, while the lower layer, the Internet Protocol, handles the address part of each packet. In the SmartLab application, the data streamed over the network is crucial, as it is used for critical applications such as face recognition and object detection on the client side. TCP is selected for its reliable, ordered and error-checked delivery of a stream of bytes. Moreover, a network interface implemented via sockets has no programming-language barrier and provides low-level access to the network, so we can easily modify the application layer on top of it.
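The concrete SmartLab protocol is presented in Chapter 4; purely as an illustration of the socket mechanism described above, a minimal length-prefixed TCP exchange over the loopback interface might look like the following. The 4-byte length prefix is a common framing convention, not necessarily the one SmartLab uses, and the payload is a placeholder:

```python
import socket
import struct
import threading

def send_frame(sock, payload: bytes):
    # Prefix each frame with its 4-byte big-endian length so the
    # receiver knows where one frame ends and the next begins.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_frame(sock) -> bytes:
    (length,) = struct.unpack(">I", sock.recv(4, socket.MSG_WAITALL))
    return sock.recv(length, socket.MSG_WAITALL)

# Loopback demonstration: a throwaway server thread receives one frame.
server = socket.socket()
server.bind(("127.0.0.1", 0))      # OS-assigned port
server.listen(1)

received = {}
def serve():
    conn, _ = server.accept()
    received["frame"] = recv_frame(conn)
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket()
client.connect(server.getsockname())
send_frame(client, b"depth+rgb+joints")
client.close()
t.join()
server.close()

print(received["frame"])           # b'depth+rgb+joints'
```

Because the framing lives entirely above the byte stream, the same scheme works unchanged from any language with a socket API, which is the portability property argued for above.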

1.5 Pixel Island lab

The Pixel Island Lab is a research facility at Texas A&M University-Corpus Christi. The mission of Pixel Island is to conduct research in computer graphics and visualization, to be a source of expertise in these areas for the campus, and to educate students in these areas. Pixel Island currently has two major focus areas: face-to-face human-computer interaction and visualizing scientific data.


1.6 Related work

The implementation and selection of sensors differentiate many smart environments from one another. A study of smart environments highlighted three foremost research works. Firstly, the Aware Home Research Initiative (AHRI) [8] at the Georgia Institute of Technology is an interdisciplinary research project for analyzing health and well-being, providing entertainment and studying children. Secondly, the MIT Media Lab has a research project named House n [9], which provides an environment for scientific study. Finally, MavHome [10], the smart home research project developed by the University of Texas at Arlington, involved building an intelligent agent that maximizes the comfort of inhabitants. Alongside these, the research work by Maurizio Caon and Yong Yue [11] discusses context-aware gesture interactions based on multiple Kinects. All of these systems act as inspiration to implement a personalized smart environment in the Pixel Island Lab.


CHAPTER 2

PROJECT OVERVIEW

2.1 Project objective

The goal of the SmartLab Infrastructure Project is to build low-cost infrastructure for extracting 3D data from multiple Microsoft Kinects. The system also focuses on transferring the data over the network effectively. This extensive data collection setup helps researchers easily set up a monitoring system that can be employed in useful applications, such as those related to security and surveillance, human behavior analysis and patient health care.

In this project we set up multiple Kinects on the walls and ceiling of our Pixel Island Lab. In general, a personal computer is required to interact with each Kinect, but in our system, in contrast to this general approach, we utilized Single Board Computers connected to the Kinects. The server module running on the SBCs obtains RGB, depth and human skeleton information from the Microsoft Kinect. The server application performs Gaussian background subtraction [6] on the RGB stream and creates a mask to obtain only the meaningful data that changes from frame to frame. Apart from data collection and understanding, the server module also streams the data over the network with a novel protocol built on top of TCP/IP. On the other hand, the client module is responsible for interacting with multiple servers simultaneously and collecting the data obtained over the network. The client module processes the data and reconstructs it to obtain the original RGB, depth and infrared images. Moreover, the large amount of data obtained on the client side is utilized for human monitoring applications.


2.2 Requirements

The main focus of this project, as stated in the earlier section, is building the infrastructure to interact with multiple Kinects. More specifically, this project will enable researchers to develop advanced applications for human monitoring. Additionally, the following are the required characteristics of the SmartLab Project.

• Inexpensive: Building a multi-sensor data collection system is never inexpensive, but in order to approach this goal the low-cost motion sensor Microsoft Kinect 360 is selected. Dealing with multi-array data sensors usually requires powerful computers, which is not an economical solution. Research on various Single Board Computers showed that SBCs can work with the Microsoft Kinect effectively, making them a viable way to cut costs. Overall, the hardware selected for building SmartLab is economical.

• Portability: While selecting SBCs we came across several effective boards which run different operating systems. Specifically, the Raspberry Pi and Odroid are compatible with Ubuntu MATE 16.04, while the Lattepanda runs only Windows 10. The SmartLab application is intended to run on any kind of SBC; hence the server application should be developed in such a way that it is highly portable with only minor changes.

• Integration: The SmartLab Infrastructure Project can be incorporated into many applications such as surveillance systems, clinical applications, gait analysis and fall detection. To generalize the system to all these fields, the design and development are planned accordingly. The data obtained from SmartLab (RGB, depth and infrared) is collected in a simple data format. This allows researchers to comfortably combine SmartLab with any functional module.


CHAPTER 3

SYSTEM DESIGN

This chapter illustrates the functional phases of the SmartLab system. Each block contains different software components that are executed sequentially. Figure 2 shows the abstract flow diagram of the SmartLab system. Section 3.1 lists the hardware and software requirements for the SmartLab system.

Figure 2: SmartLab abstract architecture.

In the following sections, the three phases of SmartLab are discussed in detail. The first phase deals with data collection from the Microsoft Kinect; it is responsible for collecting RGB, depth, infrared and human skeleton joints (the latter only with the Lattepanda) at 30 frames per second. The second phase is the network phase, where data is transmitted to the client. The last phase is a centralized client, where the data collected over the network is processed and displayed to the user.

3.1 Hardware and Software requirements

• Microsoft Kinect 360

– RGB video camera

– Depth sensor

– Infrared camera

• Single Board Computers(SBC’s)

– Lattepanda [12]

– Odroid XU4 [13]

– Raspberry Pi

• OpenNI 2 and NiTE 2

– OpenNI 2 - Open Natural Interaction [14]

– NiTE 2 - Natural Interaction Technology for End-user [15]

3.2 Data collection phase

In this phase, Single Board Computers interact with the Microsoft Kinect and obtain RGB, infrared, depth and human skeletal joint information. Figure 3 shows each component of the data collection phase. In order to communicate with the Kinect, a range of software tools is available, namely Libfreenect, OpenNI and the official Microsoft Kinect SDK [18]. OpenNI is a multi-platform, open source tool, which makes it the preferable choice. OpenNI works along with the complementary middleware NiTE and provides human skeletal data along with the RGB and depth streams. The Kinect provides a 640x480-pixel 8-bit RGB image and an 11-bit monochrome depth image at the same resolution, at a default frame rate of 30 Hz. Typically, NiTE also processes the human joints at the same rate. The size of the data generated by the Kinect per second at these defaults is given in Equation 3.1.

Total data generated per second = [(640 × 480) × 2 bytes (depth) + (640 × 480) × 3 bytes (RGB) + 4 × 3 × 15 bytes (joints)] × 30 fps = 43.95 MB per second    (3.1)
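The arithmetic in Equation 3.1 can be checked in a few lines; the per-pixel and per-joint byte sizes below are the ones the equation itself assumes:

```python
# Verify Equation 3.1: bytes per Kinect frame at default settings,
# scaled to the default 30 fps rate.
depth_bytes = 640 * 480 * 2      # 11-bit depth padded to 2 bytes/pixel
rgb_bytes = 640 * 480 * 3        # 8-bit RGB, 3 bytes/pixel
joint_bytes = 4 * 3 * 15         # 15 joints, 3 coordinates of 4 bytes

frame_bytes = depth_bytes + rgb_bytes + joint_bytes
mb_per_second = frame_bytes * 30 / (1024 * 1024)

print(round(mb_per_second, 2))   # 43.95
```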

Theoretically, the size of the data generated by the Kinect is 43.95 MB per second, which shows that the Kinect generates a lot of data. It is not a good idea to send this volume of data over the network, because most of it is redundant. Firstly, after data collection, the important step is to apply data reduction techniques based on background subtraction. MOG2 is a sophisticated background/foreground separation algorithm proposed by Z. Zivkovic [6]. MOG2 extracts a foreground mask by learning from the video stream, and it works indefinitely on the RGB image. Human skeleton joints are insignificant in size compared to the depth and RGB images and do not need any reduction mechanism.


Figure 3: Detailed architecture of data collection phase.

Secondly, serialization of the data into a character array is a major task. The processed RGB, depth and skeleton joints are serialized into a string object in order to be sent over the network. The serialization uses a specific encoding order, RGB + depth + joints; choosing this fixed order makes de-serialization simple on the client.
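The fixed RGB + depth + joints ordering can be sketched as follows. The length-prefixed header layout is an illustrative assumption, not necessarily the project's actual byte format:

```python
import struct

def serialize(rgb: bytes, depth: bytes, joints: bytes) -> bytes:
    # Three length-prefixed sections in the fixed RGB, depth, joints
    # order, so the client can split them back without ambiguity.
    header = struct.pack(">III", len(rgb), len(depth), len(joints))
    return header + rgb + depth + joints

def deserialize(blob: bytes):
    n_rgb, n_depth, n_joints = struct.unpack(">III", blob[:12])
    body = blob[12:]
    rgb = body[:n_rgb]
    depth = body[n_rgb:n_rgb + n_depth]
    joints = body[n_rgb + n_depth:n_rgb + n_depth + n_joints]
    return rgb, depth, joints

blob = serialize(b"RGB", b"DEPTH", b"J")
print(deserialize(blob))       # (b'RGB', b'DEPTH', b'J')
```

Because the order is fixed, the client never has to guess which section it is reading, which is what makes de-serialization simple.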

Thirdly, compression using Zlib [16] is an essential part of the data collection phase. Compression further minimizes the redundancy in the serialized string. Zlib is a free, general-purpose, lossless compression library available for both Windows and Linux operating systems. The estimated reduction of the raw data rate (43.95 MBps) is about 50%, i.e. approximately 22 MBps. This estimate is based on an analysis performed on a single RGB image of size 900 KB: utilizing both background subtraction and Zlib compression, this RGB image was reduced to 141 KB, an 84% reduction. But taking worst-case scenarios into consideration, and scaling to the full 1.46 MB frame size (each frame contains RGB, depth, infrared and human skeleton joints), justifies the more conservative estimate of a 50% reduction. The compressed data is forwarded to the network phase, where the server streams it over the network.
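The compression step can be reproduced with the same library on synthetic data. This is only a sketch: the band of nonzero bytes stands in for foreground pixels, and a real background-subtracted frame compresses less than this highly regular buffer:

```python
import zlib

# A background-subtracted RGB frame: mostly zeroed (masked) pixels
# with a small synthetic "foreground" band.
frame = bytearray(640 * 480 * 3)
frame[:30000] = b"\x10\x80\xf0" * 10000

compressed = zlib.compress(bytes(frame), level=6)

print(len(compressed) < len(frame))                 # True
print(zlib.decompress(compressed) == bytes(frame))  # True: lossless
```

Losslessness matters here: the client must reconstruct the exact depth values, not an approximation, which is why a codec like Zlib is preferred over lossy image compression for the depth stream.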

3.3 Network phase

The network phase deals entirely with client-server communication. The interconnection and setup of this phase are illustrated in Figure 4. In order to transmit data over the network, a TCP/IP based client-server protocol is implemented. Multiple servers running simultaneously produce heavy network traffic. To simulate the actual internetworking of the SmartLab environment, a local area network is set up with a router and a network switch. A centralized client collects the RGB, depth and infrared data from the individual servers. The data obtained by the client is further processed to recover the original 3D data.

Figure 4: Network interconnection for SmartLab.

3.4 Centralized client

Figure 5 shows the modules in the centralized client, along with the implementation details and hardware specifications of the client system. The centralized client module collects the data from the various servers and displays it to the user, or provides a way to utilize the data in other applications such as human monitoring.

Figure 5: Centralized client module.

The centralized client is implemented on an Intel x86 i7 computer with 12 GB of RAM. A multi-threaded program runs on the client, with each thread dedicated to a single server. Once the data is collected by the client, a series of operations occur: first decompression, then de-serialization and finally image reconstruction. The RGB, depth and skeleton data can then be obtained by any application through a function call.
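The thread-per-server hand-off can be sketched as follows. The queue-based hand-off and the server names are illustrative assumptions, and network I/O is replaced by in-memory lists of pre-compressed frames:

```python
import queue
import threading
import zlib

frames = queue.Queue()

def server_reader(server_id, compressed_frames):
    # One thread per SBC server: decompress each received frame and
    # hand the result to the application through a shared queue.
    for blob in compressed_frames:
        frames.put((server_id, zlib.decompress(blob)))

# Two hypothetical servers, each delivering compressed frames.
streams = {
    "odroid-1": [zlib.compress(b"frame-a"), zlib.compress(b"frame-b")],
    "odroid-2": [zlib.compress(b"frame-c")],
}

threads = [threading.Thread(target=server_reader, args=(sid, blobs))
           for sid, blobs in streams.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(frames.queue))    # decompressed frames from both servers
```

An application thread can then drain the queue at its own pace, which keeps slow consumers from stalling the per-server readers.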


CHAPTER 4

IMPLEMENTATION

To build an effective and reliable infrastructure for SmartLab, the system is modularized to allow good re-usability. Figures 6 and 7 show the modules of the SmartLab architecture. At the highest level, the architecture is divided into two modules, the server module and the client module, and these modules are further divided into sub-components. Appendix A explains in detail how to set up the SmartLab environment on both the Single Board Computers and the desktop client. The functioning of each module and the interaction between the sub-components are discussed in the following sections.

Figure 6: Class diagram of server module

4.1 Server Module

As discussed in section 3.1, the Single Board Computers interact with the Microsoft Kinect and obtain RGB, infrared, depth, and human skeletal joint information. It is the server module, running on the SBC, that obtains data from the Kinect sensor. The main challenge in implementing this module was finding the most suitable library for interacting with the Kinect. After extensive research, we found that the top three candidate libraries are OpenNI 2.2 [14], Libfreenect [17], and Microsoft Kinect SDK 1.8 [18]. The next task was to benchmark these three libraries and select the best among them. The factors considered in the benchmark analysis are as follows: simplicity of the API calls, multi-platform support, easy integration with other modules, and open-source support. Table 1 provides a comparative overview of the three Kinect interface libraries. In addition to the benchmark criteria, the libraries were tested by writing sample programs that simulate the server module. Finally, OpenNI was selected for implementing the interaction with the Kinect. This library was chosen not only because it satisfies all our benchmark criteria but also because it supports a wide variety of sensors, such as the Asus Xtion [3] and the Structure Sensor [4]. We expect that in the near future, with the help of OpenNI, the Microsoft Kinect could be replaced with any of these sensors. The OpenNI 2.2 library provides access to the three video streams of the Kinect: depth, RGB, and infrared. However, to obtain the skeleton joints of a person in the video stream, we use the third-party middleware library NiTE [15]. NiTE is developed by PrimeSense, the same company that developed OpenNI. For the network interface, the Boost.Asio [19] library is selected; Boost is largely header-only for C++, and users can link only the few components of the library their program needs, which makes it viable when developing on low-power devices such as Single Board Computers.


Table 1: Benchmark analysis of the three Kinect libraries

Feature                          OpenNI                Libfreenect                              Microsoft Kinect SDK
API calls                        Simple to use         Not that simple                          Complex
Multi-platform support           Linux, Windows, Mac   Linux, Windows, Mac                      Windows only
Integration with other modules   Integrates easily     Low-level driver, complex to integrate   Compatible with Windows-only software
Open source support              Available             Available                                Support available from Microsoft blog

Overall, the server module is subdivided into components: Kinect data, TCP server, data structure, and configuration. A multi-threaded approach is followed in implementing the server module; Figure 6 shows an overview of the server module class design and thread execution. The following sections give more information about each component and how the components are coordinated to achieve a reliable system that runs on low-powered devices.

4.1.1 Kinect Data

Kinect data is the component of the server module that uses OpenNI [14] to obtain the depth and color streams. A kinectdata object is instantiated in the main function, and all operations of the kinectdata class run on the main thread. The server module continuously collects data from the Kinect on this dedicated program thread.

• kinectdata::run(): run() is the first function called from the main method; it is responsible for updating the Kinect data across client sessions. Initializing the Kinect, starting video capture, and reading data from the different streams are performed by calling the corresponding functions from run(). After each client session completes, all previously selected Kinect streams are suspended, making the server module ready for the next session.

• kinectdata::init_kinect(): This function opens the Kinect device instance and returns a STATUS_OK signal. If any error occurs while accessing the Kinect device, an exception is raised and a STATUS_ERROR signal is returned. All other Kinect-related functionality, such as starting a video stream or reading a frame, happens only after this function is called.

• kinectdata::init_videocapture(): init_videocapture() is responsible for creating the selected stream among the three available video streams: depth, RGB, and infrared. Creating a particular stream requires initializing the openni::Device, the selected openni::VideoStream, and its corresponding SensorType. If the selected stream has already been created, the function returns STATUS_OK; otherwise it opens a new stream with the default video mode.

• kinectdata::read_frames(): Once a client session is initiated, the streams selected by the client are created by kinectdata::init_videocapture(). After stream creation, kinectdata::read_frames() continuously polls for an available stream using the openni::OpenNI::waitForAnyStream function with a predefined timeout; when a stream is available, it grabs the frame and writes it to the corresponding data buffer.

4.1.2 TCP Server

The TCP server component is responsible for the network operations of the server module; the Boost.Asio library [19] is used to implement tcp_server. Boost.Asio is a cross-platform C++ library for network and low-level I/O programming that provides developers with a consistent synchronous and asynchronous model. tcp_server is a synchronous program that creates a TCP socket and continuously accepts clients on a user-defined port. Once a client is connected, the configuration file is read from the client and the configuration parameters are updated locally so that the Kinect starts reading the selected frames. Whenever data is available from the Kinect, the tcp_server sends it to the client over the TCP socket. The main() method creates a tcp_server thread for network operations. The members of the tcp_server class are as follows:

• tcp_server::run(): The main() method creates a network boost::thread so that all client sessions run only in the network thread. The run() method is the entry point of the tcp_server class and internally calls the accept_client() method.

• tcp_server::accept_client(): The accept_client() method continuously accepts client connections on the socket; once a client connects, a tcp_session is initiated. When the tcp_session completes, tcp_server again waits on acceptor.accept(socket) until the next connection.

• tcp_server::tcp_session(): A single tcp_server::tcp_session() consists of one configuration read and several data writes. Initially, tcp_server reads the configuration sent by the client and updates the local config file. The data in the data buffer is updated by the Kinect thread, which sets the data-ready flag. Whenever tcp_server::tcp_session() reads the data-ready flag, it initiates a write session.


4.1.3 Server Data structure

In the server module, data is obtained from a Kinect and transferred over the network. However, the Kinect data comes in different formats: RGB, depth, infrared, and person skeleton joints. A data structure is therefore necessary to format the data obtained from the Kinect. The server module data structure has two buffers: one for reading configuration data from the client, and a second for collecting data from the Kinect. In addition to these buffers, a lock-free queue is implemented to share data between the Kinect thread and the network thread. A static data structure object is shared across the different threads to maintain consistency. The read buffer holds only the configuration data received from the client and is updated only when a new client session starts. The write buffer is a structured buffer formed by concatenating the selected data frames. In other words, if the client requests depth, color, and body joints, the write buffer is constructed by appending depth, followed by color, followed by body joints. This design was chosen for two main reasons: the network aspect, and coordination between the client and server.

The data structure also houses a synchronous communication mechanism between the Kinect thread and the TCP server thread. A lock-free communication system is implemented in the server data structure component; the main reason for selecting a lock-free mechanism is to avoid thread waiting and to improve performance without mutex locks. Because the server module runs on low-power devices, choosing a lock-free mechanism significantly improves performance compared to locks. boost::lockfree::spsc_queue [19], provided by the Boost library, is a thread-safe lock-free container. Multiple threads can access lock-free containers without any additional synchronization; the abbreviation spsc stands for single producer/single consumer. The lock-free queue is implemented as a ring buffer with an ordering mechanism; unlike mutex locks, lock-free buffers do not busy-wait, so they are fast and place less overhead on the operating system. The Kinect thread serializes the different data streams into the write buffer and pushes it into the spsc_queue, while the TCP server thread dequeues the data and sends it to the client.

4.1.4 Server Configuration

The principal design of SmartLab is modular; in other words, each phase of SmartLab is configurable. Users can select the type of data they want from the server module; to enable this, the server module is equipped with a configuration component. The config component obtains data from the client and parses it to select the necessary data streams from the Kinect. The configuration buffer is in JSON format, as shown below, with each parameter set to true or false.

"VIDEO_CONFIG":{

"IMAGE_CONFIG":{

"RGB_IMAGE":true,

"IR_IMAGE":false

},

"DEPTH_CONFIG":{

"DEPTH_MAP":true,

},

"BODYJOINTS_CONFIG":{

"SKELETON":true

},

"BACKGROUND_SUBTRACTION":{

Page 29: Design and Implementation of SmartLab Infrastructuresci.tamucc.edu/~cams/projects/521.pdflike Raspberry pi, Odroid-XU4 and Lattepanda were tested and found that all the three boards

23

"RGB_SUBTRACTION":false,

},

"COMPRESSION":{

"SELECTED":true

}

}

For example, in the above configuration the RGB image, depth image, and body joints are selected, background subtraction is disabled, and data compression is enabled. By parsing the config buffer, the server program enables the respective modules and prepares the data before sending it to the client.

4.2 Client Module

As discussed in section 3.3, the client module is responsible for collecting data from multiple servers and providing a way for the user to utilize this data for further processing. The implementation of the client module differs from that of the server: it consists of a network component, a user interaction component, and a display component. However, the data structure and configuration components are very similar to those of the server. The network component houses a TCP client module that communicates with the TCP server. The client module dedicates a thread to each server, and the data collected by each thread is added to a one-dimensional array of lock-free queues. Figure 7 shows the implementation of the multi-threaded client module. The following sections describe the sub-components of the client module.

Page 30: Design and Implementation of SmartLab Infrastructuresci.tamucc.edu/~cams/projects/521.pdflike Raspberry pi, Odroid-XU4 and Lattepanda were tested and found that all the three boards

24

Figure 7: Class diagram of client module

4.2.1 TCP Client

The TCP client component is responsible for interacting with a server: sending the user configuration and retrieving data accordingly. The implementation of the TCP client is similar to that of the server; Boost.Asio [19], the library used to implement the server, is also used for the client. The client module creates an individual thread for each server and assigns it the task of continuously interacting with that server. Each TCP client thread connects to its corresponding server using the IP address and port number read from the configuration file. In addition to the connection details, each thread is assigned an id number so that it can write the retrieved data into the buffer with the same id. The static member function run() of the tcp_client class is passed to boost::thread along with the connection details and server id number. The run() function establishes the connection to the server and initiates a TCP session. The following tcp_client class members give more details about the TCP client component.

• tcp_client::run(): run() is the static member function first called when initializing a client thread. The parameters passed to this function are the server details: port number, IP address, and id number. After all the parameters are initialized in the tcp_client object, it connects to the server.

• tcp_client::connect_server(): The client thread connects to the server using the IP address and port number read from the configuration file by the main thread.

• tcp_client::tcp_session(): After the connection is established, the client thread writes the video configuration read from the config.json file to the server. The server then sends the serialized data based on the configuration parameters, and the client thread writes the received data into the lock-free buffer.

4.2.2 Client Data structure

The data structure of the client module is very similar to that of the server, except for the one-dimensional lock-free buffer. This data structure also has a config buffer and a read buffer: the config.json file is read into the config buffer, which is later sent to the server, and the serialized data received from the server is read into the read buffer. In addition to these two buffers, a one-dimensional array of lock-free queues is initialized, and each client thread enqueues its read buffer into the respective index of the lock-free buffer array.

4.3 Background Subtraction

Background subtraction is a major pre-processing step in many vision-based applications. With a static camera we can always subtract the background from the current frame and obtain only the changes, but this is not easy when the room is illuminated by multiple lights: the shadows of objects confuse vision algorithms. To handle this, the MOG2 [6] background subtractor is used. MOG2 is a Gaussian mixture-based background/foreground segmentation algorithm. One important feature of this algorithm is that it selects the appropriate number of Gaussian distributions for each pixel, which provides better adaptability to scenes that vary due to illumination changes. Including background subtraction not only reduces the data to be transferred but also eliminates an initial preprocessing step for many vision applications.

4.4 Data Compression

The data compression step in the server module is performed before network transmission. Data compression removes redundant data and decreases the load on the network. On the client side, the received data is decompressed and then deserialized in subsequent steps. Zlib [16] is the lossless compression library used in the SmartLab application.


CHAPTER 5

RESULTS AND EVALUATIONS

The foremost objective of the SmartLab infrastructure project is to set up a reliable infrastructure for collecting 3D data from depth sensors. To achieve this goal, we arranged hardware in the Pixel Island Lab and developed software for it to attain a reliable data acquisition rate. This chapter illustrates how hardware components such as the Microsoft Kinect and the Single Board Computers are coupled with each other. Furthermore, an analysis of data acquisition rates on the different SBCs is presented.

5.1 SmartLab Hardware setup

The SmartLab infrastructure involves two hardware components: the Microsoft Kinect motion sensor and a Single Board Computer. First, the Kinects are mounted on the ceiling of the Pixel Island Lab.

Figure 8: Arrangement of Microsoft Kinects in Pixel Island


The Kinects are arranged to cover most of the lab; figure 8 shows a part of the lab where the Microsoft Kinects are mounted. Each Kinect is connected to a Single Board Computer (SBC), and the SBCs are connected to the network through a switch. Figure 9 shows an individual Kinect and SBC pair.

(a) Kinect mounted on ceiling (b) Single Board Computers

Figure 9: Microsoft Kinect connected to SBC.

5.2 SmartLab system usage

The SmartLab infrastructure is set up with a focus on providing a reliable and effective data collection mechanism for research. The SmartLab system can be extended to implement sophisticated applications such as facial recognition, object detection, and health-care analysis. As a first step toward such complex applications, a face detection program was developed. Figure 10 shows results from the face detection application. In a similar way, researchers can build on the system to develop many other sophisticated applications.


Figure 10: Face detection application using SmartLab

5.3 SmartLab software results

The SmartLab infrastructure project comprises two software modules, the server module and the client module. The server module collects the different data streams of the Microsoft Kinect 360 [1], such as depth, color (RGB), IR, and human skeleton joints. The client module can select individual streams or a combination of streams and receive data accordingly. Along with the data streams, data compression and background subtraction are additional features that can be selected in the server module. The performance of the SmartLab system is analyzed on two factors. The first is the frame rate at which the server module captures data from the Kinect: the default output video frame rate of the Kinect 360 is 30 fps, but the computational capability of the host machine affects the capture frame rate. The second is the data retrieval rate on the client side, which is used to analyze network bandwidth utilization. The analysis is performed for each data stream, and the results are presented in the following sections.

The hardware components used in the analysis are two Single Board Computers (SBCs), the Odroid-XU4 [13] and the Lattepanda [12], and the Microsoft Kinect 360 depth sensor [1]. The Odroid-XU4 has a Cortex-A15 2 GHz processor with 2 GB of RAM, and the Lattepanda has an Intel quad-core 1.8 GHz processor with 2 GB of RAM. The Odroid runs a Linux operating system, whereas the Lattepanda is a Windows SBC. The server module was tested on the above-mentioned SBCs, and the client module was tested on a general PC with an i7 processor, 12 GB of RAM, and a Gigabit Ethernet port. The Odroid-XU4 also has a Gigabit Ethernet port, while the Lattepanda has 100 Mbit Ethernet. These hardware properties impact the data capture and transfer rates. The following sections discuss the trade-off between computational power and network bandwidth usage.

5.3.1 Depth data stream

A depth frame collected from the Kinect sensor is 640x480 pixels with 2 bytes per pixel (by default 11 bits per pixel, but computers store each pixel in 16 bits). This sums to 614,400 bytes per frame. The default frame rate of the depth stream is 30 fps (frames per second), so the total data collected from the Kinect by the SBC is 17.578 MBytes (140.625 Mbits) per second. Figure 11a shows the depth stream output on the client side. Both the Odroid and the Lattepanda are able to capture the data at 30 frames per second. The Odroid is also able to stream the data to the client at 30 fps, as evident from Figure 11b, but the Lattepanda suffers from bandwidth limitations while transferring data over the network, given its 100 Mbps LAN port. Figure 11c shows the data receiving rate on the client side when depth data is streamed from the Lattepanda; the frame rate is approximately 20 fps. Data compression reduces the load on the network, but the computation performed for compression reduces the data capture rate on the SBC; consequently, the frame rate on the client drops to 16 fps. Figures 12b and 12c show the network throughput of compressed depth data. This is evident on both the Odroid and the Lattepanda.


(a) depth stream output on client module (b) network throughput for depth stream from Odroid

(c) network throughput for depth stream from Lattepanda

Figure 11: Depth stream output without data compression


(a) depth stream frame rate with compression from Odroid

(b) network throughput for depth stream with compression from Odroid

(c) network throughput for depth stream with compression from Lattepanda

Figure 12: Depth stream results with data compression

5.3.2 Color data stream

The default color frame uses 8-bit video resolution (640x480 pixels, each pixel having three 8-bit channels for the R, G, and B values), as shown in figure 13a. At the default frame rate of 30 fps, the data captured by the SBC is 26.367 MBytes (210.937 Mbits) per second. The Odroid-XU4 successfully streams the data at 30 fps, but the Lattepanda is less efficient and attains an average frame rate of 10 fps. Figures 13b and 13c show the network statistics for color frame streaming from the Odroid and the Lattepanda.

(a) color stream (RGB) output on client module

(b) network throughput for color stream from Odroid

(c) network throughput for color stream from Lattepanda

Figure 13: Color stream output without data compression

Data compression of the color stream also has an impact on the data capture rate of the SBC: the capture frame rate drops to 9 fps on the Odroid and 4 fps on the Lattepanda (originally 30 fps from the Kinect). Using background subtraction further reduces the data capture rate but provides a better compression result. Figure 14 shows the results obtained by utilizing data compression and background subtraction.

(a) network throughput with data compression

(b) frame rate of color stream with data compression

(c) color stream output with background subtraction and data compression

(d) network statistics for color stream output with background subtraction and data compression

Figure 14: Color stream output with data compression


5.3.3 Depth and Color data combination

Figure 15: depth and color image streams output without compression

Depth data and color (RGB image) data together sum to 43.95 MBytes per second at a frame rate of 30 fps. When the same data is collected from two SBCs, the size doubles to 2 x 43.95 MBytes (703.125 Mbits). Two Odroids simultaneously sending data to the client comfortably achieve the full frame rate (30 fps), whereas a single Lattepanda achieves only 8 fps. Figure 16 shows the network throughput obtained from two Odroids, and Figure 17 shows the throughput from a single Lattepanda.


Figure 16: Network statistics for depth and color stream collected from two Odroids

Figure 17: Network statistics for depth and color stream collected from a single Lattepanda


5.3.4 Factors influencing SmartLab software performance

To analyze the SmartLab system, different performance factors were tested over a wired connection between the SmartLab server and client. The first factor is the network bandwidth of the system, including the Ethernet specifications of the SBC and the desktop client. Table 2 and Table 3 show that the Odroid is able to send both color (RGB) and depth data to the client at 30 frames per second, while the Lattepanda achieves only 8 fps. The major difference between these two boards is the Ethernet port: the Odroid has a Gigabit port, whereas the Lattepanda has a 100 Mbps port. Network bandwidth therefore seriously affects the data transfer rate, and the same holds for a wireless network. The second factor is the data compression rate on the server side. Table 4 and Table 5 show the data compression results. From the results it is evident that data compression affects the data collection rate of the server, because most of the computational power is spent on compression; however, compression reduces the load on the network, as shown in Table 5. In addition to these two factors, processing power and the type of compression technique also affect the performance of SmartLab.

Note: All the experiments were conducted on a wired network.

Table 2: Data capture and receiving rate without compression on Odroid and Lattepanda

                 Odroid                                    Lattepanda
Analysis Factor  Capture rate (fps)  Receiving rate (fps)  Capture rate (fps)  Receiving rate (fps)
Depth            30                  30                    30                  20
Color            30                  30                    30                  13
Depth & Color    30, 30              30, 30                29, 27              8
Infrared         30                  30                    30                  30


Table 3: Network throughput without compression

                 Odroid                                     Lattepanda
Analysis Factor  Data collected      Network throughput     Data collected      Network throughput
                 (Mbps)              (Mbps)                 (Mbps)              (Mbps)
Depth            140                 153                    140                 98.7
Color            210                 225                    210                 67
Depth & Color    351                 370                    328                 85

Table 4: Data capture and receiving rate with compression on Odroid and Lattepanda

                             Odroid                                    Lattepanda
Analysis Factor              Capture rate (fps)  Receiving rate (fps)  Capture rate (fps)  Receiving rate (fps)
Depth                        17                  17                    4                   4
Color                        9                   9                     3                   3
Depth & Color                5, 5                5, 5                  2, 2                2, 2
Color background subtracted  11                  11                    5                   5

Table 5: Network throughput with compression

                 Odroid                                     Lattepanda
Analysis Factor  Data collected      Network throughput     Data collected      Network throughput
                 (Mbps)              (Mbps)                 (Mbps)              (Mbps)
Depth            79                  9.3                    18.73               1.3
Color            63.21               55.5                   21.09               7
Depth & Color    59                  48                     23.43               6.9


CHAPTER 6

CONCLUSION AND FUTURE WORK

The impact of smart environments on human life is continuously increasing with the advent of low-priced hardware, and research on the development of smart environments is ramping up. To realize our goal of a smart environment in the Pixel Island lab at Texas A&M University-Corpus Christi, the SmartLab project was started. As an initial step of this extensive project, we proposed a novel and promising system for the SmartLab infrastructure using Single Board Computers and a depth sensor, the Microsoft Kinect. The main motivation for building this system is to provide researchers with an infrastructure for efficiently setting up a data collection mechanism that can be employed in SmartLab applications such as security surveillance, human behavior analysis, and smart interaction.

The SmartLab infrastructure was started with a focus on a compact and cost-effective system for data collection, so the experiments were conducted with pocket-sized Single Board Computers collecting data from depth sensors such as the Microsoft Kinect. Another reason for using SBCs is their cost effectiveness (typically below $100). Although these SBCs have less computational power than general-purpose PCs, our research demonstrated that they are good enough for developing smart environments. We experimented with a variety of Single Board Computers, including the Odroid-XU4 and the Lattepanda. The experimental results revealed that the Odroid performs better on both wired and wireless connections, while the Lattepanda provides an acceptable frame rate for image processing or other applications that use 3D data on a wired network.

We utilized multithreading principles and avoided busy waiting by using lock-free buffers to overcome the computational limitations of Single Board Computers. This approach boosted the data capture rate from the Kinect on the server side and also increased the data retrieval rate on the client side. Moreover, the SmartLab software is equipped with background subtraction as well as a data compression mechanism, and the experimental results reveal that this system reduces the bandwidth usage of each server. Additionally, the SmartLab system provides easy access to a specific data stream (depth, color, infrared, or body joints), unlike other systems where everything arrives together and segregation has to be performed separately to obtain a specific stream. Overall, the SmartLab infrastructure project created a reliable and scalable infrastructure for data collection and processing.
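The lock-free buffering between the acquisition and network threads can be sketched as a single-producer/single-consumer ring buffer built on std::atomic. This is a simplified illustration, assuming exactly one producer thread and one consumer thread; the class name and capacity are hypothetical, not the actual SmartLab code.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Single-producer / single-consumer lock-free ring buffer.
// The capture thread pushes frames; the network thread pops them.
// Neither side blocks or busy-waits inside the buffer itself.
template <typename T, std::size_t Capacity>
class SpscRingBuffer {
public:
    // Returns false (dropping the item) when the buffer is full,
    // so a slow consumer never stalls the Kinect capture loop.
    bool push(const T& item) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % Capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false;  // full: one slot is kept free to distinguish full/empty
        slots_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Returns an empty optional when no item is available.
    std::optional<T> pop() {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return std::nullopt;  // empty
        T item = slots_[tail];
        tail_.store((tail + 1) % Capacity, std::memory_order_release);
        return item;
    }

private:
    std::array<T, Capacity> slots_{};
    std::atomic<std::size_t> head_{0};  // next write index (producer only)
    std::atomic<std::size_t> tail_{0};  // next read index (consumer only)
};
```

With one atomic index owned by each thread, no mutex is needed, which matches the goal of keeping the capture loop free of synchronization stalls.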

6.1 Future work

The proposed design for the SmartLab infrastructure can be improved in multiple ways. In this section, future work is discussed by highlighting the additional components that can be integrated into the server module and the improvements that can be made to the client module.

• Server Module: As part of implementing the server on a Single Board Computer, different SBCs were utilized, specifically the Odroid-XU4, Lattepanda, and Raspberry Pi. Though all of these Single Board Computers are used efficiently for data collection and transfer, their capability is not completely exploited. To utilize the full computational power of an SBC, all the server components should run in parallel. At present, the server implementation has a data acquisition thread and a network thread, but it could be extended to have individual threads for data compression as well as background subtraction. Having multiple threads can also reduce throughput, as synchronization between threads is expensive; however, it would be ideal to implement all the components in parallel and compare the results with the existing system.

The data collected from the Kinect contains RGB, depth, infrared, and body joints. The Microsoft Kinect also has a multi-array microphone, which would enable the server module to collect voice commands as sound signals; implementing a sound component in the server module is part of future research. The Zlib compression library utilized for data compression on the server side is computationally expensive, so implementing a state-of-the-art compression technique is a necessary next step.
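One direction a cheaper scheme could take is to exploit the structure of background-subtracted depth frames, which are mostly zero pixels. The following zero-run-length encoder is a hedged sketch of that idea only; it is not the SmartLab implementation and not a substitute for a real codec, and the function names are hypothetical.

```cpp
#include <cstdint>
#include <vector>

// Encodes a depth frame as a token stream:
//   0, run_length   -> a run of `run_length` zero pixels
//   v (v != 0)      -> a single literal non-zero pixel
// Background-subtracted frames are mostly zeros, so long zero runs
// collapse to two tokens each.
std::vector<std::uint16_t> encodeZeroRuns(const std::vector<std::uint16_t>& depth) {
    std::vector<std::uint16_t> out;
    for (std::size_t i = 0; i < depth.size();) {
        if (depth[i] == 0) {
            std::uint16_t run = 0;
            while (i < depth.size() && depth[i] == 0 && run < 65535) { ++i; ++run; }
            out.push_back(0);
            out.push_back(run);
        } else {
            out.push_back(depth[i]);
            ++i;
        }
    }
    return out;
}

// Inverse of encodeZeroRuns: expands 0-prefixed tokens back into zero runs.
std::vector<std::uint16_t> decodeZeroRuns(const std::vector<std::uint16_t>& enc) {
    std::vector<std::uint16_t> out;
    for (std::size_t i = 0; i < enc.size(); ++i) {
        if (enc[i] == 0) {
            const std::uint16_t run = enc[++i];
            out.insert(out.end(), run, 0);
        } else {
            out.push_back(enc[i]);
        }
    }
    return out;
}
```

Such a pass costs a single linear scan per frame, far less CPU than a full deflate, at the price of a much weaker compression ratio on frames that are not sparse.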

• Client Module: For the client module, the interest is more in easy extensibility of the project by fellow researchers. The client module is coded in C++, but it is possible to integrate Python code as well using Boost.Python [19]. Since Python is a widely used language for data analysis and machine learning, it would be more convenient to code against the client module once Boost.Python [19] is integrated into it.

REFERENCES

[1] Microsoft Kinect developer page. https://developer.microsoft.com/en-us/windows/kinect. Accessed: 07-30-2017.

[2] Mario Martínez-Zarzuela, Miguel Pedraza-Hueso, Francisco Javier Díaz-Pernas, David González-Ortega, and Míriam Antón-Rodríguez. Indoor 3D video monitoring using multiple Kinect depth-cameras. CoRR, abs/1403.2895, 2014.

[3] Asus Xtion specifications. https://www.asus.com/3D-Sensor/Xtion_PRO/. Accessed: 07-30-2017.

[4] Structure Sensor developer page. https://structure.io/. Accessed: 07-30-2017.

[5] Jungong Han, Ling Shao, Dong Xu, and Jamie Shotton. Enhanced computer vision with Microsoft Kinect sensor: A review. IEEE Transactions on Cybernetics, 43(5):1318–1334, 2013.

[6] Zoran Zivkovic. Improved adaptive Gaussian mixture model for background subtraction. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), volume 2, pages 28–31. IEEE, 2004.

[7] Roberto Arroyo, J. Javier Yebes, Luis M. Bergasa, Iván G. Daza, and Javier Almazán. Expert video-surveillance system for real-time detection of suspicious behaviors in shopping malls. Expert Systems with Applications, 42(21):7991–8005, 2015.

[8] Julie A. Kientz, Shwetak N. Patel, Brian Jones, Ed Price, Elizabeth D. Mynatt, and Gregory D. Abowd. The Georgia Tech Aware Home. In CHI '08 Extended Abstracts on Human Factors in Computing Systems, pages 3675–3680. ACM, 2008.

[9] Vincent Ricquebourg, David Menga, David Durand, Bruno Marhic, Laurent Delahoche, and Christophe Logé. The smart home concept: our immediate future. In 2006 1st IEEE International Conference on E-Learning in Industrial Electronics, pages 23–28. IEEE, 2006.

[10] G. Michael Youngblood, Diane J. Cook, and Lawrence B. Holder. The MavHome architecture. Technical Report 33, Department of Computer Science and Engineering, University of Texas at Arlington, 2004.

[11] Maurizio Caon, Yong Yue, Julien Tscherrig, Elena Mugellini, and O. Abou Khaled. Context-aware 3D gesture interaction based on multiple Kinects. In Proceedings of the First International Conference on Ambient Computing, Applications, Services and Technologies (AMBIENT), pages 7–12, 2011.

[12] Lattepanda. Accessed: 07-21-2017.

[13] Odroid-XU4 - Hardkernel. Accessed: 07-21-2017.

[14] OpenNI 2.2. Accessed: 07-21-2017.

[15] NiTE software. http://openni.ru/files/nite/index.html. Accessed: 07-21-2017.

[16] Zlib manual. https://www.zlib.net/manual.html. Accessed: 07-30-2017.

[17] Libfreenect. Accessed: 07-21-2017.

[18] Microsoft Kinect SDK. Accessed: 07-21-2017.

[19] Boost library. Accessed: 07-21-2017.

APPENDIX A

HOW TO SET UP SMARTLAB ENVIRONMENT

Setting up the SmartLab environment is a two-stage process: first we set up the server environment and subsequently the client. From the architecture diagram (Figure 2), it is noticeable that the SmartLab server runs on a Single Board Computer, whereas the client is meant to run on a general-purpose PC. The SmartLab server is designed so that it can run on both Linux and Windows operating systems. In this section, we look at setting up the environment for the SmartLab server in both Linux and Windows.

7.1 Environment for SmartLab server (Linux Environment- ODROID XU4)

This section walks through the steps involved in preparing the environment for a SmartLab server on a Linux Single Board Computer. The ODROID XU4 is a Single Board Computer with ARM architecture; it has an ARM Cortex-A7 processor with a 2.0 GHz clock speed. The Odroid supports Debian-derived operating systems (Ubuntu) as well as embedded operating systems (Android). For developing the SmartLab infrastructure we selected Ubuntu 15.04 with the MATE desktop as our host operating system. All the additional dependency libraries required for running the SmartLab server are listed below:

• OpenCV 3.2

• OpenNI 2.2 (Libfreenect or PrimeSense driver required)

• Libfreenect (for Kinect Xbox 360)

• Boost 1.63.0 (for the Boost.Asio network library)

7.1.1 Ubuntu Image

First, download the Ubuntu image from this link. Burn the image onto an SD card with Win32DiskImager (or any other similar software). Insert the SD card into the ODROID and power it on; if it prompts for a password, type 'odroid'. Connect the Odroid to the internet. The very first step is to install the necessary software with the following commands.

# Remove the unwanted software which comes with the Ubuntu image
sudo apt-get update
sudo apt-get remove --purge libreoffice* plank simple-scan shotwell imagemagick* pidgin hexchat thunderbird brasero kodi rhythmbox xzoom gnome-orca onboard atril mate-utils seahorse tilda
sudo apt-get purge firefox
sudo rm -rf ~/.mozilla/firefox ~/.macromedia ~/.adobe /etc/firefox /usr/lib/firefox /usr/lib/firefox-addons
sudo apt-get clean
sudo apt-get autoremove

# Now install extra dependencies which are essential
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install build-essential checkinstall cmake cmake-curses-gui pkg-config gparted guvcview lightdm-gtk-greeter-settings

7.1.2 Installing OpenCV

On the SmartLab server, OpenCV is required for image processing operations (e.g., background subtraction), and in the future we can also use it to select a region of interest from the camera feed. Follow these steps to build OpenCV from source.

# Install OpenCV dependencies
sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev

# Get the OpenCV 3.2 source (https://github.com/opencv/opencv/archive/3.2.0.zip)
cd ~
git clone https://github.com/opencv/opencv.git
cd ~/opencv

# Build and install OpenCV
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_OPENGL=ON -D WITH_V4L=ON -D WITH_TBB=ON -D BUILD_TBB=ON -D ENABLE_VFPV3=ON -D ENABLE_NEON=ON ..
make -j4
sudo make install
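OpenCV ships robust background subtractors (e.g., the Gaussian mixture model of [6]). As a rough illustration of the underlying idea only, the following plain C++ sketch (no OpenCV; the function name is hypothetical) keeps pixels that differ from a reference background frame by more than a threshold and zeroes out the rest.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Minimal frame-differencing background subtraction: a pixel is treated
// as foreground when its absolute difference from the corresponding
// background pixel exceeds `threshold`; background pixels become 0.
// Real subtractors (e.g., OpenCV's MOG2) model each pixel statistically
// and adapt over time; this only sketches the core comparison.
std::vector<std::uint16_t> subtractBackground(
        const std::vector<std::uint16_t>& frame,
        const std::vector<std::uint16_t>& background,
        std::uint16_t threshold) {
    std::vector<std::uint16_t> fg(frame.size(), 0);
    for (std::size_t i = 0; i < frame.size(); ++i) {
        const int diff = std::abs(static_cast<int>(frame[i]) -
                                  static_cast<int>(background[i]));
        if (diff > threshold)
            fg[i] = frame[i];  // keep foreground pixel
    }
    return fg;
}
```

Zeroing the background this way is also what makes the transmitted frames sparse and therefore cheaper to compress before they leave the server.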

7.1.3 Installing OpenNI

OpenNI provides access to PrimeSense-compatible depth sensors such as the Microsoft Kinect, Asus Xtion, etc. It allows an application to initialize the sensor and obtain RGB, depth, and IR video streams from the device. OpenNI, in coordination with the NiTE middleware, provides skeleton tracking; however, the NiTE libraries are not available for the ARM architecture, so we cannot install NiTE on the ODROID. The subsequent steps help you build the OpenNI libraries from source.

# Install additional dependencies for OpenNI
cd ~
sudo apt-get install -y g++ python libusb-1.0-0-dev libudev-dev openjdk-6-jdk freeglut3-dev doxygen graphviz

# Get the OpenNI 2.2 source (https://github.com/occipital/OpenNI2)
git clone https://github.com/occipital/OpenNI2
cd OpenNI2

# Build the OpenNI libraries for the ARM architecture
PLATFORM=Arm make
cd Packaging && python ReleaseVersion.py Arm
mv Final/OpenNI-Linux-Arm-2.2.tar.bz2 ~
cd ~
tar -xvf OpenNI-Linux-Arm-2.2.tar.bz2
rm -rf OpenNI2
rm OpenNI-Linux-Arm-2.2.tar.bz2

# Install OpenNI
cd OpenNI-Linux-Arm-2.2
sudo sh install.sh

7.1.4 Installing Libfreenect

Libfreenect is a userspace driver for the Microsoft Kinect. It runs on Linux, OS X, and Windows and supports RGB and depth images, the motors, the accelerometer, and the LED. Libfreenect is an open-source driver that interacts with the sensor's raw data streams; it acts as an underlying layer between the Kinect hardware and the higher-level OpenNI layer. Here are the instructions to build Libfreenect in a Linux environment.

# Install dependencies
cd ~
sudo apt-get install libxmu-dev libxi-dev libusb-dev

# Fetch the code
git clone https://github.com/OpenKinect/libfreenect
cd libfreenect

# Build the libraries
mkdir build
cd build
cmake .. -DBUILD_OPENNI2_DRIVER=ON
make -j4

# Copy the Libfreenect driver to the OpenNI directory
Repository=~/OpenNI-Linux-Arm-2.2/Redist/OpenNI2/Drivers/
cp -L lib/OpenNI2-FreenectDriver/libFreenectDriver.so ${Repository}

# Copy the Kinect rules to udev
sudo cp ~/libfreenect/platform/linux/udev/51-kinect.rules /etc/udev/rules.d

7.1.5 Installing Boost

Boost is a set of libraries for the C++ programming language that provides support for multithreading, network communication, regular expressions, and compression. It contains over eighty individual libraries. Install Boost by following this link.

7.2 Environment for Smartlab server (Windows Environment-Lattepanda)

The Lattepanda is an x86 Single Board Computer able to run the Windows 10 operating system. This SBC provides the advantage of using the Microsoft driver for accessing the Kinect. It has a 1.8 GHz clock speed, 2 GB of DDR3 RAM, and a 100 Mbps Ethernet port. The Windows board was specifically chosen to analyze the advantage of using the Microsoft SDK; another advantage of choosing an x86 SBC is the ability to obtain skeleton joints using the NiTE middleware. The following libraries are to be installed on the Lattepanda:

• OpenCV 3.2

• OpenNI 2.2

• NiTE 2.0

• Microsoft Kinect SDK 1.8 (driver for Kinect Xbox 360)

• Boost 1.63.0 (for the Boost.Asio network library)

7.3 Building SmartLab server in Linux

The SmartLab server is built in Linux using CMake, and an executable is generated. Before creating the executable, make sure that all the dependencies are configured and the appropriate paths are set. Thereafter, compile the code and link the binary with all dependencies. The following CMakeLists.txt file illustrates the flow of commands for building the SmartLab server.

project(smartlab_server)
cmake_minimum_required(VERSION 2.8)

# CONFIGURE OPENCV
find_package(OpenCV REQUIRED)

# CONFIGURE BOOST
find_package(Boost REQUIRED COMPONENTS system thread iostreams)

# CONFIGURE OPENNI2
find_library(OPENNI2_LIBRARY
    NAMES OpenNI2
    PATHS "~/OpenNI-Linux-Arm-2.2/Redist"
)
find_path(OPENNI2_INCLUDE_DIR OpenNI.h
    PATHS "~/OpenNI-Linux-Arm-2.2/Include"
)

# CREATE EXECUTABLE
link_directories(${OPENNI2_LIBRARY})
include_directories(${OPENNI2_INCLUDE_DIR} ${Boost_INCLUDE_DIRS})
file(GLOB_RECURSE SRC_FILES src/*.cpp)
add_executable(${PROJECT_NAME} ${SRC_FILES})
target_link_libraries(${PROJECT_NAME} ${OPENNI2_LIBRARY} ${OpenCV_LIBS} ${Boost_LIBRARIES})

7.4 Building SmartLab server in Windows

On the Windows platform, the SmartLab server is built using Visual Studio, and all the include and library dependencies are configured in the project solution. The executable can be moved to another system and run there, but the OpenNI headers and library are required alongside the executable.

7.5 Running SmartLab server in Linux

Initializing the server script on SBC startup is an easy way to run the SmartLab server. To make a startup script, create a shell script mystartup.sh with the following contents under the /etc/init.d/ directory and give it executable permissions. Also, create another script named startupscript.sh under the home directory and give it executable permissions. Start the mystartup service with the instructions shown in the listing.

########## /etc/init.d/mystartup.sh ##########
#!/bin/bash
echo "Starting smartlab server"
# smartlab server script path
/home/odroid/startupscript.sh

########## startupscript.sh ##########
#!/bin/bash
echo "Smartlab server session started" > ~/startuplog.txt
date >> ~/startuplog.txt
# Running the smartlab server script
~/servermodule-linux-arm/smartlab_server >> ~/startuplog.txt 2>&1

# Start your service
service mystartup.sh start

# Install the service to be run at boot time
update-rc.d mystartup.sh defaults

7.6 Running SmartLab server in Windows

In Windows, initiating the SmartLab server is an easy task: just copy the server module executable to the Startup folder. Every time the Windows SBC boots, the SmartLab server is initiated.

7.7 SmartLab client

The SmartLab client is designed for desktop PCs and is platform independent. On the Windows platform the SmartLab client application is compiled using Visual Studio, whereas on Linux platforms CMake is used. Building the executable on both platforms is similar to building the server.

7.8 Running SmartLab client

To run the SmartLab client module, the necessary details are the server IP address and port number. Place these details in the config.json file and also set the selected data streams to true. The config.json file is shown below.

{
    "NETWORK_CONFIG": {
        "SERVER1": {
            "IP_ADDRESS": "127.0.0.1",
            "PORT_NO": 2000
        }
    },
    "VIDEO_CONFIG": {
        "SOURCE_CONFIG": {
            "KINECT": true,
            "CAMERA": false
        },
        "IMAGE_CONFIG": {
            "BGR_IMAGE": true,
            "IR_IMAGE": false
        },
        "DEPTH_CONFIG": {
            "DEPTH_MAP": true,
            "POINT_CLOUD_MAP": false
        },
        "BODYJOINTS_CONFIG": {
            "SKELETON": true
        },
        "BACKGROUND_SUBTRACTION": {
            "RGB_SUBTRACTION": false,
            "DEPTH_SUBTRACTION": false
        },
        "COMPRESSION": {
            "SELECTED": false
        }
    }
}

Save the config.json file in the same folder where the executable is located, then run the client executable to view the selected data streams.
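To illustrate how the client could map the boolean flags from config.json to the streams it requests, here is a hypothetical sketch. The struct, function, and stream names are not from the SmartLab code, and a real client would parse the JSON with a proper library rather than hard-code a struct.

```cpp
#include <string>
#include <vector>

// Hypothetical in-memory form of the config.json stream flags.
// Field names mirror the config keys; values default to "off".
struct StreamConfig {
    bool bgrImage = false;      // IMAGE_CONFIG / BGR_IMAGE
    bool irImage = false;       // IMAGE_CONFIG / IR_IMAGE
    bool depthMap = false;      // DEPTH_CONFIG / DEPTH_MAP
    bool pointCloudMap = false; // DEPTH_CONFIG / POINT_CLOUD_MAP
    bool skeleton = false;      // BODYJOINTS_CONFIG / SKELETON
};

// Translates the enabled flags into the list of stream names the
// client would request from a server (names are illustrative).
std::vector<std::string> selectedStreams(const StreamConfig& cfg) {
    std::vector<std::string> streams;
    if (cfg.bgrImage)      streams.push_back("color");
    if (cfg.irImage)       streams.push_back("infrared");
    if (cfg.depthMap)      streams.push_back("depth");
    if (cfg.pointCloudMap) streams.push_back("point_cloud");
    if (cfg.skeleton)      streams.push_back("body_joints");
    return streams;
}
```

With the config shown above (BGR_IMAGE, DEPTH_MAP, and SKELETON set to true), such a mapping would request exactly the color, depth, and body-joint streams, matching the per-stream access the SmartLab design advertises.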