
Page 1: Test harness for TagCentric Software

Developing a Test Harness for TagCentric

Swathi Musunuri, MS in Computer Science

Committee:

Dr. Craig Thompson

Dr. Brajendra Panda

Dr. Dale Thompson

Page 2: Test harness for TagCentric Software

Outline

Introduction

Problem

Thesis Objective

Background & Related Work

Test Harness Methodology

Implementation

Test Results & Analysis

Conclusion

Summary

Contribution

Future Work

Page 3: Test harness for TagCentric Software

Introduction – Problem

UA TagCentric RFID middleware was released on SourceForge in 1Q07. Since that time, about 2,000 downloads ... but ...

Lack of knowledge of the limits of TagCentric related to scalability and performance

How many RFID readers can TagCentric handle?

What polling times does it support?

Where does TagCentric fail, why does it fail, and what exactly fails?

[Note: other possible weaknesses not considered in this thesis: security, reliability, usability.]

Page 4: Test harness for TagCentric Software

Introduction – Thesis Objective

To develop and use a principled means (in the form of a test harness) to identify performance and scalability boundary conditions for the TagCentric RFID middleware application.

To identify the situations where the application performs well and the situations where it will fail.

Page 5: Test harness for TagCentric Software

Background – TagCentric RFID Middleware

[Architecture diagram: devices (Reader1, Reader2, Tag Printer, Motion Sensor, Camera, ...) connect through device wrappers; agents for the GUI Dashboard, the DBMS, and the devices exchange XML messages between "agents".]

UA TagCentric RFID middleware is an application of the Ubiquity agent system.

4 RFID reader types supported: Alien, Symbol, ThingMagic, and "Fake"

1 Tag printer supported: Zebra

5 databases supported: DB2, Derby, MySQL, Oracle, Postgres

TagCentric Open Source Toolkit available on SourceForge

Page 6: Test harness for TagCentric Software

Background – Application Testing Methods

Systemic problems: Scalability, Performance, Security, Reliability, Usability, Understandability

Ad-hoc vs. systematic approach

Instrumentation, Measurement, Tuning

Why? – to capture application-specific activity

How? – by adding Probes, Gauges, Dials, Monitoring, Profiling

Exposing bottlenecks and hotspots

Which objects consume the most memory?

Which operations consume most of the time?

Configure application parameters to improve performance

Page 7: Test harness for TagCentric Software

Background – Test Harness

An instrumented framework for regression testing of software under controlled conditions

Identify and isolate potential problem areas so as to tackle each of them in a systematic manner

Consists of an exoskeleton of probes, gauges and dials

Probes – small pieces of code inserted into an application to get back useful information about its output & behavior (see the sketch below)

Gauges – monitor and display the behavior of the application

Dials – provide knobs that can be turned to tune the program to desired service levels
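To make the probe and gauge ideas concrete, here is a minimal illustrative sketch, not TagCentric's actual instrumentation (the class and method names are invented), of a probe that could be wrapped around a reader's polling call to count polls and tags, time each poll, and report what it has seen:

```java
// Hypothetical probe: counts polls and tags and measures time spent polling.
// Illustrative only; this is not TagCentric's real instrumentation.
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicLong;

public class PollProbe {
    private final AtomicLong polls = new AtomicLong();
    private final AtomicLong tagsRead = new AtomicLong();
    private final AtomicLong pollNanos = new AtomicLong();

    /** Wrap one poll of a reader and record its duration and tag count. */
    public List<String> timedPoll(Callable<List<String>> poll) throws Exception {
        long start = System.nanoTime();
        List<String> tags = poll.call();          // the real poll (e.g. a fake reader read)
        pollNanos.addAndGet(System.nanoTime() - start);
        polls.incrementAndGet();
        tagsRead.addAndGet(tags.size());
        return tags;
    }

    /** A gauge-style report of what the probe has observed so far. */
    public String report() {
        long p = polls.get();
        double avgMs = (p == 0) ? 0 : pollNanos.get() / 1e6 / p;
        return String.format("polls=%d tags=%d avgPollTime=%.2f ms", p, tagsRead.get(), avgMs);
    }
}
```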

Page 8: Test harness for TagCentric Software

Related Work – Tools and Techniques

Monitoring & Management Tools

JConsole

Profiling Tools

NetBeans IDE 4.x Profiler

JProbe Suite

Resource Consumption Tools

Application Manager

RAMPAGE

Bandwidth Monitor

Page 9: Test harness for TagCentric Software

Focus – Scalability

Two kinds of scalability:

Vertical – make a unit bigger or faster; to add more capacity, improve a unit to do more

Horizontal – make more units; to add more capacity, add more units

For example: 1 crane vs. 1000 wheelbarrows; 1 giant apple vs. 1000 apple trees

RFID middleware examples: 1000 readers at a warehouse

Fast polling time per reader

Multicast by each reader over one network

One database for 1000 readers

Scalability (horizontal or vertical) is the ability to add capacity to accommodate growth

Businesses need capacity planning: don't want to buy too much or too little capacity for a given problem; want assurances that your configuration will operate correctly under varying conditions; want cost estimates

Page 10: Test harness for TagCentric Software

Focus – Performance

Throughput – what are the bottlenecks?

Number of tags a reader could read in unit time

Number of tag read records inserted into database

Failure Point

Number of tags a reader misses reading in unit time

GUI Agent’s Unresponsiveness

At what point does TagCentric become unresponsive?

What breaks? – agent communication /Reader /GUI

Resource Utilization

Consumable and concurrent resources

CPU utilization?

Memory utilization?

Disk utilization?

Network Bandwidth?

CPUs, RAM memory, Disk Space

Databases: MySQL, Oracle

Page 11: Test harness for TagCentric Software

Test Harness – Framework for asking questions (four tests)

Performance Test

How many tags are read and dropped by a TagCentric reader in unit time? What are the effects of poll period, test length, and test environment (isolated/distributed) on TagCentric performance?

Tuning JVM Heap Size Test

How many readers can fit into a single JVM? What are the effects on TagCentric scalability and performance if we tune the JVM heap size (10, 64, 128, 256, 512 or 1024 MB)? Is TagCentric memory bound? What are the effects of RAM memory and JVM heap sizes?

Scalability Test

How many fake readers can TagCentric support at any time? Can we improve this number by adjusting parameters or adding more hardware? Is TagCentric CPU bound (vertical scalability)?

Load Test

What are the limits when we add more clients (horizontal scalability)?

Page 12: Test harness for TagCentric Software

TagCentric Test Harness – Methodology for answering questions

Understand TagCentric architecture to determine possible failure points

Instrument TagCentric with probes and gauges

Measure and collect data from TagCentric under varying conditions

Profile the application to identify memory leaks, deadlocks or thread contention

Analyze results to identify bottlenecks - Graph the usage per resource & plot how that changes over time

Tune parameters like Poll Period and JVM Heap size and iterate

Page 13: Test harness for TagCentric Software

Testing Considerations – Measures & Test Configurations

Test Conditions:

Tests performed in the RFID Lab of the CSCE Department at UARK

Tests run on the Windows platform only

Tests confined to the TagCentric fake reader type only

Measures:

Test Time – for how long do tagged items move across the reader?

Poll Period (PP) – frequency at which the reader agent polls the reader. Polling more often ~ more stress for the reader (items move quickly) ~ lower PP; polling less often ~ tagged items move relatively slowly ~ higher PP

Range of values = 100 ms, 200 ms, 500 ms & 1000 ms

Least allowable value = 100 ms, so the fastest rate at which a fake reader can be polled is once every 100 ms – one boundary condition found!

Throughput – # of tags that a fake reader reads in unit time (tags/sec)

JVM Heap size

Resource Utilization stats: CPU, Memory, Disk, Network

# of Fake Readers supported by TagCentric

# of Application Clients running TagCentric (on separate machines)

Page 14: Test harness for TagCentric Software

Testing Considerations – Measures & Test Configurations

Poll Period

If, in one poll, the fake reader reads x tags, then y polls yield (x*y) tags in unit time.

So, the Poll Period determines the maximum possible throughput!

But how large is the throughput? It depends on the # of polls made by the fake reader in unit time at that poll period.

Then, how do we determine the # of polls?

Page 15: Test harness for TagCentric Software

Testing Considerations – Measures & Test Configurations

Observation (with help of probes) :

Fake reader reads

In 1 poll 2 tags

(or 1 Tag Read = 2 Tags)

If unit time = 1 second, then:

# of polls in 1 sec at a given PP = 1000 ms / PP (ms)

Therefore, throughput (tags/sec)

= # of polls made in 1 second at given PP * 2 tags per poll
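As a sanity check on the formula above, the sketch below plugs in the four tested poll periods, assuming exactly 2 tags per poll as observed via the probes; it is an illustrative calculation, not TagCentric code:

```java
// Expected throughput per the formula: (1000 / PP_ms) polls/sec * 2 tags/poll.
// Illustrative only; assumes exactly 2 tags are returned per poll.
public class ExpectedThroughput {
    static double tagsPerSecond(int pollPeriodMs) {
        double pollsPerSecond = 1000.0 / pollPeriodMs;
        return pollsPerSecond * 2;               // 2 tags per poll (observed via probes)
    }

    public static void main(String[] args) {
        int[] pollPeriods = {1000, 500, 200, 100};
        for (int pp : pollPeriods) {
            // Prints: 1000 ms -> 2.0, 500 ms -> 4.0, 200 ms -> 10.0, 100 ms -> 20.0 tags/sec
            System.out.printf("PP = %4d ms -> expected %.1f tags/sec%n", pp, tagsPerSecond(pp));
        }
    }
}
```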

Page 16: Test harness for TagCentric Software

Testing Considerations – Measures & Test Configurations

[Diagram: agents in the test configuration. Tags pass by the Reader; the Reader Agent (RA) collects tag data and emits/multicasts it; the Database Agent (DB_A) stores tag data in the Database; a Test Panel drives the test. Points of interest: emitting data and storing data; dropped tags & duplicate tags.]

Page 17: Test harness for TagCentric Software

Testing Considerations – Measures & Test Configurations

Throughput Calculation

Dropped Tags – general concept: tags that are emitted but not stored in the DB; our concept: the difference between Expected and Observed Throughput

Duplicate Tags – tags having the same Tag ID and Timestamp as other tags

Possible Failure Points:

Sending side – while emitting data (are any packets dropped after collecting data but before emitting it?)

Receiving side – while collecting emitted data (are any packets dropped due to thread contention?); while storing data (is the database inserting the required # of tags?)

Ideal Throughput can be calculated using the following formula:

Number of Tags read by a fake reader = (# of polls made by the fake reader in one second) * (2 tags per poll) * (Run Time or Test Time, in seconds, over which tags are sent across the fake reader for scanning)

(A receive-side probe sketch for the multicast path follows below.)
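To illustrate how a receive-side probe could check the multicast path for dropped packets, here is a minimal sketch using Java's standard MulticastSocket; the group address and port are hypothetical, and this is not TagCentric's actual agent-communication code:

```java
// Hypothetical receive-side probe: join a multicast group and count received datagrams,
// so the count can later be compared against the sender's emit count.
// Group address and port are invented for illustration; not TagCentric's networking code.
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastReceiveProbe {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1"); // hypothetical group
        try (MulticastSocket socket = new MulticastSocket(4446)) { // hypothetical port
            socket.joinGroup(group);
            byte[] buf = new byte[2048];
            long received = 0;
            while (received < 1000) {                 // stop after 1000 packets for this demo
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);               // blocks until a datagram arrives
                received++;
            }
            System.out.println("datagrams received: " + received);
            socket.leaveGroup(group);
        }
    }
}
```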

Page 18: Test harness for TagCentric Software

Performance Test – Description

Test is conducted using a single fake reader

Every test case of the Performance Test has the JVM Heap Size set to its default value: 64 MB.

Test is first carried out under two configurations:

Single (isolated) system – the TagCentric reader and the DB server run on the same machine

Distributed system – the TagCentric reader runs on one machine and the DB server on another.

Test is repeated for different Test Times (Run Times): 1 minute, 5 minutes, and 15 minutes.

In each run of the Test, results are measured and calculated for the following Poll Periods: 1000ms (1 second), 500ms, 200ms and 100ms.

Page 19: Test harness for TagCentric Software

Performance Test – Results & Analysis

Sample Performance Test result where Test Time = 60 sec (all values are averages):

Column definitions:

PP – Poll Period, in milliseconds

E#TR – Expected # of Tag Reads = (# of polls) * (Test Time in sec)

O#TR – Observed # of Tag Reads emitted (from logs)

Drop Count – # of Tag Reads missed by the fake reader = E#TR - O#TR

#DUPS – # of Duplicate Tags (from logs)

E#DBR – Expected # of DB Rows = (O#TR * 2) - #DUPS

O#DBR – Observed # of DB Rows = (rows returned by query) - (# of dummy tags)

  PP        E#TR   O#TR   Drop Count   #DUPS   E#DBR   O#DBR
  1000 ms     60     59        1          6      112   113 - 1 = 112
  500 ms     120    117        3         12      222   223 - 1 = 222
  200 ms     300    293        7         26      560   561 - 1 = 560
  100 ms     600    572       28         62     1082   1083 - 1 = 1082

E#DBR = O#DBR => expected output = observed output (after accounting for the drop in the # of Tag Reads).

Conclusion: the DB is not the problem!

(The table's arithmetic is checked in the short sketch below.)
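As a quick check that the columns above are internally consistent, the following small sketch recomputes Drop Count and Expected # of DB Rows from the measured E#TR, O#TR, and duplicate counts; it is illustrative only and uses the table's averaged values:

```java
// Recompute Drop Count = E#TR - O#TR and Expected DB Rows = O#TR*2 - #DUPS
// for the 60-second performance test table (averaged values from the table above).
public class PerformanceTableCheck {
    public static void main(String[] args) {
        int[] pollPeriodMs     = {1000, 500, 200, 100};
        int[] expectedTagReads = {60, 120, 300, 600};   // E#TR
        int[] observedTagReads = {59, 117, 293, 572};   // O#TR (from logs)
        int[] duplicateTags    = {6, 12, 26, 62};       // #DUPS (from logs)

        for (int i = 0; i < pollPeriodMs.length; i++) {
            int dropCount = expectedTagReads[i] - observedTagReads[i];
            int expectedDbRows = observedTagReads[i] * 2 - duplicateTags[i];
            // Prints drop counts 1, 3, 7, 28 and expected DB rows 112, 222, 560, 1082
            System.out.printf("PP %4d ms: dropped=%d expectedDbRows=%d%n",
                    pollPeriodMs[i], dropCount, expectedDbRows);
        }
    }
}
```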

Page 20: Test harness for TagCentric Software

Performance Test – Results & Analysis

[Two bar charts: "Fake Reader's Performance w.r.t. Poll Period & Test Time", one for the Single (Isolated) System environment and one for the Distributed System environment. X-axis: Poll Period (1000, 500, 200, 100 ms); y-axis: no. of tags missed by a Fake Reader; one series per Test Time (1 min, 5 min, 15 min).]

Low values of Poll Period lead to a higher # of Dropped Tags!

For the same PP & TT values, the Drop Count is lower in the case of the Distributed System!

Larger values of Test Time lead to larger # of Dropped Tags!

Analyzing the effects of Poll Period and Test Time on TagCentric performance

Page 21: Test harness for TagCentric Software

Performance Test – Results & Analysis

[Two bar charts: "TagCentric's Throughput vs. Poll Period", showing the no. of tags inserted into the database by a Fake Reader at Poll Periods of 1000, 500, 200, and 100 ms. Isolated System: 112, 222, 560, and 1082 tags; Distributed System: 112, 225, 563, and 1125 tags.]

Throughput is obviously higher at low Poll Period values, but ...

... throughput is a little larger if we distribute the application client and the database server across two separate machines rather than running them on a single system!

Isolated System vs. Distributed System

Analyzing the performance of TagCentric from a Throughput perspective

Ideal Throughput can be calculated using the following formula:

Number of Tags read by a fake reader = (# of polls made by the fake reader in one second) * (2 tags per poll) * (Run Time or Test Time, in seconds, over which tags are sent across the fake reader for scanning)

Page 22: Test harness for TagCentric Software

Performance Test – Conclusions

Failure Point perspective:

Max. # of Tags dropped by the fake reader @ PP = 100 ms

# of Dropped Tags is least (nil) @ PP = 1000 ms or larger values

Faster read rates (lower PPs) result in a larger # of Dropped Tags

Longer Test Times cause more tags to be dropped

Drop Count seems to be relatively lower in the case of a Distributed Environment

Tag drop occurs on the sending side: tags are missing at the point of emission, even before data is stored in the Database

Suspected reason(s): Thread Contention – multiple fake readers hitting the network stack at the same time; UDP – the multicast socket is swamped by large amounts of data and its queue overflows

But our measurements disprove that theory:

The Performance Test is conducted with only a single fake reader, not multiple fake readers, so Thread Contention is not a plausible suspect

If packets were dropped, why isn't the DB reflecting the drop?

Page 23: Test harness for TagCentric Software

Performance Test – Conclusions

TagCentric's code logic assumes that polling actions take no time to emit tag data, but in the real world, polling tasks do take some time to emit tag data!

The observed poll period is not the same as the advertised one!

Gauge the logs to see if the time gap between tag reads is longer than the poll period set for the test

For longer poll periods, the time gap is longer than the set poll period, and it is not consistent across tag reads either – possibly due to CPU congestion

We are measuring a "throughput reduction", not a tag drop!

(The sketch below illustrates how a fixed-delay polling loop drifts in this way.)
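To illustrate the effect described above, here is a minimal, hypothetical sketch (not TagCentric's code) of a fixed-delay polling loop: because it sleeps for the full poll period after each poll finishes, the effective period is the poll period plus the time the poll itself takes, so the observed gap drifts above the advertised poll period.

```java
// Minimal illustration of fixed-delay polling drift (not TagCentric's actual loop).
// Effective period = pollPeriodMs + time spent doing the poll, so the observed
// gap between polls is always longer than the configured poll period.
public class PollDriftDemo {
    public static void main(String[] args) throws InterruptedException {
        final long pollPeriodMs = 200;         // advertised poll period
        long last = System.nanoTime();
        for (int i = 0; i < 5; i++) {
            doPoll();                          // pretend poll work (~30 ms here)
            Thread.sleep(pollPeriodMs);        // fixed delay AFTER the work finishes
            long now = System.nanoTime();
            System.out.printf("observed gap: %.1f ms (advertised %d ms)%n",
                    (now - last) / 1e6, pollPeriodMs);
            last = now;
        }
    }

    private static void doPoll() throws InterruptedException {
        Thread.sleep(30);                      // stand-in for the time a real poll takes
    }
}
```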

Page 24: Test harness for TagCentric Software

Tuning JVM Heap Test – Description

The latest version of the JVM is recommended for use; the Java HotSpot Client VM, build 1.6.0_05-b13, was used for this test.

The JVM heap size is tuned from 10 MB, 16 MB, 32 MB, 64 MB, ... up to 1024 MB. We measure:

# of Tags inserted into the database (# of DB rows)

# of Fake Readers TagCentric can support without breaking

Min. & max. heap sizes are set to the same value to minimize garbage collections – that is, Xms = Xmx

H/W configurations used for the test: 1 CPU, 2 CPUs; 512 MB, 1 GB & 2 GB RAM

Any relationship between RAM memory and JVM heap size? Does more memory lead to more scalability?

Resource consumption statistics provide a measure of performance

Tools used: JConsole, Application Manager, RAMPAGE, Windows Network Task Manager, Bandwidth Monitor

Probes inserted in the ReaderPanel.java and FakeReader.java files

Page 25: Test harness for TagCentric Software

Tuning JVM Heap Test – Results & Analysis

Testing for TagCentric’s Scalability while tuning the JVM Heap

Scalability graph is not linear w.r.t JVM heap size

From a peak point onwards, capacity decreases

TagCentric supports more fake readers when RAM memory is increased: 22 readers @ 128 MB JVM heap with 512 MB RAM; 42 readers @ 256 MB JVM heap with 1 GB RAM

[Bar chart: "TagCentric's Capacity (in terms of # of Fake Readers) w.r.t. JVM Heap Size", test systems: 1 CPU, 512 MB RAM vs. 1 CPU, 1 GB RAM. At JVM heap sizes of 10, 16, 32, 64, 128, 256, 512, and 1024 MB, the 512 MB RAM system handles 19, 20, 20, 21, 22, 20, 20, and 21 fake readers; the 1 GB RAM system handles 31, 37, 40, 40, 41, 42, 39, and 40.]

Page 26: Test harness for TagCentric Software

Tuning JVM Heap Test – Results & Analysis

[Two charts: "TagCentric's Performance (in terms of % CPU Utilization) w.r.t. JVM Heap Size" and "TagCentric's Performance (in terms of Memory Utilization, in MB) w.r.t. JVM Heap Size", each comparing test systems with 1 CPU, 512 MB RAM vs. 1 CPU, 1 GB RAM across JVM heap sizes of 10 to 1024 MB.]

Testing for TagCentric’s performance (Resource Usage) while tuning the JVM Heap

The graphs above show the trends in resource usage while tuning the JVM heap.

There should be some optimal JVM heap value that gives TagCentric balanced performance in terms of resource consumption.

Page 27: Test harness for TagCentric Software

Tuning JVM Heap Test – Results & Analysis

[Bar chart: "TagCentric's Performance (in terms of # of DB Rows) w.r.t. JVM Heap Size", test systems: 1 CPU, 512 MB RAM vs. 1 CPU, 1 GB RAM. At JVM heap sizes of 10, 16, 32, 64, 128, 256, 512, and 1024 MB, the 512 MB RAM system inserted 13054, 11703, 14681, 15235, 15265, 14627, 13993, and 17504 DB rows; the 1 GB RAM system inserted 29160, 37391, 45938, 45213, 49160, 46323, 37510, and 42828 DB rows.]

Testing for TagCentric’s Performance (Throughput) while tuning the JVM Heap

Throughput, or the data that fake readers read into the Database, can be determined by observing the number of rows inserted into the Database.

At a JVM heap size of 128 MB, maximum throughput is achieved (in both cases); beyond that point, the # of tags read into the Database drops.

Adding memory provides more room for the larger throughput produced by TagCentric's fake readers!

Page 28: Test harness for TagCentric Software

Tuning JVM Heap Test – Results & Analysis

Relationship between RAM memory & JVM Heap

Case 1: 1 CPU, 512 MB RAM

  JVM Heap   # of Fake Readers   Available RAM Before   Exception       DB's Last Tag
  Size       Supported           Appln. Start (MB)      Thrown Time     Inserted Time
  10 MB      19                  154                    10:00:02.578    10:00:04.453
  16 MB      20                  134                    10:19:48.359    10:19:48.828
  32 MB      20                  152                    10:39:03.265    10:39:03.812
  64 MB      21                  173                    10:55:22.453    10:55:23.828
  128 MB     22                  187                    11:07:32.859    11:07:33.734
  256 MB     20                  194                    11:18:34.015    11:18:35.046
  512 MB     20                  186                    11:35:57.640    11:35:57.812
  1024 MB    21                  191                    11:49:16.953    11:49:17.765

Case 2: 1 CPU, 1 GB RAM

  JVM Heap   # of Fake Readers   Available RAM Before   Exception       DB's Last Tag
  Size       Supported           Appln. Start (MB)      Thrown Time     Inserted Time
  10 MB      31                  443                    3:21:53.842     3:21:54.514
  16 MB      37                  430                    3:47:32.171     3:47:32.467
  32 MB      40                  430                    4:04:21.014     4:04:21.482
  64 MB      40                  430                    4:21:46.436     4:21:46.686
  128 MB     41                  606                    4:47:46.530     4:47:46.858
  256 MB     42                  518                    5:15:45.905     5:15:46.327
  512 MB     39                  502                    5:41:46.296     5:41:46.796
  1024 MB    40                  502                    6:09:26.374     6:09:26.78

As long as the available RAM memory is less than the JVM heap size that is set => decrease in scalability!

For 512 MB of physical RAM, a JVM heap of 128 MB yields TagCentric's maximum capacity; for 1 GB of physical RAM, a JVM heap of 256 MB is optimal.

DB / agent communication does not break; the Fake Reader breaks

Page 29: Test harness for TagCentric Software

Tuning JVM Heap Test – Conclusions

Small heap values cause the TagCentric system to fail early

Large heap values cause more garbage collections

Ideal JVM values required:

Set Xms = Xmx to minimize garbage collections

For maximum performance & scalability, set the JVM heap size to ¼ of the RAM memory provided to the system

Ex: for a machine with 1 GB RAM, the optimal JVM heap value = 256 MB (see the sketch below)

As the JVM heap size is increased, the scalability of TagCentric also increases, but very large heap values decrease TagCentric's scalability and performance.

TagCentric is memory bound! Adding RAM increases scalability & performance

How many readers fit into one JVM at the maximum? – let us also look at the Scalability Test and its results.
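As a small illustration of the rule of thumb above, the hypothetical helper below turns a machine's RAM size into the suggested -Xms/-Xmx launch flags (heap = ¼ of RAM, with Xms = Xmx); it is a sketch of the tuning guideline, not a TagCentric utility.

```java
// Hypothetical helper: apply the rule of thumb (JVM heap = RAM / 4, Xms = Xmx)
// and print the corresponding JVM launch flags. Illustrative only.
public class HeapRuleOfThumb {
    static String suggestedFlags(int ramMb) {
        int heapMb = ramMb / 4;                          // 1/4 of physical RAM
        return String.format("-Xms%dm -Xmx%dm", heapMb, heapMb);
    }

    public static void main(String[] args) {
        int[] ramSizesMb = {512, 1024, 2048};
        for (int ram : ramSizesMb) {
            // Prints: 512 MB -> -Xms128m -Xmx128m, 1024 MB -> -Xms256m -Xmx256m, ...
            System.out.printf("RAM %4d MB -> %s%n", ram, suggestedFlags(ram));
        }
    }
}
```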

Page 30: Test harness for TagCentric Software

Scalability Test – Description

Phase 1: Test whether or not TagCentric is CPU bound

Phase 2: Test for vertical scalability – the effects of hardware configuration on the capacity of TagCentric (# of fake readers it can handle):

1 CPU (single CPU), 512 MB RAM

1 CPU (single CPU), 1 GB RAM

2 CPUs (dual processor), 512 MB RAM

2 CPUs (dual processor), 1 GB RAM

2 CPUs (dual processor), 2 GB RAM

The JVM heap is set to its optimal value based on the system's RAM memory

The Poll Period is likewise set to the value that gives the best performance (least drop in # of tags) from TagCentric: 1000 ms

Page 31: Test harness for TagCentric Software

Scalability Test – Results & Analysis

Phase 1: Testing TagCentric's performance when the # of CPUs is changed (Case 1: 1 CPU, 512 MB RAM; Case 2: 2 CPUs, 512 MB RAM)

[Two charts, "Identifying whether or not TagCentric is CPU Bound": % CPU utilization is 58% with 1 CPU vs. 21% with 2 CPUs; memory utilization is 98 MB with 1 CPU vs. 45 MB with 2 CPUs.]

CPU utilization is lower in the case where CPUs were added – the # of processors does make a difference in performance

Page 32: Test harness for TagCentric Software

Scalability Test – Results & Analysis

Phase 2: Testing TagCentric's scalability with the addition of hardware

[Bar chart: "Analyzing Scalability of TagCentric (in terms of # of Fake Readers it can handle) w.r.t. the System's Hardware Resources, # of CPUs & RAM Memory (Poll Period = 1000 ms, held constant)". # of fake readers TagCentric supports: 73 with 1 CPU, 512 MB RAM, 128 MB JVM heap; 144 with 1 CPU, 1 GB RAM, 256 MB JVM heap; 157 with 2 CPUs, 512 MB RAM, 128 MB JVM heap; 169 with 2 CPUs, 1 GB RAM, 256 MB JVM heap; 195 with 2 CPUs, 2 GB RAM, 512 MB JVM heap.]

Page 33: Test harness for TagCentric Software

Scalability Test – Conclusions

A rule-of-thumb guide for desired levels of scalability & performance can be seen in this table:

  Required Scalability     Minimum Required               Minimum System Hardware
  (# of Fake Readers)      Environmental Settings         Configuration Required
  0 – 20                   64–128 MB JVM, PP = 200 ms     1 CPU, 128–256 MB RAM
  20 – 40                  128 MB JVM, PP = 200 ms        1 CPU, 512 MB RAM
  40 – 70                  128 MB JVM, PP = 1000 ms       1 CPU, 512 MB RAM
  70 – 130                 256 MB JVM, PP = 1000 ms       1 CPU, 1 GB RAM
  130 – 150                128 MB JVM, PP = 1000 ms       2 CPUs, 512 MB RAM
  150 – 170                256 MB JVM, PP = 1000 ms       2 CPUs, 1 GB RAM
  170 – 190                512 MB JVM, PP = 1000 ms       2 CPUs, 2 GB RAM

Addition of hardware increases the scalability and performance of TagCentric - vertical scaling is positive

Up to 190 fake readers can fit into a single JVM and can be supported by TagCentric without breaking – By adding more hardware this number could be increased further

Memory and CPU serve as bottlenecks for the scalability and performance from TagCentric

Page 34: Test harness for TagCentric Software

Load Test – Description

Phase 1: Compare the results in an isolated system with those of a distributed system (load of 1 application client) and identify which case results in an improved level of scalability

For comparison, consider the best-case results of an isolated system from the Scalability Test

Phase 2: Test whether horizontal scaling (# of application clients) of TagCentric is positive or negative

Case-1: 1 TagCentric Client and 1 Database Server

Case-2: 2 TagCentric Clients and 1 Database Server

Page 35: Test harness for TagCentric Software

Load Test – Results & Analysis

  System Role     RAM, HDD       # of    # of Fake Readers   DB Size       % CPU         Memory
                  Capacity       CPUs    Supported           (# of Rows)   Utilization   Used
Case 1: Isolated (single) system
  Single System   1 GB, 80 GB    2       169                 295,861       49%           149 MB

Case 2: 1 TagCentric client and 1 database server
  DB Server       1 GB, 80 GB    2       149                 303,971       41%           132 MB
  Appln. Client   1 GB, 80 GB    2       149                 -             11%           14 MB

Case 3: 2 TagCentric clients and 1 database server
  DB Server       1 GB, 80 GB    2       165 (85 + 80)       502,514       51%           239 MB
  Client-1        1 GB, 80 GB    2       85                  -             28%           199 MB
  Client-2        1 GB, 80 GB    1       80                  -             45%           47 MB

Case-1 & Case-2:

The distributed system results in less scalability but higher throughput than the isolated system.

Performance (resource usage) is better in the single system than the distributed system

Case-2 & Case-3:

TagCentric grows in capacity with an increase in the # of users (positive horizontal scaling)

Distributed system is not as cost-effective! – more hardware and resource consumption

Page 36: Test harness for TagCentric Software

Understanding TagCentric – Other Results

Using the following tools for testing, additional results about the behavior of TagCentric were identified:

JConsole Diagnostic tool

NetBeans Profiler

JProbe Code Profiler

Results were:

There are no Memory Leaks / Deadlocks / Looping Threads within the application’s code.

There are no Pending Finalization objects.

OOME (OutOfMemoryError) occurs due to insufficient memory, which in turn results from an inappropriate JVM heap size setting.

Thread Contention is evident in the application's code, which may lead to lower performance from TagCentric.

Page 37: Test harness for TagCentric Software

Conclusion – Summary

Developed a test harness for TagCentric – a suite of test cases and tools that helps us instrument and better understand the performance and scalability of this complex code base

Demonstrated the usefulness of the test harness: TagCentric is safest at slower read rates of a fake reader (longer poll periods).

The advertised Poll Period is not the same as the observed Poll Period – this can lead to a reduction in throughput at each poll period.

Thread Contention might limit the effect of JVM tuning on the performance of TagCentric.

Adding hardware increases TagCentric's capacity to a large extent.

TagCentric scales well as the workload increases, provided the hardware resources, JVM heap, and poll period values are configured appropriately.

In summary, this thesis provides a TagCentric scalability and performance guide that will be useful to users (developers who deploy RFID middleware) as well as TagCentric's own developers, helping them tune (right-size) a system configuration to achieve acceptable scalability and performance levels.

Page 38: Test harness for TagCentric Software

Conclusion – Future Work

Need to test TagCentric with larger workloads (more application clients and clustered systems)

What if a large number of fake readers are run and a vast amount of data is generated at the same time – would we kill the Database? Database testing would be a nice-to-have component in the development of a Test Harness

Need to add security and reliability to TagCentric

Access control, authentication, intrusion detection, and sensor monitoring

Redundancy schemes to avoid data loss from single points of failure

More tests will be needed if we extend TagCentric for new purposes

Ex: Real-time Asset Location Management, Event Notification and Response to Shippers

Our Test Harness is manual – an automated testing framework would be better

Page 39: Test harness for TagCentric Software

QUESTIONS?