
IDC Technical Journal
Where technology starts

Issue 04, 2014

Topics: Datomic, Java, WBF, CSS, AngularJS, Automation, Programming

In-depth articles:

- Reactive Programming on the JVM
- Automation of web service secure token communication using advanced features of SoapUI
- The Importance of Capacity Planning and Monitoring NoSQL (Not Only SQL) Data Warehouse DMS
- Security Challenges and Solutions
- Enterprise Usability: Beyond the Button
- Datomic for historical data reporting

App reviews:

- Review of Mobile Apps: what's new in the mobile apps world

Short articles:

- Easier Device Certificate Management by integrating SCEP into ZENworks Mobile Management
- Role of EQ (emotional quotient) in Workplace Productivity

Bits and Bytes:

- Breaking the Paradigm - Strategies to Test


EDITORIAL BOARD

Jaimon Jose
Pradeep Kumar Chaturvedi
Shalabh Kumar Garg

CONTRIBUTING EDITORS

Archana Tiwary
Mridula Mukund
Vijay Kulkarni

COVER DESIGN

Sandeep Virmani

BOOK COMPOSITION

Arul Kumar Kannaiyan


Contents

IDC Tech Journal, Volume 4 - August 2014

In-Depth Articles
1 - Improve Mobile Application Performance (Anmol Rastogi)
3 - Automation of Web Service Secure Token Communication by Using Advanced Features of SoapUI (Anup Kumar R)
7 - Enterprise Usability - Beyond the Button (Harippriya Sivapatham)
9 - A Structured Way of Executing Exploratory Testing (Harish Mohan)
11 - Responsive Design (Nirmal Balasubramanian)
13 - Historical Data Reporting and Datomic (Rajiv Kumar)
17 - Multi-Factor User Authentication by Using Biometric Identification - Windows Biometric Framework (WBF) Case Study (Sachin Keskar)
19 - Innovation Sparks - 1 (GNVS Sudhakar)
21 - The Importance of Capacity Planning and Monitoring NoSQL (Not Only SQL) Data Warehouse DMS (Sudipta Roy)
23 - Smart Testing Ideas (Vikram Kaidabett)
27 - Asynchronous Reactive JVM Programming (Sureshkumar Thangavel)

Miscellaneous
ii - Editorial: Learning from the Hackathon (Jaimon Jose)
5 - Bits & Bytes: Breaking the Paradigm - Strategies to Test (Vijay Kumar Kuchakuri)

Reviews
6 - Mobile Apps (Android) (Karthik)

Short Articles
16 - MDM Certificate Management - Using SCEP the Right Way (Ashok Kumar)
25 - A Role of Emotional Intelligence Quotient (EQ) in Workplace Productivity (Radha Devaraj)


EDITORIAL

Learning from the Hackathon

Hackathons with a collaborative spirit can foster an environment of success and triumph over long odds and obstacles in a competitive environment. We recently concluded a very successful Hackathon where more than 100 engineers sweated it out for 2-4 days to build something that they had nurtured over a period of time. This was the first time we conducted such an event, though innovation and the freedom to do something different have always been encouraged. There are a few interesting takeaways from this event.

Number of ideas submitted

The organizers intentionally kept the idea submission page public. It was interesting to note the rate at which new submissions came in. Though engineers were initially confused about what qualifies for a Hackathon, reading through already submitted ideas appeared to give them an opportunity to review their proposals and generate more such ideas. Our friends had generated over 100 ideas by the time we started hacking, with over 150 engineers participating in the event.

Participation

Overall, hackers felt that the arrangements in the hackdens and other conveniences helped them focus and deliver what they intended to in a short duration. Attention deficiency is common in our workplaces today; such events help participants get uninterrupted time to focus on their ideas. In two days, they had to connect with their teammates, arrive at a consensus on their idea, assign tasks, and avoid overlap.

Impact of the work

One may not expect to solve a big problem in a Hackathon, or to release a product after being holed up in a hackden for a couple of days. But participants can make small dents in some of the most intractable problems. They get to test out a new idea. More importantly, they realize their potential, and getting the work done within time constraints boosts their confidence. It is important to note the energy such events usher into the team: people felt invigorated and recharged to do more.

Quality of the work

Though a Hackathon is meant for unlocking creative skills, the proposals ranged from complex problems to simple bug fixes. The hackers completed 63 ideas in the two days of the Hackathon (some members worked over the weekend!), and of these, 8-9 were really worth considering as major features of various products. This is an encouraging number, because this was our first Hackathon.

In short, such Hackathons do the organization good, and participation in such events motivates everybody and boosts their confidence. Teams should explore ways and means to implement similar events at their team level. They are great for generating insights into how you work, and for assessing your capability when you are completely devoted to a single pursuit.

Jaimon Jose


Improve Mobile Application Performance

Anmol Rastogi
A Specialist @Novell in Product Development with 11 years of experience. I received an M.C.A. degree from UP Technical University in 2003 and eventually earned my spot as a Senior Software Engineer with Symphony Software Services.

View Point

Nowadays, applications are deployed on almost every mobile device, such as iPhones and Android phones. These applications provide continuous access to online activities: broadband media via sites like YouTube, visits to popular social networking sites like Facebook, MySpace, and LinkedIn, and even financial transactions.

Of course, application performance depends on a few major factors, such as the phone's processor and RAM, but the behavior of the wireless network is another important factor in delivering continuous online accessibility with ease. This paper is intended to explain the network factors that impact application performance, along with strategies that can be followed to mitigate them.

Data vulnerability through the most common types of network traffic

The most common types of network traffic outlined below can lead to slow response times and lost productivity.

- Social networking, online gaming, broadband media: Social networking applications are increasingly used for sharing text, photos, personal profiles, videos, and more, and hence are a major factor in data explosion.
- Deliberately misleading applications: An application running on a client/server architecture requires connectivity each time, which slows the network and causes the UI to become less responsive.

As the number of smart mobile users has grown drastically over the past 2-3 years, and with data connectivity over either Wi-Fi or 2G/3G networks, there is more chance of a data explosion, which can drag application performance down if not tested well.

Application (client/server) performance through data delivery

- Data delivery to the device: This process involves a request from the device to the server, response generation at the server, and sending the response back to the device. In receiving the request and sending the response back, a few typical factors are involved: load on the server, type of network connection, and load on the network connection. These factors can be tested well using emulators, which are easily available for free.
- Data display at the source device: This process involves the display of the data received from the server. Various factors determine whether the data is displayed efficiently, such as the type of OS used and the configuration of the device. This is out of scope and will not be discussed in this document.

The Network Factors

The network on which the application is used impacts performance tremendously, especially for mobile and cloud. These factors are as follows:

- Inconsistent bandwidth: Multiple users are connected to the same network at the same time, reducing the available bandwidth of the network.
- High jitter and increased latency:



As most end-user applications used over the Internet are based on TCP, latency is crucial to the broadband experience. TCP requires the recipient of a packet to acknowledge its receipt. If the sender does not receive an acknowledgment within a certain amount of time (in milliseconds), TCP assumes that the connection is congested and slows down the rate at which it sends packets. Short data transfers (also called Web "mice") suffer most from reduced TCP performance; a crude round-trip probe is sketched after this list.

- Packet loss: As more requests and responses are processed over the network at the same time, congestion increases, opening up the possibility of packet loss. Most applications use TCP/IP, so packet loss is taken care of, but until the data is received by the application, the UI may freeze.
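Not from the article: the short Groovy sketch below times one HTTPS round trip from a JVM client, a crude way to see latency directly. The endpoint URL and timeout values are illustrative placeholders.

Groovy
// Minimal round-trip latency probe (sketch only).
def url = new URL('https://example.com/api/ping')  // placeholder endpoint
def start = System.nanoTime()
def conn = url.openConnection()
conn.connectTimeout = 5000                // fail fast on a dead network
conn.readTimeout = 5000
conn.inputStream.withStream { it.bytes }  // read the full response body
def rttMillis = (System.nanoTime() - start) / 1000000
println "Round trip took ${rttMillis} ms"
// Short transfers ("Web mice") pay roughly this cost on every request,
// so per-connection latency shows up directly in perceived responsiveness.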

Impact due to network inconsistency

The various impacts of network inconsistency are discussed below.

- UI responsiveness: During peaks, when latency is increased, the user interface freezes and becomes less responsive, frustrating users.
- Data synchronization: If there is a network failure, it is difficult to synchronize any transaction initiated from the device over the failed network.
- Functionality issues: Network congestion, packet loss, and slow responses may all cause functionality issues in an application.

Mitigating the impact of network variability

The following kinds of testing can be performed to improve application performance:

- Recreate the real client/server communication conditions: This involves identifying and recreating the conditions expected on the production servers. There are multiple conditions that need to be recreated; they are listed in the tables below.
- Recreate the real network conditions: This involves identifying and recreating the conditions on the network to be targeted, while gauging the application's performance on the target device.

After recreating the real-world conditions, the mobile app developer should measure the performance of the application for every delivery mode, along with the other hardware and software components involved. It is important to measure the performance of every component, as that provides an end-to-end view of the application's performance, not just an isolated one-device perspective.

Condition | Recreated by | Available solution
Workload type, in terms of where the requests are generated from. Example: Web application using native iPhone and Android applications | Create the scripts specific to a workload type | Any load testing solution (HP LoadRunner, Soasta CloudTest, etc.)
Load on server, in terms of number of users from different workload types. Example: 200 from the Web application, 50 each from the iPhone and Android native apps | Create a load testing scenario specific to the load and the associated scripts for a workload type | Any load testing solution (HP LoadRunner, Soasta CloudTest, etc.)
Load on server, in terms of number of users from different geographic locations. Example: 50% from the US, 50% from the rest of the world | Generate load from load generators at the identified locations | Cloud-based load testing solution (Soasta CloudTest, Gomez Web Load Testing, Keynote Web Load Testing)

Condition | Recreated by | Available solution
Network type and quality. Example: 3G/2G/WiFi, average/best/worst | 1. Emulation; 2. Devices in real networks of mobile carriers or ISPs | 1. Infosys Windtunnel, Shunra; 2. Keynote DeviceAnywhere, Gomez Synthetic Monitoring
Network load. Example: 50% bandwidth utilized | Possible only by emulation | Any network emulation solution (Infosys Windtunnel, Shunra)
Network by geography. Example: AT&T 3G in New York, Airtel 3G in Bangalore | 1. Emulation; 2. Devices in real networks of mobile carriers or ISPs | 1. Infosys Windtunnel, Shunra; 2. Keynote DeviceAnywhere, Gomez Synthetic Monitoring

Talkatone

Ever wondered if you could get a free US number so that friends and family can call you easily? Talkatone is an option that lets you pick a number of your choice and lets you text and receive calls freely. This service stands out in the crowd of services such as Viber, Line, WeChat, and Skype for its sound quality and ease of use. They charge a nominal Rs. 1 per minute for calling once your free monthly credit is over. Google Voice is an option if you are willing to pay a one-time charge for a US number, and it provides free unlimited US calls; it is a pity, though, that Google Hangouts on Android does not have the voice integration, whereas its iOS counterpart has full integration of Voice.


Automation of Web Service Secure Token Communication by Using Advanced Features of SoapUI

Anup Kumar R
QA Specialist currently working on NetIQ Access Manager. He has been with Novell for 12+ years and has QA experience in various access management products and the TCP/IP stack. He holds a master's degree in Software Systems.

Introduction

Something that drove me really crazy over my entire testing career is testing Web service secure communication. With the number of browser plug-ins, extensions, and stand-alone tools available, Web application testing has become far easier compared to when I started. Testing secure Web services can be difficult due to the lack of Web service testing tools available in the industry, but I would attribute it largely to the lack of articles and solutions on the various challenges in Web service testing. This article is an attempt to make the testing community aware of the different possibilities in testing Web services and their security with the help of SoapUI. SoapUI and SOAPSonar are two of the leading tools available for testing Web services. SoapUI has a free version, which made me select it over SOAPSonar.

In the 3rd edition of the IDC Tech Journal, there was an interesting article titled "Web Services Testing Using SoapUI" by Girish Mutt. It was a nice introduction to SoapUI and its various features, which lets me skip that material here. In this article, we take a closer look at some of the specific advanced features and other facilities available in SoapUI that can help you automate the complex requirements of Web service testing and its secure communications. The secure communication covers WS-Security and an extension to it called WS-Trust. This article will not explain WS-Security and its related standards; for more information about WS-Security and WS-Trust, see http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss and http://docs.oasis-open.org/ws-sx/ws-trust/v1.4/ws-trust.html respectively.

SoapUI Test Design

Let us start with some basics. SoapUI can create SOAP-based, REST-based, or generic projects. We will restrict this article to SOAP-based projects.

SoapUI Project

The first step is to create a SOAP project by importing the WSDL of the Web service.

During the creation of the project, SoapUI can automatically create the SOAP requests defined in the WSDL. For normal Web services that do not have security requirements, these automatically created SOAP requests are sufficient for testing. When WS-Security has to be applied, some additional modification is needed. However, these requests give you the details of the endpoints to which requests need to be sent; actual test requests with security headers can be built on top of these basic SOAP requests.

Typically, a test suite is created under the project. A project can have multiple test suites. A test suite is nothing but a logical grouping of multiple test cases, and a test case is made up of several steps, called TestSteps. The following is an example structure of a SoapUI project:

- Project
  - Interface
  - TestSuites
    - TestCases
      - TestSteps
      - LoadTests
  - MockServices




In the structure, you can see the interface, load tests, and mock services. Load tests are touched upon very minimally later in the article; the other two are out of scope for this article.

Let us move into the details of designing a test case now.

SoapUI Test Case

SoapUI has many interesting features to help automate a Web service test case. As mentioned earlier, a test case is made up of several test steps, and it is in the design of test steps that you see the real strengths of SoapUI. A typical test case has multiple steps. The four basic types of test steps are:

a. Property steps: Store different properties that can be referred to anywhere in the project.
b. Test request: An actual request to the server. The response from the server for each test request can be intercepted and later manipulated by SoapUI. It is the scope for customization available for this manipulation that makes the tool so useful.
c. Property transfer: Helps move properties between different steps, creating continuity between them. It transfers information from one request-response to another request to maintain continuity across multiple requests.
d. Assertions: Assertions can be made in test requests or used as separate test steps. They are used for validating the response. Built-in checks verify whether the response is a SOAP response, whether it is schema compliant, or whether it is not a SOAP fault. However, the real value-add is using assertions with an XPath match, which validates a particular element in the response. An XPath match can also be used with a property transfer, where a certain element from the response is transferred to subsequent requests. (A script-assertion sketch follows this list.)
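The XPath-driven validation described in item d can also be written as a script assertion. Below is a sketch of one, not taken from the article; the namespace, XPath expression, and expected issuer value are illustrative placeholders.

Groovy
// Sketch of a SoapUI script assertion validating one element of the response.
import com.eviware.soapui.support.XmlHolder

def holder = new XmlHolder(messageExchange.responseContentAsXml)
holder.namespaces['saml'] = 'urn:oasis:names:tc:SAML:1.0:assertion'
def issuer = holder.getNodeValue('//saml:Assertion/@Issuer')
assert issuer == 'https://sts.example.com'   // placeholder expected value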

With the basics covered, let us move to the advanced capabilities of SoapUI.

WS-Security

SoapUI manages the WS-Security related configuration at the project level, which allows it to be used in any part of the project. There are four different aspects of the WS-Security configuration:

- Outgoing WS-Security configurations: Detail what should be applied to outgoing requests. They can be used for encryption, signing, and adding SAML, timestamp, and username headers.
- Incoming WS-Security configurations: Detail what should be applied to incoming responses. They are typically used for decrypting and verifying the signature of incoming messages.
- Keystores: Detail the keystores used for encryption, decryption, and signing.
- Truststores: Detail the truststores used for signature verification.

Scripting

The most important capability of SoapUI, in my opinion, is its scripting capabilities and the script library. SoapUI provides extensive options for scripting, using either Groovy (http://groovy.codehaus.org/) or JavaScript. This article will touch upon only scripting with Groovy.

Scripting can be used at various places in SoapUI:

- A test step itself can be a Groovy script, which allows your tests to perform virtually any desired functionality. Groovy, as a scripting language, is very strong by itself at manipulating the XML documents in SOAP requests and responses.
- For initializing or cleaning up different variables before and after your tests.
- For creating arbitrary assertions with the script assertion.

Java

Groovy has a rich set of built-in libraries for XML manipulation. However, if you have difficulty finding the right library in Groovy, you can write the functionality in Java, and those functions can be called from within the Groovy script. Here is how you do it:

- Compile the Java program (.java file) and package it in a JAR file.
- Put the JAR in soapui\bin\ext.
- Restart SoapUI.

Now a Groovy step in SoapUI can access this library by importing the package as follows:

import ExamplePackage.*
log.info "Package Imported"

You can also load a JAR at runtime with:

this.getClass().classLoader.rootLoader.addURL(new File("file.jar").toURL())
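To make the pattern concrete, here is a sketch of a hypothetical Java helper and its use from a Groovy step; the class name, method signature, and file paths are placeholders, not part of the article's actual test suite.

Groovy
// Hypothetical helper, packaged in a JAR under soapui\bin\ext.
// The JAR would contain a plain Java class along these lines:
//   package ExamplePackage;
//   public class SignUtil {
//       public static String sign(String xml, String keystorePath) { /* ... */ }
//   }

// Inside a SoapUI Groovy step, the helper can then be called directly:
import ExamplePackage.SignUtil

def unsignedXml = new File('C:/temp/request-unsigned.xml').text
def signedXml = SignUtil.sign(unsignedXml, 'C:/temp/test.jks')  // placeholder paths
new File('C:/temp/request-signed.xml').text = signedXml
log.info 'Signed request written'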

WS-Trust

Though WS-Trust is not yet supported by SoapUI, using the advanced features mentioned above, we could automate 100% of our WS-Trust test cases with SoapUI. In our WS-Trust test cases, SoapUI was used to simulate the Web service client.

The Web services in a WS-Trust environment are protected, and only Web service requests carrying a valid token are given access. So, the Web service client has to first request a token from a Secure Token Service (STS), and then present that token to access the actual Web service. A single test case in this scenario needs communication with the following two SOAP endpoints:

- STS endpoint
- Web service endpoint

Proof of Concept

In this section, we look in detail at one of the test cases in the SoapUI test automation for WS-Trust testing.

The test case selected for discussion is the SAML 1.1 Issuance Token Request. This test case has 12 steps altogether; the image icon against each test step identifies what type of test step it is.

1. RST – Sample (1): This is a property step. The sample request XML file is stored in a property.
2. Transfer RST – Sample to Issue SOAP Request: This is a property transfer step wherein the sample XML stored in the property is transferred to the actual SOAP request.


3. Issue: This is the actual SOAP request to the STS to obtain a SAML token. This step also uses the outgoing WS-Security configuration to add security headers, such as the username/password header and the timestamp header, to the request.
4. RSTR – Write to file: This is a Groovy step wherein the response of the previous step is captured and written to a file.
5. add-timestamp: Another Groovy step that adds the current timestamp to the request. This script uses Groovy capabilities to create current timestamps in the required format (steps 4 and 5 are sketched after this list).
6. WSP-Request-Sample(3): Another property step to store the sample request XML file. This property step also stores the starttimestamp and endtimestamp values created in the previous Groovy step.
7. Transfer request xml and timestamp to Request-before-signing: Another property transfer step where the stored request XML and timestamp values are transferred to the request, which needs to be signed thereafter.
8. Request-before-signing(1): Another property step to store the intermediate unsigned XML document.
9. WSP – Request – Write to file: Another Groovy script to write the intermediate unsigned XML document to a file on the hard disk.
10. Signing the WSP-Request using Java Code: Another Groovy script, which uses functions from the imported Java package. The entire signing of the XML document is done using core Java code called from within Groovy. The signed XML document is written to the file system again within the same Groovy script.
11. create the wsp request from the signed XML file: Another Groovy script that prepares the actual request to the Web service.
12. WSP Request: This is the final SOAP request to the Web service. This step has three assertions as part of it: assertion 1 validates that the response is a SOAP response, assertion 2 validates that the response is not a SOAP fault, and assertion 3 validates the correctness of a particular element in the response.
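For illustration, here is what steps 4 and 5 might look like as Groovy script steps. This is a sketch only: the step name, file path, property names, and timestamp format are assumptions, not taken from the actual test suite.

Groovy
// Step 4 (sketch): capture the previous step's response and write it to a file.
def rstr = context.expand('${Issue#Response}')
new File('C:/temp/RSTR.xml').text = rstr

// Step 5 (sketch): create WS-Security style timestamps in UTC and store them
// as test case properties for later property transfer steps.
def fmt = new java.text.SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'")
fmt.timeZone = TimeZone.getTimeZone('UTC')
def now = new Date()
testRunner.testCase.setPropertyValue('starttimestamp', fmt.format(now))
testRunner.testCase.setPropertyValue('endtimestamp', fmt.format(now + 1)) // one day later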

Reporting

The test suite for WS-Trust automation had around 100 test cases. SoapUI reporting gives both a live view of test execution and a test execution summary.

Conclusion

The craziness that plagued me at the beginning of testing Web services had changed into a satisfying affair by the end of this automation. With its nice UI and extended capabilities, SoapUI definitely makes life easier in the Web service testing arena. In addition to the free version of SoapUI, there is a Pro version as well, which adds a few more super cool features. The Groovy capabilities, and the option to use Java code within Groovy, take this tool to the next level, where all types of complex testing challenges can be solved with ease.

Bits & Bytes

Sometimes, if not always, it is good to break the paradigm.

1. Being Ad Hoc: In every sprint and test phase, we can probably allocate the first few hours or days (based on the length of the phase), about 10% of the time, to ad-hoc testing. Good test cases are those which find bugs. Completion is important, and so, while you log the defects and get them fixed, you have the rest of the time to work on a whole lot of other tests.

2. List Your Defects First: Traditionally, we are all used to writing test cases, executing them, and filing defects against the observations/failures. Why not reverse this process once in a while? How about writing the defects first in a spreadsheet, then executing actions and working towards finding them in the product, and building up the test suite towards achieving that goal? In short, it is defect-driven testing.

3. UI and Special Characters: Treat all special characters the special way at least once. UI validation, search, user login, etc., exercised with all the special characters on the keyboard, may just yield the right use cases.

4. Personified Tests: Imagine what an administrator would like to achieve. An administrator may have questions like these: a. Wants to deploy an update, look out for failures, debug, and fix them. How easy is it to do that? b. Has triggered a quick task; has it been received by all devices? If not, why not? Do you know who triggered it (audit)? c. Searches for all events (change/agent) which have a given target or initiator. d. Didn't understand this feature. Is there correct information in the context-sensitive help, and is it easy to correlate it to what is required?

5. Big Net with Small Holes: We need to be agile, adept, and creative. We need to know multiple components to understand the big picture. Switch components once in a while. Test what is not assigned. Cover areas which you don't know. Seek information. Question the behavior. You might actually be touching upon the right areas/issues. We all need to have bigger nets with smaller holes to be able to garner a whole bunch of fishes of all sizes :)

Breaking the Paradigm - Strategies to Test — Vijay Kumar Kuchakuri


Reviews: Mobile Apps (Android) – Karthik

Dormi (Monitors Babies)
Developer: Jay Simon (co-founder of Sleekbit)

Are you a first-time parent facing middle-of-the-night cries for attention from your child? Ever heard of Dormi? It is the app you should have in this mobile age.

Prerequisites

- Two Android devices (smartphones or tablets, anything running Android version 2.3 and above)
- An Internet connection

Operations

1. Connect to the Internet (Wi-Fi, 3G/4G network, etc.).
2. Download and install the app on both Android devices.
3. From the first device, confirm that the other device is auto-discovered, and simply generate a 5-digit password to pair with the second device.
4. Make sure one device acts as the child device and the other as the parent device.
5. Place the child device in your child's bedroom and keep the parent device with yourself.
6. Monitor the child's sleeping patterns, noises, disturbances in sleep, etc., even when you are away from the child, by sliding the circle button to the play button.

Features

- If you want to talk to your child, there is a push-to-talk button on the parent device (walkie-talkie style).
- The parent's device can be used to listen to noises and sounds detected at the child's location.
- Notifications are received on the parent's device when connectivity is lost between the parent and child devices.
- With this app, you can even enjoy a quick holiday or a stay away from home.

Merits

- The look and operational feel is smooth.
- The UI design is great.
- Simply slide to begin the monitoring.
- It is absolutely free for 4 hours a month, and longer using credit points (for the trial version); the paid version costs $7.
- Available from the Google Play store.

Rating based on my experience: 4.5/5



Enterprise Usability - Beyond the Button

Harippriya Sivapatham
A specialist in the Access Manager team. She has 12 years of experience developing Java-based UI applications end to end. Appreciated for initiatives to improve usability and for simplifying the Access Manager Administration Console.

Enterprise products are complex, powerful tools targeted at skilled users. The usability challenges of enterprise software go beyond building an intuitive GUI. For example, usability can be improved by making the configuration screens intuitive and easy to use; extending this, and creating a wizard from these screens to simplify frequently used functionality, takes usability to the next level. Similarly, simplified installation, useful alerts, better status feedback, and so on result in improved usability. Enterprise products should focus on user-centered design to improve the user experience as a whole. Apple Inc. proclaimed at its developer conference that they don't build products; instead they build user experiences. This trend is catching on. This article focuses on usability in the context of enterprise software.

Value of Usability

Let us start with the big picture of why usability matters for enterprise products, where the feature set is king. The short answer is that usability has real economic consequences.

Total cost of ownership (TCO) of a product is a critical metric for organizations. Poor usability results in decreased productivity, is error-prone, and requires excessive training. This increases overhead costs and undermines the business benefits of the product.

Usable products translate into real cost savings for organizations. Happy customers are repeat customers, which in turn helps software vendors by increasing revenue from license renewals. Improved usability is a win-win in terms of revenue for both the vendor and the consumer.

User Centered Design

User centered design is a philosophy that places the user at the center of the development process right from the beginning. The three main activities for incorporating usability into development are:

Understand Users

- Engineers are not users, and the product manager is not your user! Make no assumptions about users' goals and priorities; gather users' actual needs.
- Create your own network of people who regularly interact with customers, such as Technical Support, Consultants, and Pre-sales Engineers. They can provide information on the issues users are facing and how the features will actually be used to solve those issues.

User information will help you add the right flexibility and make the right assumptions during design.

Design

Usability issues are usually discovered when the feature is ready to be tried out, and it may be too late to make major changes at that point. Simply adding a vague "Improve User Experience" item to the requirements list will not help: software architects need specific qualities to design for. This problem is solved by Usability Supporting Architectural Patterns (USAPs).

USAPs include a list of usability properties and patterns that have been known to improve the user experience. These replace



vague requirements with quantifiable ones that the architects can prioritize and support.

Usability Patterns

Usability patterns are solutions for incorporating usability principles into a design. Unlike software design patterns, they do not provide an implementation mechanism; they are merely concepts to use.

Some of the patterns are:

- Wizards: for frequently performed tasks and for complex tasks
- Error correction:
  - Undo: always provide a way to revert errors
  - Cancel: provide ways to reliably cancel incorrect actions or long-running actions
  - Human-readable error messages
- Error prevention:
  - User confirmation
  - Data validation
- Feedback:
  - Progress and status indication
  - Alerts
- Natural mapping: an obvious interface for the function it performs
- Multitasking and delegation

Usability Evaluation

- Any testing is better than no testing. Perform usability testing using anyone available.
- When fixing problems, try to do the least you can do. Tweak, but don't redesign.
- The 80-20 rule applies: fixing the top 20% of errors will most likely help 80% of the users.

Conclusion

Market analysts say that the demand for usable enterprise software is snowballing. Moreover, enterprise software companies can no longer sell software on the strength of the feature set alone. Some of the world's leading corporations are using the standard developed by the US National Institute of Standards and Technology (NIST) to make usability a key factor in their choice of software. NIST has developed the Common Industry Format (CIF) standard for usability test reports, which allows buyers to compare product usability. Vendors who adapt fastest to the changing market will gain ground over their competitors.

References

1. Steve Krug. Rocket Surgery Made Easy. Pearson Education Inc., 2010.
2. Eelke Folmer, Jan Bosch. Usability Patterns in Software Architecture. University of Groningen, the Netherlands.
3. ComputerWeekly.com. Software usability tests can save you millions.

Cam Scanner — Easily manage small paper documents

CamScanner is an intelligent document scanning and management app with sync features. The app performs as a mobile scanner with very good editing features. It has gained popularity over a period of time, with over a million downloads on the Android platform and about 6 million users around the world. Compared to plain camera shots of documents, CamScanner enhances the text and graphics of scanned documents to produce a clear and sharp result. The app is especially useful for small documents like business cards and payment receipts. It easily identifies the layout of documents on any flat surface and gives you an option to edit them. It is like carrying a small scanner in your pocket. There is also an option for optical character recognition (OCR), but like most OCR systems, it is just about OK. Scanned documents can be exported as PDF or JPEG files or merged with other PDF documents.

Most of the time, I use it to back up important documents and save them to Google Drive. This way, all my important documents are safe and readily accessible. The best part is that the free version of the app provides all the basic features required for normal usage. The app is available on all the major platforms (Android/iOS/Windows Mobile).


A Structured Way of Executing Exploratory Testing

Harish Mohan
Has 9 years of experience in the industry and is currently working on the PlateSpin product suite. Holds a master's degree from BITS and has extensive experience in the virtualization domain. Apart from work, he pursues his photography interests and is an avid traveler.

Exploratory Testing (ET) is a creative way of testing. The most important difference between exploratory testing and scripted testing is that scripted testing is an organized and structured way of conducting and tracking test execution, using pre-populated test cases and reporting mechanisms. ET, by contrast, is widely considered an ad-hoc way of conducting test execution, with no prior test cases and definitely no tracking mechanisms.

The benefits of ET are high because it takes the monotony out of the testing job and gives testers space to be more creative with their testing. It gives them a chance to do things differently, which can expose weak areas of the product. But since these tests are not scripted, this type of testing comes with the caveat of being ad-hoc and difficult to quantify. Does it actually have that problem? Well...

The reason to track any engineering process is to make sure that the effort is spent in the right direction and in the correct amount, so that the output is maximized. By its very nature, tracking exploratory testing is a bit challenging, but it is far from impossible. If followed properly, exploratory testing is a highly manageable and creative way of testing.

One way of structuring exploratory testing is by time-boxing it with a proper mission and making it reviewable by employing session-based testing.

What is session-based testing? It is a testing method where exploratory testing is done in small, time-boxed, manageable chunks with quantifiable results.

What is a Session?

A session is a reviewable test effort done in a pre-decided format. Each session is associated with a mission and followed by a summary, called the Session Summary. This type of testing is all about the session; it is not about test cases or a bug report.

The Mission: The mission of a session can be a generic area of the product to be tested, or specific functionality of the product. It can also be a specific test configuration (for example, testing a product with a remote DB), a specific test strategy such as performance testing, or configuration-related testing. The agenda of the session is purely based on the tester and the mission. There should be no scripted test cases to follow.

Length of a Session: A session can be 90-120 minutes. It should be short enough to be interesting and long enough to achieve the mission of the session.

The Session Report: At the end of every session, testers brief the test managers about their sessions. The session report should consist of the following sections:

1. Mission statement (areas of the product, test configuration, test strategies, system-configuration related)
2. Tester name
3. Date and time
4. Bugs filed
5. Issues or impediments faced
6. Notes

The report is created to capture the details of the session and it needs to be preserved for all sessions so that the efforts are tracked



and also to make sure that no duplicate work happens. The report template should be created before the session starts and saved in a central location. The tester enters the details of the session towards the end of the session. However, the mission of each session should be decided for a longer period, so that testers have the choice to pick their areas of interest for each session.

After every session, the report is reviewed by the test manager in a brief review session.

It is important to preserve these session reports for future reference, so that if another tester picks up the same theme during a later session, duplication is avoided.

The Session Review: The test manager runs this review meeting, and the agenda of the meeting is to understand the following:

1. What was tested during the session?
2. Were there any bugs, or is there a difference in expectation?
3. Were there any impediments that affected productivity?
4. Was everything that was called out in the mission statement completed? Is there something pending? If yes, where and how will it be covered?
5. How do you feel about the area that you tested? What can be done to make you feel better about it?
6. Was any opportunity testing done (testing outside the scope)? What was the result?

The test manager should try to probe and guide the tester working on a particular area of the product, such that almost all possible ways related to that area are covered. If the manager finds that a particular tester is no longer able to churn out defects in a particular area/module of the product, then testing of that area/module should be halted for some time, or a different tester assigned to it. In the latter case, the manager should brief the new tester and make sure that none of the already completed testing is repeated; that is where the mission statement and the review process become very important.

Over a period of time, when a particular theme has been tested by most of the testers and the confidence of each one of them is considerably high, it indicates that the particular area is robust, and further sessions on that area can be terminated. The stakeholders can then rely on automation or regular sanity checks of that feature.

Session Testing with Peers: In these sessions, a common area is assigned to two testers, who test the area independently during a session with a spirit of competition, to churn out more issues in that area. However, to make sure that the competition stays in the right spirit, only one session report should be created. During the review process, both testers must review the report together with the manager, so that individual performance does not become an obstacle. This approach brings the advantage of having more eyes looking at a problematic area at the same time. However, it needs to involve people with the right mindset, who understand that the exercise is to benefit the product and uncover issues, not to prove individual competency levels.

Session-based testing in the Sprint Model: In an ideal sprint team, the session report can be discussed by the individual tester within the standup meeting itself. The tester can be questioned or asked for clarifications on some tests in the standup by developers and others. There is a high probability that those questions or clarifications can trigger more ideas for the testers. If the test lead or manager thinks differently, those details can be shared with the individual testers. Alternatively, the session review can happen with all of the QE team members in the sprint, including the manager, instead of just with the test manager.

For projects that have few or no scripted test cases: For projects that have the flexibility to base their test strategies completely on exploratory testing (no scripted test cases), this kind of session-based testing can be followed right from the start, which can bring a lot of clarity to the entire process. In these projects, the sessions can be planned on a per-day basis, maybe 2-3 sessions per day.

In these types of projects, it is important to have one more section in the session report that describes the time taken for a particular mission, capturing time spent on designing a test plan and investigating a problem rather than actually testing. In such cases, the effort has to be captured and reflected appropriately. In other words, with this type of testing, judging a tester by the number of bugs logged would not be right.

However, for projects that have a mix of scripted and exploratory testing and follow sprints, session-based testing can be planned at the start of every sprint. In this case, the sessions can be on a per-week basis, maybe 2-3 sessions per week. In this scenario, the testers or test managers should make sure that whatever new scenarios have been covered in these sessions are captured into their test case database or mind map, so that the next tester who does a session on the same theme can look for different issues and avoid duplication.

The tester must be assigned a particular area of the product to carry out multiple sessions in that area. The number of sessions required for a particular area has to be reviewed with the test manager, and this must be revised periodically.

For projects that rely more on scripted testing and have very little time for exploratory testing: There could be projects that rely heavily on scripted testing, where it is essential to run the test cases very meticulously, and the testers do not get enough time to do much exploratory testing. For such projects, session-based testing can be planned once in the release cycle, perhaps by dedicating 1 or 2 sprints to this testing alone. Also, the review process must be followed very meticulously early in the cycle to ensure that the test sessions are successful.


Responsive Design

Nirmal Balasubramanian
User eXperience Specialist, working with the Sentinel UI team since April 2013. Certified Usability Analyst with over 12 years of experience, well-versed in all aspects of Web site and web app UI design and front-end development, with an emphasis on user-centered design. I love typography and photography. I strongly believe in what Steve Jobs said: "Design is not just what it looks like and feels like. Design is how it works."

Responsive Web design is the approach that design and development should respond to the user's behavior and environment, based on screen size, platform, and orientation. The practice consists of a mix of flexible grids and layouts, flexible images, and an intelligent use of CSS media queries. As the user switches from their laptop to an iPad, the Web site should automatically adapt to the resolution, image size, and scripting abilities. In other words, the Web site should have the technology to automatically respond to the user's preferences. This eliminates the need for a separate design and development phase for each new gadget on the market.

The idea of Responsive Web Design, a term coined by Ethan Marcotte, is that our Web sites should adapt their layout and design to fit any device that chooses to display it.

Why is it so important?

The majority of media consumption is screen-based. The spectrum of screen sizes and resolutions is widening every day, and creating a different version of a Web site that targets each individual device is not a practical way forward. This is the problem that responsive web design addresses head-on.

The advantages of using Responsive Web Design are as follows:

- Your Web site looks great everywhere (on all device displays).
- You need not zoom on smaller devices to read the content.
- Consistent, tailored user experience.
- All pages and functionality are available on every device.

Google recommends webmasters follow the industry best practice of using responsive web design, namely serving the same HTML for all devices and using only CSS Media Queries to decide the rendering on each device.

Source: Google Developers

Anatomy of a Responsive Web Site

A responsive Web site targets the width of the web browser that each user is using, to determine how much space is available and how the web content should be displayed.

Fluid/Flexible Grids

Make your layout flexible. Flexible grids use columns to organize content, and relative rather than fixed widths to adapt to the viewport size.

A fluid layout is the best way to be ready for any kind of screen size and/or orientation. Combined with the right media queries, you can adapt to any possible device.

Essentially, this means that your grid, which was traditionally measured in pixels, should now be thought of in terms of percent of the total page width. The actual calculated width of each column in a responsive web site changes every time the browser window changes size, and cannot be guaranteed to be the same across different devices. So, you must use a grid when designing for Responsive Web Design. It is a necessity, not a nicety: you cannot create a responsive Web site without a grid-based design; it simply wouldn't work.

Flexible Images

One major problem that needs to be solved with responsive Web design is working with images. There are a number of techniques to resize images proportionately, and many are easily done. The most popular option is to use CSS's max-width property for an easy fix:

img { max-width: 100%; }

As long as no other width-based image styles override this rule, every image will load at its original size, unless the viewing area becomes narrower than the image's original width. The maximum width of the image is set to 100% of the screen or browser width, so when that 100% becomes narrower, so does the image. The idea behind fluid images is that you deliver images at the maximum size they will be used at. You don't declare the height and width in your code; instead you let the browser resize the images as needed, while using CSS to guide their relative size. It is a great and simple technique for resizing images beautifully. Note that max-width is not supported in IE, but a good use of width: 100% solves the problem neatly in an IE-specific style sheet.

Media Queries and Viewports

Media queries are an excellent way to deliver different styles to different devices, providing the best experience for each type of user.



A part of the CSS3 specification, media queries expand the role of the media attribute that controls how your styles are applied. For example, it has been common practice for years to use a separate style sheet for printing web pages by specifying media="print".

However, media queries take this idea to the next level by allowing designers to target styles based on a number of device properties, such as screen width, orientation, and so on.

For responsive design, we need to focus on the width condi-tions depending on the client’s current width, and load an alterna-tive style sheet or add specific styles.

Here are some ways to do this.

Assign different stylesheets depending on browser window size:

HTML
<link rel="stylesheet" media="screen and (min-device-width: 800px)" href="800.css" />
<link rel="stylesheet" media="screen and (min-width: 701px) and (max-width: 900px)" href="css/medium.css" />

Use media queries within a single stylesheet:

CSS
@media only screen and (min-device-width: 320px) and (max-device-width: 480px) { ... }
@media only screen and (min-device-width: 768px) and (max-device-width: 1024px) { ... }
@media only screen and (min-width: 1224px) { ... }

Viewports and Breakpoints

Common resolutions can be sorted into 6 major breakpoints; you can work with them this way.

Major:

- Target the first generation of smartphones in portrait mode with a <480px rule.
- Use a <768px condition for high-end smartphones and portrait iPads.
- Everything bigger (big tablets and desktops) goes in a >768px triggered stylesheet.

Nice to have:

- Add a <320px stylesheet for low resolutions.
- Target landscape iPads and big tablets precisely with a >768px and <1024px rule.
- Use a wide design for desktops with a >1024px stylesheet.

Responsive Frameworks

A framework is a package made up of a structure of files and folders of standardized code (HTML, CSS, JS documents, etc.) that can be used as a basis to start building a site and to support the development of a Web site.

There are plenty of responsive frameworks that come fully packed with everything you need for your next responsive design project. Bootstrap and Foundation are the leaders.

Bootstrap

Bootstrap has to be the most widely used framework. It is built with the most comprehensive list of features and can be quickly customized for each individual project. It is a sleek, intuitive, and powerful front-end framework for faster and easier web development. Bootstrap utilizes LESS CSS, is compiled via Node, and is managed through GitHub, helping you create interesting items on the web.

Foundation

An advanced responsive front-end framework. Foundation is built with Sass, a powerful CSS preprocessor, which allows Foundation itself to be developed much more quickly, and gives you new tools to quickly customize and build on top of Foundation.

Evernote

- Are you losing track of your notes?
- Do you find it difficult to access all your notes from different devices?
- Are you worried about regular backup of your notes?

Evernote can help you solve all these problems. It is available on mobile OSs like Android, iOS, and Windows Phone, and also on desktop OSs like Windows and Mac OS X. With Evernote you can take notes, organize them the way you want, and access them across all your devices. It is possible to create notebooks with a collection of notes; the notebooks themselves can be nested into larger notebooks. It is also possible to save favorite web pages using a web clipper plug-in in the browser. Searching for any specific note across the whole collection of notebooks is easy, and you can add tags to specific notes to customize searches. Basic word-processing operations, such as using different fonts and adding tables, are also possible. You can even record voice notes and add external file attachments. Sharing any of your notes with others and collaborating is easy. The Evernote free account allows a free 60 MB upload every month. Though a premium account is available for a nominal fee with more storage and more features, the free account should suffice for the needs of most of us. It is a one-stop shop for dealing with all your notes.


Historical Data Reporting and Datomic

Rajiv Kumar
Rajiv is a Specialist working in the IDM team.

Enterprise applications have some level of reporting built into them in order to address compliance needs. A simple reporting infrastructure takes into account only the current state of the system while generating reports. Typically, for such reports, the reporting component directly interacts with the underlying database. This may sufficiently depict the system's current state, but it gives no indication of any evolution that may be happening in the application. As a result, these reports are not intelligent and cannot be subjected to business analysis, which is a critical need for enterprises.

This deficiency can be overcome by using historical data along with the current system data to generate reports. The reports can be more informative, with the possibility of adding trends, and so forth. Also, reports need not be generated only for the current time period, but for any interval, and can be subjected to further analysis on demand. However, the problem lies in accessing the historical information. How do we retrieve historical information? Should we rely on an auditing system and tap into the audit database? The answer is possibly yes, but audit data alone may not be sufficient, as it may not contain all the information needed.

Historical Database

In order to do historical data reporting, we need to store all the changes that happen in the system over time, and thus the need for a historical database. The question that arises is what design principles should govern this database: should it be the same as, or similar to, the enterprise database? Why and how should it be any different? Immutability is one property that should form the basis of designing a database meant to store historical records.

In a typical application database, any change to the data updates the record in the database. Consider the following example of database tables that store employee details.

Typical application database:

empid | first_name | last_name | department
1000  | John       | Smith     | OPS

empid | department
1000  | OPS

Now, if the employee is transferred to a different department, the updated table information will be as given below:

empid | department
1000  | IAM

In this case, when John Smith is transferred from OPS to IAM, his department information is updated, but details about John Smith's previous department no longer exist.

Rajiv Kumar
Rajiv is a Specialist working in the IDM team.

Historical Data Reporting and Datomic

Immutable Database for Historical Reporting

The thumb rule is that data is immutable. Once added to the database, it cannot be modified. This means that there will


be multiple data records for the same user, so we need a way to differentiate the records. One way of doing this is to add time information to the database, so that every change can be tracked over time. With this inclusion, the updated database table information is as follows:

empid   first_name   last_name   time
1000    John         Smith       2014-01-01 20:19:43.888

empid   Department   time
1000    OPS          2014-01-01 20:19:43.888
1000    IAM          2014-03-05 20:19:43.888

Notice the additional column that stores the timestamp for the update. The table data now contains John Smith's timeline: he was in OPS between 2014-01-01 and 2014-03-05, and in the IAM department after that.

So, by using immutable data tables, historical data can be persisted and used to provide an intelligent reporting solution. However, this also means that you now need to manage the time variance in your database. This adds the task of managing the time dependency, especially around queries, where it needs to be handled explicitly. The above is a very simplistic view of the data, and the implementation APIs need to take care of persisting and providing the appropriately timed data. In a typical implementation, you may need additional views to project various metric data, such as recent changes or changes in a time window, and so forth.
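As an illustration, the following is a minimal JDBC sketch of an "as-of" query against the timestamped emp_department table above. The table and column names follow the example; the connection URL and the helper class are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class AsOfQuery {
    // Returns the department an employee belonged to at the given instant:
    // the latest record whose timestamp is not later than 'asOf'.
    public static String departmentAsOf(Connection con, int empid, Timestamp asOf)
            throws Exception {
        String sql = "SELECT department FROM emp_department "
                   + "WHERE empid = ? AND time <= ? "
                   + "ORDER BY time DESC LIMIT 1";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, empid);
            ps.setTimestamp(2, asOf);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:postgresql:hr")) {
            // Before the transfer this prints OPS; for a date after 2014-03-05 it prints IAM.
            System.out.println(departmentAsOf(con, 1000,
                    Timestamp.valueOf("2014-02-01 00:00:00")));
        }
    }
}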

There are added development/maintenance needs when it comes to creating a database to store historical data compared to current-state reporting, but the business benefits achieved greatly surpass the complications. There is, however, a database that greatly eases this effort and can make historical data storage and reporting a much simpler affair.

Datomic

Datomic (http://www.datomic.com) is a database of flexible, time-based facts, supporting queries and joins, with elastic scalability and ACID transactions. The Datomic database stores a collection of immutable facts, or datoms as they are commonly called. Every datom has time built into it. Newer facts supersede older facts. A major distinguisher in Datomic is that reads and writes are handled differently. Queries/reads are directed to the storage service, which provides a read-only view of the database. Any write operation is performed through the transactor, which implies that the transactor never waits on a read operation.

Datomic ships a Peer library, which includes APIs for reads/writes. The Peers perform extensive caching of datoms, and any query is first checked against the cache; it is sent to the storage service only if it cannot be serviced from the cache. Upon any update to the database, the transactor notifies the active Peers with the new facts so that they can add them to their caches. One of the unique properties of Datomic is that it stores not only the datom and its time information, but also the transaction in which the update occurred. Datomic transactions use data structures rather than strings as in SQL. Transactions are first-class objects in Datomic, which can be queried like the datoms. Datomic queries are based on Datalog, which is a deductive query system.

Fig 1 – John Smith's title is that of an Analyst

Datomic has an option of an in-memory or a persisted database. The data access operations are via objects, which makes development/debugging easy. The in-memory database is very useful for testing/debugging purposes. A Clojure-based shell (part of the distribution) can be used to test commands.
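A minimal sketch of such a session, written here against Datomic's Java Peer API rather than the shell, is shown below; the in-memory URI and the use of the :db/doc attribute are illustrative only, and error handling is omitted.

import datomic.Connection;
import datomic.Peer;
import datomic.Util;
import java.util.List;

public class DatomicHello {
    public static void main(String[] args) throws Exception {
        String uri = "datomic:mem://hello";          // in-memory database
        Peer.createDatabase(uri);
        Connection conn = Peer.connect(uri);

        // A transaction is plain data (a list of maps), not a string.
        List tx = Util.list(Util.map(":db/id", Peer.tempid(":db.part/user"),
                                     ":db/doc", "hello world"));
        conn.transact(tx).get();                     // writes go through the transactor

        // A Datalog query against the current value of the database.
        System.out.println(
            Peer.q("[:find ?e :where [?e :db/doc \"hello world\"]]", conn.db()));
    }
}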


The above example shows common commands to interact with the Datomic database. A detailed description of Datomic commands and their usage is beyond the scope of this article and will not be discussed here.

The fact that Datomic transactions can be queried like database entities provides much-needed flexibility, such that any changes that happen in the database between two points in time can be discovered easily. The database can be queried for an entity's state at any point in time to construct a time series.

Datomic for NetIQ Identity Manager’s Historical Data Reporting

NetIQ Identity Manager (IDM) added reporting capabilities in the 4.0 release, and the advanced version of IDM has the ability to provide historical data reporting. Currently, an IDM driver (DCS) taps into the IDM event system for any changes and sends the changes to the data collection service, which versions (times) the data and stores it in a Postgres database. Audit events also feed into the database for more details. Multiple views are created in the database to facilitate the generation of these reports. The generation engine uses complicated queries to retrieve the timed facts and generate the reports from the collected entities.

The use of Datomic as a fact store for changes in the Identity Vault (IDV) simplifies the overall reporting architecture in IDM. A Datomic driver is created to update any changes in the IDV to the Datomic database. The Datomic driver uses the entity id as an association attribute to bind the Datomic entity with the IDM object. This ensures that the changes for an object in the IDV propagate to the appropriate database entity in Datomic. Since the database itself takes care of storing multiple datoms depicting the various values of an entity at a given instant, there is no explicit handling required in the data collection service. For most practical purposes of storing the IDV event data, the data collection service is no longer needed. The queries/views can also be simplified, as there is no need to perform timed computations since the database performs these tasks.

To understand this, let us look at a user John Smith who exists in the Identity Vault. John Smith has been designated as an analyst since he joined the organization (refer Fig 1). Now, John Smith has been promoted to manager and the same is updated in the Identity Vault (refer Fig 2).

Now, when this change of title took place in the Identity Vault, the Datomic driver running in IDM updated this change into Datomic, which is being used as a historical fact store.

So, now if we query the datomic database for the title of John Smith, the database will return the appropriate title values, based on the query timestamp. As in Fig 3, when queried at a timestamp 2014-03-11 05:45, which is a time before the update happened in the Identity Vault (and subsequently in the db), the title is reported as “Analyst”. However, when queried at a timestamp 2014-03-11 05:50 (refer Fig 4), which is a time after the title update happened in the Identity Vault (and subsequently in the db), the title is reported as “Manager”.
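A minimal sketch of such a time-based query with the Peer API follows; the :user/fullName and :user/title attributes are hypothetical stand-ins for the actual IDM schema.

import datomic.Connection;
import datomic.Database;
import datomic.Peer;
import java.util.Date;

public class TitleAsOf {
    // Queries John Smith's title as of the given instant by using a
    // read-only view of the database at that point in time.
    public static Object titleAsOf(Connection conn, Date instant) {
        Database dbAtInstant = conn.db().asOf(instant);
        return Peer.q("[:find ?title :in $ ?name :where "
                    + "[?e :user/fullName ?name] [?e :user/title ?title]]",
                      dbAtInstant, "John Smith");
    }
}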

As can be seen above, any change in the entities can be queried based on time. A time interval, a start time, or a time instant can be used to get an entity's details. This ability to construct a time series from the database content, without writing any infrastructure pieces to handle it, is quite powerful. This makes Datomic a good option for applications that need to perform historical reporting.

Fig 2 – John Smith's title changes to Manager from Analyst

Fig 3 – John Smith's title is shown as that of an Analyst at time 2014-03-11 05:45 (before update time)

Fig 4 – John Smith's title is shown as that of Manager at time 2014-03-11 05:50 (after update time)


Most mobile platforms require an Identity Certificate on the device to identify itself with a Mobile Device Management (MDM) server. With the proliferation of mobile devices and the BYOD scenario becoming commonplace, scalable, automatic issuing of such certificates to every mobile device became a challenge.

This is where the popular and well-tested Simple Certificate Enrolment Protocol (SCEP) got leveraged. Most mobile platforms include a SCEP client. SCEP was originally designed to allow network administrators to easily enrol network devices for certificates in a scalable manner.

An administrator contacts the SCEP server for a one-time SCEP secret. Upon receiving the SCEP secret, the administrator configures a network device, such as a router, to issue a SCEP certificate request to the SCEP server using the SCEP secret. The SCEP server authenticates the requester using the SCEP secret and then interacts with the Certification Authority (CA) server to issue the certificate based on the details provided in the certificate request.

The major drawback in this workflow is that the content of the certificate request is not validated. Thus, anybody who knows the SCEP secret can send a certificate request with the subject name of an elevated service or user, such as Administrator.

This is not an issue in a closed environment like network devices within a corporate setup, because the actors, namely the administrator and the network devices, are trusted and controlled. But the usage of this protocol for MDM exposes a security loophole: a rogue user could use valid corporate user credentials to acquire the SCEP secret, and then request an elevated user's identity certificate.

Therefore, the preferred way to use SCEP in the MDM world is to have the MDM server act as a SCEP proxy, so that it can validate the SCEP request and make sure the certificate requested is for the intended user.

Workflow

• The user on the mobile device opens the MDM enrolment HTTPS webpage, accepts the MDM server certificate, and enters user credentials.

• On successful authentication, the MDM server responds with a randomly generated SCEP secret for the exclusive use of this user on this particular device.

• The mobile device then sends a SCEP certificate request with the user and device details, along with the previously received SCEP secret. The MDM server checks the request and validates that the SCEP secret belongs to the user and device mentioned in the request.

• On successful validation, the MDM server forwards the SCEP certificate request to the SCEP server, which then interacts with the Certification Authority (CA) server to issue the certificate. A sketch of this validation step follows the list.
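The following is a minimal, hypothetical sketch of the proxy's validation step; ScepRequest and forwardToScepServer are illustrative names, not part of any real MDM or SCEP library.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ScepProxy {

    // One-time secrets issued at enrolment, keyed by "user:deviceId".
    private final Map<String, String> issuedSecrets = new ConcurrentHashMap<>();

    public void onEnrolment(String user, String deviceId, String secret) {
        issuedSecrets.put(user + ":" + deviceId, secret);
    }

    // Validate that the secret in the request was issued to exactly this
    // user/device pair before forwarding the request to the SCEP server.
    public boolean validateAndForward(ScepRequest req) {
        String expected = issuedSecrets.remove(req.user() + ":" + req.deviceId()); // one-time use
        if (expected == null || !expected.equals(req.challengePassword())) {
            return false; // reject: secret unknown or bound to another user/device
        }
        if (!req.subjectName().contains(req.user())) {
            return false; // reject: certificate requested for somebody else
        }
        forwardToScepServer(req); // hand off to the real SCEP/CA infrastructure
        return true;
    }

    private void forwardToScepServer(ScepRequest req) { /* omitted */ }

    // Minimal request shape assumed for this sketch.
    public record ScepRequest(String user, String deviceId,
                              String challengePassword, String subjectName) {}
}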

Ashok Kumar
Architect for the Novell ZENworks Mobile Management product, leading the effort to bring out the first release of this ZENworks-integrated product from the ground up.

Short Article

MDM Certificate Management – Using SCEP the right way


Introduction

This article explains the importance of multi-factor authentication. It provides key usability aspects to consider while designing a biometric system for multi-factor authentication, and gives an overview of how a biometric framework is implemented in Windows 7 (and higher versions). You can also see how an application can leverage the Biometric Services Framework to identify users by using a fingerprint scanner.

We live in a world where user security is a concern. In this age, the ability to identify users accurately is very important. Multi-factor authentication is one of the critical elements in establishing such an identity, and biometric authentication is one of the approaches that support multi-factor authentication for identifying interactive users.

Here is a note from NIST on the importance of identity management in the area of governance and security.

“Government and Industry have a common challenge in today’s global society to provide robust identity management tools and identify governance principles and how to deploy these tools intelligently to meet national and international standards - NIST Council Subcommittee on Biometrics and Identity Management.”

Multi-factor authentication is typically based on identifying a user by the following criteria (a minimum of 2 factors):

• What do you have? – E.g., phone, authentication device.
• What do you know? – E.g., PIN, password.
• Who are you? – A biometric feature, e.g., fingerprint, voice, iris scan.

The use of biometrics to confirm personal identity is a key component of this identity management puzzle.

Since a user brings in qualities, attributes and knowledge that are important in the design of biometric systems, usability becomes an important factor in making these systems successful.

Usability Design Considerations

The key aspects to consider while designing biometric systems are:

• Anthropometrics – Metrics that provide data on the physical dimensions of a larger set of the population. E.g.: How does an individual's height affect the quality of a sample? How does a change in age impact the quality of a sample?

• Affordance – Properties of a system that allow the user to interact with that system. E.g.: Does the user understand where to place the finger while recording a fingerprint sample? Does the user understand that a sample has been successfully taken?

• Accessibility – Refers to the ability of all types of users to access and use a biometric system. E.g.: Is it possible for a visually impaired user to use a fingerprint reader using cues?

Since these aspects are user centric, here is a revisit to the definition of usability:

Source: ISO 1347:1999

"Usability – The extent to which products can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."

While designing a biometric system, it is important to consider these goals mentioned in the usability statement:

• Effectiveness – Indicates the ability of a user to provide a successful biometric sample.

• Efficiency – Indicates the most efficient way to provide a sample with the least error rate.

• Satisfaction – The metric about how a user feels after using the system.

• Repeatability – The metric that tells whether a user can easily repeat the usage, with minimal learning.

Sachin Keskar
Sachin is an engineering graduate from the University of Pune with a Masters from BITS Pilani. He works as a Specialist with the DCM group. His interests include bioinformatics and security.

Multi-Factor User Authentication by using Biometric Identification – Windows Biometric Framework (WBF) Case Study


A system that scores high on these goals is one that both solution designers and users will look up to, and one that has a chance of achieving success.

Windows Biometric Framework Case Study

Windows 7 contains a Windows Biometric Subsystem, integrated within the Windows identity and security subsystem. This section describes the Windows Biometric Framework as implemented in Windows 7 and higher versions.

Windows 7 and higher versions add support for biometric services through the Windows Biometric Framework (WBF). The WBF API enables Windows applications to interact with native biometric services to perform the following tasks:

• Enroll, identify and verify users using biometric samples.
• Locate biometric devices and query their capabilities.
• Manage biometric sessions and monitor biometric events.
• Store biometric data and credential mappings securely.

(Figure: high-level architecture of the Windows Biometric Framework and its key components. Diagram source: MSDN.)

The WBF contains the following key components:

• Windows Biometric Service – a privileged service that manages all biometric devices by using the Windows Biometric Driver Interface.

• Windows Biometric Service Providers – vendor-specific services that use biometric systems such as a fingerprint reader, with a specific biometric factor (e.g., the fingerprint biometric method) and a user-defined biometric sub-factor (e.g., the thumb or index finger used for the fingerprint).

• Sensor Pool – a collection of biometric units exposed by the biometric provider under a common management policy. There are 2 pools:

  • System Sensor Pool – used by system services to validate Windows principals (e.g., user authentication used by the LSA to identify logged-in users).

  • Private Sensor Pool – used by applications; allows proprietary authentication methods (e.g., an employee management application, without Windows security credentials).

• Biometric Unit – works in the context of a biometric provider and is a logical unit of components that perform storage, retrieval and processing of biometric samples.

Windows 7 and higher versions allow applications to add customizations to the biometric process (by implementing custom sensor pools) or to use the Biometric Framework as part of the standard Windows authentication system.

This section describes how the biometric system is integrated into the Windows authentication system through a system sensor pool, and the mechanism for client applications to use it.

The Windows authentication system contains the following core components to support interactive user authentication:

• Windows Logon Manager (WinLogon.exe) process – runs in each session instead of session 0.

• LogonUI process – provides a consistent UI for users to log on to Windows.

• Windows Credential Providers (replacing GINA starting with Windows 7) – support multi-factor authentication. These credential providers are used by LogonUI to get the authentication credentials, which can be verified by WinLogon using the LSA.

If biometric authentication (e.g., fingerprint scanning) is available on a system, the associated credential provider is registered under HKLM\Software\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers and is used by LogonUI to store and retrieve credentials. The fingerprint credential provider uses its own mechanism to interact with the sensor device to read fingerprints, compare them and return the credentials for authentication. Fingerprint vendors and OEMs can implement and integrate fingerprint products into the system or make them available externally.

Accessing WBF for the user identification use case:

An application interacts with the fingerprint credential provider using the Windows Biometric Framework. Here is the output of a sample program that takes a biometric sample to identify a registered Windows user: once the user provides the sample, it identifies the user and prints the user name.

Note: The code has not been included due to space constraints and will be available on request.

Conclusion

In this article, you have seen the usability criteria for designing a biometric system, the architecture of WBF, which provides a biometric framework for biometric system providers, and how it integrates with the Windows interactive authentication process. The output of a sample program that uses the WBF to identify Windows users through fingerprint authentication using a system pool has also been provided.


Introduction

I will start with a question. Usually the data on servers gets backed up to tapes, which are stored in a secure place. How do you logically delete these backups without touching them? Think for 2 minutes before going ahead and reading the rest of the article. Here is a coded answer (to decode: replace each character with the next one, a->b, b->c, ...): Dmbqxos adenqd. Cdkdsd jdx. Well, this is the first patent I studied. A nice, simple but very useful innovation, isn't it?

Innovations in the beginning are abstract. In Schrödinger's "Image of Matter" we find these lines: "....the physicist lives in two worlds: He manipulates such tangible objects as coils and vacuum pumps, but his ideas must be appropriate to atomic dimensions. He must enter a world in which mutually contradictory hypotheses, each supported by incontrovertible evidence, must both be accepted ..." In a way, being continuously inventive is just about living in abstract, uncommon thoughts but verifying them with the litmus test of utility.

As an example, suppose we say that trees sneezing cause wind, supporting the statement by analogy with the wind caused by sneezing; how would that look? But when we say the earth's attractive force causes falling, since an attractive force (magnetism, as a better example) causes motion, we accept it now because it is verified. But for the one who discovered it, the struggle to prove it must have been there. I would like to quote what Einstein said: "Common sense is actually nothing more than a deposit of prejudices laid down in the mind prior to the age of eighteen."

Practical aspects

Let me touch upon some practical aspects of innovation in the context of the litmus test mentioned: whatever may be the source of inspiration, we need to spend time to think. However, in a corporate setting, where do we get so much time? Well, there can be many reverse answers, like questioning back how we find time for things like watching a movie. But the problem is genuinely felt. There is no substitute for spending time, and here is where the real innovation starts, I guess. An idea can be as simple as taking a day's leave and spending the entire day at office (where nobody can assign you tasks) or at home working on innovation.

Broadness of the idea is another aspect I would like to touch upon. When the idea is broad enough, the level of acceptance is higher. A better acid test is: can you start a company of your own with this idea? We need to aim for such broader ideas even if they result in apparently small innovations. An essential ingredient for broader ideas is security. There may be many working in software development, but not all are well versed in network security aspects. I would recommend at least implementing an SSL client and server, and subscribing to the OpenSSH mailing list for some time, to fill this requirement. "Cryptography & Network Security" by William Stallings is a nice book that I would recommend.

The mechanism of the idea actually need not be worked out in granular detail. The thumb rule is: when it can be explained to a person in the same field, just that much detail is enough. When we have an idea with sufficient detail, even without an implementation, we can create a white paper or a product presentation, or file a patent. None of these require you to implement your idea, though implementing it may help.

Problem sources

But how do we find the source of the problems? The work we do is definitely a source. In my case, I have also found text books and sites like Google Scholar helpful. Sometimes a problem may appear to be uncrackable, but it might be fun to solve it. An example is factoring a product of primes. Though it is not solved till now in public knowledge, I took it up and it helped me understand and discover properties of prime numbers, which resulted in two patents.

GNVS Sudhakar
Did his MCA from NITK and has been with Novell for the past 13+ years. IC (Inventions Committee) member.

Innovation Sparks - 1

Reinventing innovation

Somebody said the best way to understand a wheel is to reinvent it. I agree with it. Also, in doing so, I realized many truths about numbers. They are difficult to explain, but I would still try to share


my understanding below. I feel we have not been taught the basics of the number system properly in our schools, nor innovation.

Here is a link about how pi was discovered: http://betterexplained.com/articles/prehistoric-calculus-discovering-pi/

A comment on the above site reads: "Why was this never explained like this in high school?" This is the whole point I would like to concentrate on. Mark Twain rightly said: "I've never let my school interfere with my education." We know children are very innovative. Growing up should not mean decreased innovation. The approach to mathematics can be made fun and fulfilling. For example, Srinivasa Ramanujan often said, "An equation for me has no meaning, unless it represents a thought of God." Einstein said, "But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed." We can find more interesting examples in Vedic maths. So we need to agree, it is possible to think differently.

When you innovatively solve a problem from a source, try again in a different way. This is another way to generate new ideas. When you find that your solution to a problem has already been discovered, you can still consider it a sign of success and progress.

Innovation Philosophy

Actually, in nature everything exists as one whole thing. Dividing it is accepting the unknown. Consider these examples: which artist has drawn the colors of a parrot as nature has? We find the Fibonacci arrangement in nature right up to the arrangement of DNA. The aeroplane was designed after observing birds. Music was inspired by cuckoos. Science + art + movement + .... exists as one in nature. In nature we get everything together as one. It is we who divide it into different things depending on our limited range of perception.

We have seen different things about innovation. But why do we need to innovate in the first place? I would say it is seriously for the fun of it. No other reason really fits.

Inspiration for Ideas

If you think there is no inspiration for innovation, then reading this far proves you are interested, and you have it! Inspiration for ideas comes from mystery. We call this by different names; for example, in science we call it entropy. From the difficulty of compressing a random bit stream in computer science to the holographic theory in modern physics, it is entropy that is the inspiration for many things. The strength of the secret in the RSA algorithm comes from the fact that nobody knows (as publicly claimed) how to factor large numbers. The entropy here is also not knowing how to factor. Stuff of this kind generally gives inspiration.

Innovation Definition

Innovation is uniting inside. Innovation comes from resolving conflict, which can be identified through increased sensitivity to conflict. This comes from a "not to fear" mindset. This is the most important quality needed for innovation and works at the subtlest of subtle levels.

Areas for Innovation

I have seen patents in all imaginable areas of technology. So we cannot say that an area is not suited for invention. A good example is email. I have observed many people trying to innovate around email. As it is the most used concept, many people generally tend to start there. While I do not discourage you, it is both easy and difficult to innovate with email: by the time an idea comes, it is already implemented; since it is easy, somebody else has already done it. Because of this, it also becomes too difficult to innovate in the email space. But this shows that any area can be taken up for invention when it becomes familiar.

Inventions review

When it comes to presenting an invention to the reviewing body, we need to understand that the entire review discussion is not typically conveyed back to you. In case the decision is NO, it does not mean the idea was not properly understood. The reason conveyed back to the inventor generally does not contain all the details. This is true in most cases. It is also the responsibility of the innovator to convey the idea in a simple, easy-to-understand manner.

Prior art research means searching the Internet for work related to your new invention to see whether it already exists. Generally, 20 to 40 minutes of concentrated effort is enough in the case of patent idea submissions. Also, when trying to understand a patent, reading the first two claims and the abstract gives a good idea of the patent. This method of understanding works for most patents.

Innovation Patterns

Some of the recurring innovation patterns I have seen are intellectual cross-pollination, piercing through abstraction layers, relaxing the original purpose, and so on. We can find many examples of the first one: I read somewhere that IBM brings people from diverse backgrounds like physics, biology, chemistry and more to work on a single problem. For the second one, though there are many, I would give this interesting example of a patent filed by a fellow Novellite from Bangalore: the raw sockets interface was used in a VPN product to provide a new feature otherwise not possible. For the third pattern we can find many examples, such as rsync, where integrity of data is relaxed or diluted to give faster synchronization, and coating razor blades with a plastic-like material, decreasing sharpness to increase durability.

Effort for Innovation

Another aspect about time is: do ideas take time, or do they come in a jiffy? We know the case of Archimedes and the state in which he got his famous idea. Deeper contemplation is definitely needed. The flash moment may come at a different time, but the mind must be tuned, and hence spending time on innovation is definitely required. Mozart said he got ideas for music whenever he wanted, but we know the effort required to master classical music. Similarly, Kekulé got the benzene structure in a kind of meditative state; the effort behind it can be understood. James Rothman, who won the 2013 Nobel Prize in medicine, said he was "nuts" to attempt to reproduce the cell's complexities.

Summary

In the end, I would like to summarize as follows: "Innovation is uniting inside. It starts with identified conflict. Identifying conflict requires the necessary 'not to fear' mindset. The 'not to fear' mindset comes from harmonious understanding."

Epilogue

This is the first of a series of articles I have planned about innovation. I will try to cover structured innovation and more in the next article. Please reach out to me for anything in general. I will try to cover any shortcomings or requests in the next article.


According to recent IBM research, 90% of the data that exists on this planet was created in the last two years. We are creating around 2.5 quintillion bytes of data every day. Data is coming from everywhere and from everything that we do; from making bank transactions to social networking to social media, every moment we are making this data grow bigger and bigger. To address this uncontrolled mammoth of data, the growth of NoSQL systems over the past few years has prompted more and more companies to work on integrating NoSQL and big data into traditional SQL-centric systems. As an outcome, the fledgling NoSQL marketplace is going through a rapid transition – from predominantly community-driven platform development to a more mature application-driven market.

Why can NoSQL be better than SQL?

For large organizations, the relationships and tables in SQL databases can reach into the millions. When millions of users try lookups or searches on these tables, systems can suffer major performance issues, as Google and Amazon discovered the hard way before switching to non-relational systems. Large-scale programming projects using complex data types and hierarchies, such as XML, are difficult to incorporate into SQL. These data types, which can themselves contain objects, lists, and other data types, do not map well to tables consisting of only rows and columns. NoSQL databases, in comparison, scale horizontally, adding more servers to deal with larger loads. Auto-sharding lets NoSQL systems automatically share data across servers, without needing to perform complex coding maneuvers. This balances the load across several servers, providing a more robust system in the event of a crash of a particular server.

NoSQL database categories

Apart from graph databases, there are mainly two categories of NoSQL databases:

• Databases that allow data to be stored as JSON documents, such as MongoDB, CouchDB and BaseX.

• Key/value pair NoSQL databases, such as DynamoDB, Riak, Redis and Cassandra, which store data as key/value pairs.

Sudipta Roy
Working as a Specialist in the NetIQ DCM group. A newcomer to Novell, he has over 11 years of experience across sweeping technologies of software development and service offerings. His key interest is to be part of making software that is simple to deploy, easy to use, and capable of riding in the cloud to meet the needs of the masses.

The Importance of Capacity Planning and Monitoring NoSQL (Not Only SQL) Data Warehouse DMS

Why monitor NoSQL databases?

The downside of most NoSQL databases today is that they traded ACID (atomicity, consistency, isolation, durability) compliance


for performance and scalability. Many also lack mature management and monitoring tools.

In a cross-platform document-oriented database system like MongoDB, which has already been adopted as backend software by a number of major web sites and services, including Craigslist, eBay, Foursquare, SourceForge and the New York Times, it is essential to have a cross-domain monitoring approach that spans server, storage, network, virtualization and applications to automatically cross-correlate metrics in real time, freeing the user from the task of gathering this information from multiple sources. Since NoSQL databases allow for virtually unlimited scaling of applications, they greatly increase application infrastructure complexity. Monitoring is a critical component of database administration in this case, for diagnosing issues and planning capacity.

What to Monitor?

Some key questions that arise here are as follows:

• What are the key metrics that need to be monitored to ensure the application is meeting its required service levels?

• How do we know when it is time to add shards?

• How do we take preventive measures when the working set exceeds available RAM and the system encounters page faults?

With an appropriate monitoring capability, users can essentially gain in-depth visibility into the right metrics to optimize their data infrastructures, with various statistical data that can help in proper capacity planning. Some attributes that can be monitored to address the technical challenges of NoSQL databases and their sharding mechanism are mentioned below; a small code sketch for pulling a few of these metrics follows the list.

High-level overview

NoSQL environments scale horizontally across a multitude of distributed nodes. A high-level overview of the different nodes can provide an integrated view of the links between the different nodes in the replica set or sharding server. It can retrieve details on live, leaving, moving, joining and unreachable nodes.

Memory Utilization

NoSQL databases use memory-mapped files to store data. These memory-mapped files make it difficult to determine if the amount of RAM is sufficient for deploying applications. Application performance can go down, and the system can even generate OOM (Out of Memory) errors if RAM is not sufficient. It is essential to monitor the memory consumption of applications running in NoSQL database environments and display the used, free and total memory of the server.

Connection Statistics

By monitoring and tracking the number of used and available connections between the client and the database, the chances of application performance irregularities can be reduced in NoSQL environments, as sometimes the number of connections between the clients and the database can overwhelm the ability of the server to handle requests.

Database Operation Statistics

Monitoring the database operation statistics, along with replication and sharding operation details, helps ensure that operations are happening in a consistent manner. Track the total number of database operations (insert, getmore, delete, update and command) per second since the start of the last instance; this data can help in analyzing and tracking the load on the database.

Lock Statistics

Since some NoSQL databases use locking systems, if certain operations are long-running, application performance slows down as requests and operations wait for the lock. In such scenarios, lock statistics can be monitored, such as the number of operations that are queued and waiting for the read-lock or write-lock, and the number of active client connections to the database currently performing read/write operations.

Journaling Statistics

NoSQL databases like MongoDB use journaling to guarantee operation durability, which means that before applying a change to the data files, MongoDB writes the operation to the journal. Journaling ensures that MongoDB is crash-proof. By monitoring journaling statistics, it can be known how much time is taken to write the data to disk.

Storage Statistics

With significant amounts of data, disk space usage can vary over time within a NoSQL (e.g., Cassandra) environment. A monitoring tool can track disk utilization and storage statistics over defined time periods to help identify and remedy performance issues.

Thread Pool Statistics

Monitoring can provide statistics on the number of tasks that are active, pending, completed and blocked. Watching trends in these pools for increases in the pending-tasks column can help the user plan additional capacity.

Dropped Message Statistics

Monitoring can also help the user deal with overload scenarios in a NoSQL environment by keeping a lookout for dropped messages. The user can receive a log summary of dropped messages along with the message type, and can establish thresholds and configure alarms to notify about dropped messages.
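As an illustration, here is a minimal sketch (assuming the MongoDB Java driver and a local mongod instance) that pulls a few of the metrics discussed above from the serverStatus administrative command:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MongoMonitor {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase admin = client.getDatabase("admin");
            Document status = admin.runCommand(new Document("serverStatus", 1));

            // Memory utilization (resident/virtual memory).
            System.out.println("mem: " + status.get("mem"));
            // Connection statistics (current vs. available connections).
            System.out.println("connections: " + status.get("connections"));
            // Database operation counters (insert, query, update, delete, getmore, command).
            System.out.println("opcounters: " + status.get("opcounters"));
        }
    }
}

In practice, a monitoring agent would poll such counters periodically and compute per-second rates and trends rather than printing raw snapshots.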


Just-In-Time Testing

How do you know you have completed testing? How do you know that the application is ready for end users? Did we test the right things? Did we apply the best testing in the best way? How do we address problems with hardening sprints and regression sprints? The Just-In-Time testing approach may be an answer to a few such questions.

What is Just-In-Time (JIT) Testing?

Just-In-Time testing, termed "Smart Testing", is all about "smart thinking". It is a mindset – a particular way of looking at the problem – and a skill set – a particular set of things that we practice and get better at – mainly focused on how to do effective testing more quickly and less expensively, with excellent, proven results.

Just-In-Time testing approaches are successfully applied to many types of software projects, especially in Agile environments where we have continuous integration of new functions, features and technologies. It tells us how to cope with testing systems that have minimal specifications, no detailed requirements and no upfront analysis.

When to use Just-In-Time (JIT) testing?

Just-In-Time testing works well when you don't have the benefit of detailed test analysis, a test plan, test cases or test procedures, and when you have very little time to test the features before starting exploratory testing or automation testing.

Where do we find Just-In-Time (JIT) testing approaches useful?

It’s best suited for projects which run in Agile environments. This approach is successfully implemented in different domains like Data Center Management, Storage, Security, Banking, Insurance, so on.

Why should you go for Just-In-Time testing?

• To increase the overall confidence level of testing.
• To know how to adapt to changes, prioritize tests and find critical bugs.

How to implement the Just-In-Time (JIT) testing approach?

In general, JIT testing includes the following workflow:

1. Discover new testing on the fly.
2. Triage testing ideas.
3. Elaborate your testing opportunities.
4. Implement your testing.
5. Track testing status.

Discover: As soon as you get any information or insights about the software to be tested, you can start building a collection of testing ideas. How to find them?

• Does the system do what it is supposed to do?
• Does the system do things it is not supposed to?
• How can the system break?
• How does the system react to its environment?
• What characteristics must the system have?
• Why have similar systems failed?
• How have previous projects failed?

The above ideas are compiled and grouped under different categories. The important test cases are called charters. The defined charters are represented in a mind map for quick understanding.

Triage: Triage the whole activity by associating the business risks and technical risks. Business risks can be evaluated from the product management perspective: a business analyst or a release manager can scale the benefits in terms of High, Medium and Low.

Technical risks can be evaluated from the engineering perspective: a QA functional manager or a developer can rate the consequences in terms of Significant, Neutral and Minimal.

A priority column should be created as a conclusion from these business and technical risks, as in the following matrix:

                Significant (1)   Neutral (2)   Minimal (3)
High (1)        P1                P2            P3
Medium (2)      P2                P3            P4
Low (3)         P3                P4            P5
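The matrix boils down to summing the two risk ranks, as this small, hypothetical helper shows (the class and method names are illustrative, not part of any tool):

public class CharterPriority {
    // The matrix above is equivalent to summing the two risk ranks (1-3 each)
    // and subtracting one, giving priorities P1..P5.
    public static String priority(int businessRank, int technicalRank) {
        if (businessRank < 1 || businessRank > 3 || technicalRank < 1 || technicalRank > 3)
            throw new IllegalArgumentException("ranks must be 1..3");
        return "P" + (businessRank + technicalRank - 1);
    }

    public static void main(String[] args) {
        System.out.println(priority(1, 1)); // High + Significant -> P1
        System.out.println(priority(2, 3)); // Medium + Minimal   -> P4
    }
}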

Further, associate the details of estimated time effort, actual time effort and the status of defects through triaging. Create a burn-down chart for test effectiveness.

During regression and hardening sprints, test cases can be selected based on priorities defined.

Vikram Kaidabett
More than 5 years of experience in software testing, test automation and product R&D engineering. He holds an M.Tech degree from VTU and works as a Software Consultant in Novell. Aspires to become an automation specialist and IT infrastructure management consultant.

Smart Testing Ideas


Elaborate: Elaborate the whole concept with different testing approaches, as below:

Scripted: Create test scripts if required.

Exploratory: Systematic exploratory testing involves concurrent test planning, test design and test execution. An exploratory test charter is defined, the scope of testing is defined, and templates for capturing testing notes and observations are established.

Implement: A typical workflow is as below:

• Confirm that the test objective and context are understood.
• Test the software, wrap up, report bugs, collect notes and data.

Track Status: Summarize JIT project status with information about test ideas, test results and a bug summary. Walk through the Excel file in the retrospective meetings.

A list of the different sources of test ideas that can be drawn on in the JIT approach:

Capabilities: Make sure that the application does what it is supposed to do. Requirement and functional specification documents, VersionOne exit criteria, defect tracking tool data and customer information can be used as sources of capability-based testing ideas.

Failure Modes: Concentrate on "what if" questions that are often inspired by how a system is designed. What data can be wrong, missing or incorrectly structured? Does the system have to synchronize with other systems or events? Ask yourself what happens if they break, or think of some unanticipated failures. List these ideas against failure modes.

Usage Scenarios: Ideas can be derived from end-user product usage, experience gained through customer defects, and identifying who is using the system, what they are trying to achieve and in what context. Think about whether a user can achieve their tasks with the software under test. Also, thinking about how to model the operational workflow, integrations and interfaces drives a lot of user scenario charters.

Creative Ideas: Use lateral thinking techniques, imagine what can go wrong and how differently the product can be used from the customer standpoint, and generate fresh new ideas and alternate possibilities to bring effective solutions. List down all your ideas against this charter.

Quality Factors: These can be understood as characteristics that must be present in a system. Include the usability, availability, scalability and maintainability aspects in this charter while creating a test plan. Quality factor test ideas often involve experiments to determine if a quality factor is present. Examples include performance, load and stress testing.

Environments: Explore how the application behaves in different operating environments. This can relate to different operating systems, hardware, software, third-party software and so on.

Taxonomies: Bug taxonomies give a rich source of test ideas. As these are organized, documented collections of bugs, they give more insight into the product and the issues faced by customers.

Data: Data is a rich source of testing ideas. Data flow paths can be exercised, different data sets can be used, data can be built from combinations of different data types, and stored procedures can be verified.

States: Use state models to identify test ideas such as getting to states, exercising state transitions and navigating paths through the system.

Summary

In the JIT approach, we create a test plan for each story, which contains:

• The "charters": the prime areas of interest, each of which has a lot of test cases.
• The test plan of a story should essentially contain:
  • A brief description of what is being implemented.
  • An acceptance criterion.
  • The epic, project and sprint name.
  • Detailed charters generated from different idea sources.
  • A mind map diagram to demonstrate the test ideas.
  • Risk-association columns, time estimations and status.
  • Priorities defined for the charters.

In brief, the JIT approach delivers the right balance between planning, documentation and test execution. It shows different smart ways to test an application with minimal specifications. It is not about urgency or speed; rather, it is about thinking critically about what has to be tested, thus illuminating the necessary work to ensure that the product is tested effectively.



Radha Devaraj
A technical writer in the EPM documentation team, she longs to be creative, positive and passionate in whatever she undertakes, big or small. She holds an M.A. in English Literature.

You may think: why this reference to "emotions" in the work context? This thought is obvious, because we often tend to associate emotions with our kith and kin, and more so with the home environment. In this rapidly changing world, our priorities are changing, with increased importance attached to work, roles, achievement and success. We are constantly gearing ourselves up to face the competitive world in an efficient and hazardless manner.

Reactions to stressful situations in personal life might be to shout, argue, or pity oneself, but it cannot be the same at the workplace. You need to manage emotions well under all circumstances to be productive and professional.

Emotional intelligence (EQ) is the ability to identify, use, understand, and manage emotions in positive ways to relieve stress, communicate effectively, empathize with others, and overcome challenges. In this fast-paced environment, emotions occupy a more significant role than ever before, because productivity at the workplace demands greater understanding of the roles assigned to us and to others, expectations, priorities and so on. Successful organizations are made up not just of buildings and other material assets, but to a great extent of human resources.

Intelligence might help one get a job but emotional intelligence equips one to face the problems and challenges that are likely to arise while managing the job.

A good understanding and awareness of such workplace situations should help us cultivate emotional intelligence consciously, because man is a social being who evolves with time.

Workplace Scenarios - Emotional Quotient

An employee with high emotional intelligence manages his or her own impulses, communicates with others effectively, manages change well, and solves problems. Such employees have empathy and remain optimistic even in the face of adversity. The composure in stressful, chaotic situations and clarity in thinking are what separate top performers from weak performers in the workplace.

Emotional intelligence increases individual occupational performance, leadership, and organizational productivity.

EQ - Individual Professional Performance

An employee with good emotional intelligence (EQ) manages challenges and changes in the workplace with adaptability.

• Veronica, an employee with good EQ, is aware of the tricky situations created by Madeleine, one of her coworkers, who constantly tries to divide the team for her own selfish gains. A minor disagreement crops up between Veronica and her team members regarding a work-related issue. Madeleine adds much more to the issue and goads Veronica with suggestions to complain to the manager about the other team member. Veronica declines this suggestion, thinking that such minor frictions in the work environment must be ignored. On the contrary, imagine a situation where Veronica was influenced by Madeleine's goading to complain. The relationships would have been at stake for silly reasons, and unnecessary effort and time (of both the manager and the team members) would have been spent dealing with minor conflicts, which otherwise get better in due course through better communication.

Short Article

A Role of Emotional Intelligence Quotient (EQ) in Workplace Productivity

"You cannot control what happens to you, but you can control your attitude toward what happens to you, and in that, you will be mastering change rather than allowing it to master you." — Brian Tracy

• Sally, an employee, accepts the suggestions made by her manager during a 1-1 meeting for further improvement of certain skills and decides to implement them. This is mainly because of good self-awareness and self-assessment (of one's strengths and limits). People with good EQ have this flexibility to be receptive. On the contrary, imagine a situation where an employee who lacks conscientiousness and has a bloated perception of his or her own capabilities gets into an argument with the manager and refuses


to consider suggestions for improvement. Such people, who lack EQ and are rigid, are a threat to progress and productivity in the workplace.

• Rebecca is an optimistic employee in the ABC organization and manages to hold on to the current organization despite the demotivating strategies of her team members, such as false accusations, discrediting behavior, and so on. This is because of good self-management, which includes achievement drive, commitment, initiative, self-control and adaptability. There is passion to work for reasons that go beyond money and status. On the contrary, if employees in identical situations were to impulsively decide to quit the organization, or bother the leaders in the hierarchy with frequent escalations, productivity would be at stake.

• Peter is an employee who is well aware of the hurdles placed by co-workers in the new project, which are affecting his progress. Owing to his self-confidence and assertiveness, he is able to clear the misconceptions woven around him and thus continue to be a productive employee. With his EQ, he is able to overcome the vicious problems created by some of his jealous coworkers and keep marching ahead productively. On the contrary, imagine what would happen if Peter were to succumb and slip into depression.

• Roger is an employee with good EQ and remains calm even under pressure, both in personal and professional situations. He has challenges to meet both on the home front and at the workplace. He never lets his personal problems interfere with the quality of his work. With focus and good control of emotions, he has always proved efficient and accountable at his workplace. On the contrary, imagine an employee with poor EQ giving personal problems as excuses for not delivering well at the workplace, and also not taking measures to organize time and effort.

EQ - Leadership Productivity

Managers and team leaders, who shoulder the major responsibility of handling varied projects, need good emotional intelligence to be able to get the work done, keep deadlines intact, and thus drive growth by making the team work in fine coordination.

• Daniel, a manager, understands the blame-game antics in the team. When one of the team members deliberately blames another team member for poor performance and bad behavior, he does not entertain the complaining member, having his own yardsticks for measuring each one's actual performance. He knows very well that such tendencies do not create a healthy atmosphere in the workplace, and that blaming is a way of devaluing others. The blame game takes the enthusiasm for the project away from the team. He has the required emotional intelligence to understand the possible workplace negativity that lurks within, and the ways of purging it. On the contrary, imagine a situation in which a manager or team lead were to entertain such blame-game tactics of team members without any discretion. The accused team member might be hurt, feel insecure and decide to quit the job, or deliberately contribute less.

• Delia, a team leader, has good EQ to identify the loss of group motivation caused by social loafing lurking in her team since the last project. John, a team member, has started taking less responsibility for certain tasks, assuming that one of the other group members will take care of them. A few other members of the team have started finding teamwork frustrating, mainly because of carrying the weight of the work during the previous project. Delia has the EQ to realize that all this is due to the sucker effect, in which people feel that others in the team will leave them to do all the work while taking the credit. With this understanding, Delia assigns specific tasks to members of the team and recommends creating a system for measuring individual performance and rewarding those who excel above and beyond the team goal. On the contrary, imagine a situation in which a team leader with poor EQ randomly assigns tasks in a team with no proper tracking and rewarding system in place. Some of the team members might withdraw from contributing well in the team for fear of carrying an unfair share of the workload, while a few others might free-ride, assuming that other team members will complete the task anyway. This makes teams less productive.

• David, a manager, has good EQ, which lets him concentrate on people development in his teams. He is able to sense what his teams need in order to grow, develop, and master their strengths.

• Harris, a manager with sound EQ, is able to identify some of the workplace negativity factors, such as jealousy and workplace mobbing, where some co-workers gang up to force someone out of the workplace through humiliation, discrediting and isolation. Harris knows the motives for co-worker backstabbing, such as disregarding others' rights in favor of one's own gain, self-image management, revenge, jealousy and personal reasons. As a result of good EQ, Harris is able to see through the ill motives of some team members and ensure fair treatment. Thus his EQ is conducive to productivity in the workplace. Through fair measures, Harris is able to prevent employee dissatisfaction and withdrawal, thus preventing a big loss to the organization in terms of human resources and output.

EQ - Organizational Productivity

Real business is based on relationships. A good team, an able leader, and managers and others in the hierarchy with good EQ can make a lot of difference in terms of customer service orientation. Good EQ helps each of them anticipate, recognize, and meet the needs of customers.

People are willing to do business with those they know, like and trust. Leaders, team members and others in different roles with sound EQ can increase the overall productivity of an organization.

Concluding Message

Be emotionally intelligent and self-aware. Growth is an all-round phenomenon. Remember, when others grow with your help, you grow too. When you let others do their job well, you are doing your job well. The moment you start thinking of others' growth as an impediment to your success, you stop growing, because from then on you start wasting your effort and time on finding ways of snubbing the other person. In the process, even your psychological health suffers, because negative feelings such as jealousy, frustration, fear and anxiety often produce unwanted chemical reactions that are harmful to one's health and well-being.


Introduction

A long-waiting network call or a CPU-intensive computation blocks the current thread's execution. The asynchronous programming model delegates these computations to other threads or to the operating system, and thus makes full use of the system resources to schedule other computations. Even creating additional threads may degrade performance, as threads are limited resources. The event-based reactive programming model executes code only when a useful event is triggered, such as a network call returning some data or a user clicking a button. The event-based programming model also introduces nested-callback hell. RxJava Observables provide a way to escape from this hell and write declarative code to deal with asynchronous results.

In this article, we briefly look at the primitives for making long computations asynchronous, and then at the more advanced and powerful reactive programming concepts of combining asynchronous computations and scheduling them using RxJava's Observable sequences.

Groovy Closure
The code examples in this article make use of the Groovy language, mostly for passing blocks of code, called Closures, to Java methods. Groovy is a dynamic language; when Groovy code is compiled to Java byte code, it runs on the JVM like normal Java code.

Closures are an important concept in programming styles close to functional programming, with immense applications in areas such as passing callbacks in GUI programming. A Closure is a computation enclosed together with the data it operates on. Groovy Closures in particular are objects that implement the Runnable and Callable interfaces, so they can be passed wherever a Runnable or Callable instance is expected.

Code 1: Creating threads

Thread.start {println("hello")}

All examples in this article can also be written in plain Java; RxJava uses Action and Function class variants to represent closures. Refer to reference [1] for Closures and [2] for the RxJava API.

java.util.concurrent utilities
We will revisit some important utility classes in the java.util.concurrent package that make computations run in another thread.

To create a computation to run in a separate thread, we will be using “Thread” objects. Code listing 1 shows how to create a simple thread and run it immediately.

A huge number of threads, more than your system can handle, also degrades your application's performance. To avoid that, thread pools should be used. A thread pool creates a fixed number of "worker" threads, and the "work" is distributed to these workers. When there are more requests than workers, the requests are added to a queue and processed later. In this way, your application's performance is bounded by the capability of the system rather than by the number of incoming requests. Thread pools in the Java concurrency package are exposed through the Executor interface. For example, the code above can be executed in a thread pool as shown in Code listing 2.

Code 2: Thread Pool Executor

Executors.newFixedThreadPool(5).execute {println("hello")}

To abstract a task that has some computation and returns nothing, the "Runnable" interface is used in Java. In all the examples above, we used the Runnable interface in the form of Closures. The concurrent package also provides another interface, "Callable", which represents computations that also return a value, as shown in Code listing 3. This code invokes the Closure directly, so it runs in the calling thread, which waits for the result: the Closure sleeps for 1 second and returns "hello".

Code 3: A computation returning a result

def v = {sleep(1000); return "hello"}.call()
println v

Future
In the Callable example above, we have seen how to package a computation and access its result in the calling thread. But the computation does not get executed unless we call the "call" method on the Callable, and when called, the current thread waits for the result. This is not asynchronous.

To model a computation that runs in a new thread without blocking the current thread, with its result accessible later, Java provides "Futures". Futures are useful when a caller makes a time-consuming network call or computation: instead of waiting for the result, the caller can do other things and access the result when it is ready to do so.

For example, the following example shows how to make a network call involving latency, such as searching for an object in an LDAP directory, and process the result when it is ready.

Code 4: Asynchronous Ldap Search

ExecutorService executor = Executors.newFixedThreadPool(5)
Future<String> future = executor.submit({
    println("running...")
    LdapContext ctx = new InitialLdapContext(get_env("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell"))
    SearchControls sc = new SearchControls()
    sc.setSearchScope(SearchControls.SUBTREE_SCOPE)
    def sr = ctx.search("o=novell", "([email protected])", sc)
    if (sr.hasMore()) {
        SearchResult result = sr.next()
        println("result is ready")
        return result.attributes.get("cn")
    }
    return "unavailable"
} as Callable)
// do something else. when ready to access results of cn
println("doing something else")
sleep(1000)
println("ready to call get...")
println future.get()

The ExecutorService.submit method submits a Callable to be executed asynchronously and returns a "Future" object.

Future is asynchronous until you call the "get" method on it. When "get" is called, the current thread gets the result immediately if the Future is already completed. If not yet completed, the "get" call blocks. This is useful when you have finished some other tasks and cannot proceed further without the result of the asynchronous computation.

Futures can also be canceled. For example, suppose you are using a Future to populate a list box, but before the Future completes, the user navigates to another screen, destroying the list box. In this case you can cancel the Future; the cancellation succeeds if the computation has not already completed.
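As a minimal sketch with the standard java.util.concurrent API, reusing the future variable from Code 4:

// hypothetical cleanup handler: cancel the pending computation when its
// result is no longer needed (for example, the list box was destroyed)
if (!future.isDone()) {
    boolean canceled = future.cancel(true)  // true: interrupt the task if it is already running
    println("canceled: " + canceled)
}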

Futures are good for single-level asynchronous operations: it is easy to make an asynchronous call, do something else, and wait for the result. But the moment you must wait on a "get" call, the code becomes synchronous again. If you make multiple simultaneous asynchronous calls, it becomes difficult or impossible to keep the subsequent calls asynchronous. The following example creates two asynchronous tasks:

Code 5: Sequencing Futures

def printtime(x) { println("time: " + new Date() + " value: " + x) }
Future<Integer> participant1 = executor.submit({sleep 3000; return 100} as Callable)
Future<Integer> participant2 = executor.submit({sleep 2000; return 200} as Callable)
printtime("main checkpoint 1")
printtime participant1.get()
printtime participant2.get()

Even though the result of the second task is available within 2 seconds, we cannot act on it until the first task completes, which takes 3 seconds. This is because we assumed the first task would complete soon and called participant1.get() on it, which makes the program synchronous at that point. To avoid this, we have to poll all asynchronous tasks one by one and decide what to do when each is done, as shown below:

Code 6: Polling Future completions and getting results

def printtime(x) { println("time: " + new Date() + " value: " + x) }
def participants = [executor.submit({sleep 3000; return 100} as Callable<Integer>),
                    executor.submit({sleep 2000; return 200} as Callable<Integer>)]
while (!participants.isEmpty()) {
    def itr = participants.iterator()
    while (itr.hasNext()) {
        def p = itr.next()
        if (p.isDone()) {
            printtime p.get()  // do something useful on result of p
            itr.remove()
        }
    }
}

To avoid such complexities and be able to operate on the results whenever they are available, a reactive model of programming is recommended. Reactive programming with RxJava provides "Observables" to solve this. This is covered in the next section.

RxJava Observables
RxJava is a library that provides reactive programming patterns and methods to compose asynchronous and event-based computations. RxJava introduces two constructs, Observable and Observer, which together provide the ability to operate on discrete events asynchronously and compose those asynchronous computations. To understand Observables clearly, let us compare synchronous and asynchronous computations involving a single result and multiple results:

               Single Result   Multiple Results
synchronous    T               Iterable<T>
asynchronous   Future<T>       Observable<T>

The synchronous computations either return a value or a sequence of values of type "T". The sequence is represented with the "Iterable" interface. This operates on a pull model, where the caller pulls the result of the computation when required. This pulling of the result is a blocking operation, making the current thread wait for the result.

The asynchronous computations, in contrast, return immediately without waiting for results. They return references to the computations that can be checked for results later. When a single result is involved, they return "Future<T>"; for multiple results, they return an "Observable<T>" object. A single result needs to be pulled out of the Future, whereas for Observables, an object called "Observer" subscribes to them and each result is "pushed" to the observers whenever it is available. This makes the Observable reactive: it notifies the observer as soon as a result is available.


Observable is an extension of the "Observer" pattern. It also provides mechanisms to report errors and notify completion of data, so it is suitable for asynchronous event-based streams.

The two main concepts in RxJava are "Observables" and "Observers". Observables push data; Observers subscribe to Observables. An Observer registers three callbacks with an Observable: onNext to receive data, onCompleted to be notified when the event stream completes, and onError to be notified of errors. When an error is encountered, the Observable stops sending data to the Observer. The subscribe method on Observable returns a "Subscription" object, with which the Observer can stop receiving data before the stream completes.
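A minimal sketch of the three callbacks and the returned Subscription (the Groovy closures coerce to RxJava's callback types; the Observable.from factory is described in the next section):

def subscription = Observable.from([1, 2, 3]).subscribe(
    { x -> println("onNext: " + x) },        // onNext: called for each item
    { Throwable e -> e.printStackTrace() },  // onError: stream terminated with an error
    { println("onCompleted") }               // onCompleted: stream finished normally
)
subscription.unsubscribe()  // stop receiving data before the stream completes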

Observable
Observables are created by using one of the many factory methods available in the Observable class. The Observable.from family of methods converts synchronous objects into asynchronous events. For example, Observable.from(Iterable) converts a sequence of data into an Observable, which can be subscribed to. The Observable.create factory method creates a new Observable based on the Closure passed to it; this Closure is executed synchronously when subscribed. For example,

Code 7: Creating Observable from array

Observable
    .from([5, 6, 7])
    .subscribe {x -> println(x)}

creates an observable from the list, subscribes to it, and prints each value.

The create function can also be used to perform a blocking computation inside the Observable, which blocks the calling thread. For example, the following example wraps a blocking network call, reusing the get_env helper from Code 4:

Code 8: Creating observable from Future

Executor executor = Executors.newFixedThreadPool(5)
def ldapConnect(url, user, pass) {
    Observable.from(executor.submit({
        new InitialLdapContext(get_env(url, user, pass), null)
    } as Callable))
}

With Observable.create, you can write custom code that pushes arbitrary values asynchronously to the observer through observer.onNext. Observable's create function receives an OnSubscribeFunc object, which holds a reference to the observer. The code inside the OnSubscribeFunc is called every time a subscriber subscribes to it.

Code 8a: Observable to return ldap search results

Observable.create { observer ->
    def answers = ldap.search(base, attr, sc)
    while (answers.hasMore()) {
        observer.onNext(answers.next())
    }
    answers.close()
    observer.onCompleted()
    Subscriptions.empty()
}

Merely creating these Observable objects does not execute them; the code with the loop above is not run until some observer subscribes to it. Such Observables are called "cold" Observables: they don't start emitting data until subscribed. When subscribed, an Observable becomes "hot" and starts its asynchronous computation.
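A minimal sketch of this laziness, assuming rx.Observable and rx.subscriptions.Subscriptions are imported:

def cold = Observable.create { observer ->
    println("computation started")  // runs only when someone subscribes
    observer.onNext(42)
    observer.onCompleted()
    Subscriptions.empty()
}
println("observable created, nothing executed yet")
cold.subscribe { x -> println("got " + x) }  // now "computation started" prints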

Subscriptions
Subscription objects hold references to asynchronous computations. To cancel an asynchronous computation, the Subscription referencing it has to be unsubscribed. Unsubscribing disconnects the Observer from the Observable. Subscriptions can also be used to run cleanup code when unsubscribing happens or to terminate the asynchronous computation. For example, in Code listing 8a, when an Observer unsubscribes, continuing the inner loop over all SearchResults wastes CPU cycles, so the loop should be signaled to terminate prematurely. The Subscription.isUnsubscribed() method supports this decision. The modified example looks like the following:

Code 9: Ldap Search Results that can be terminated

def observableSearch(LdapContext ldap, String base, String attr, SearchControls sc) {
    Observable.create { observer ->
        Subscription sub1 = new BooleanSubscription()
        def answers = ldap.search(base, attr, sc)
        while (answers.hasMore() && !sub1.isUnsubscribed()) {
            observer.onNext(answers.next())
        }
        answers.close()
        observer.onCompleted()
        sub1
    }
}

There are many variations of Subscriptions. An important one is CompositeSubscription: you can combine multiple subscriptions into a CompositeSubscription, and when the composite is unsubscribed, all child subscriptions are also unsubscribed.

Illustration 1: Marble diagram representing Observable pushing data to subscribed Observers
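A minimal sketch, assuming rx.subscriptions.CompositeSubscription from the RxJava release used in this article (the sources here are plain lists purely for illustration):

def composite = new CompositeSubscription()
composite.add(Observable.from([1, 2, 3]).subscribe { println("first: " + it) })
composite.add(Observable.from(["a", "b"]).subscribe { println("second: " + it) })
composite.unsubscribe()  // one call tears down every child subscription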

Composing Observables
If there are multiple asynchronous computations, their data may arrive at random times. It becomes very difficult to wait for one result and then combine it with the results of other asynchronous computations; if the caller has to wait, the call becomes blocking and synchronous. For example, in the examples above we used one Observable to create the LdapContext object and another Observable to produce the search results. The search-results Observable requires the result of the first LdapContext Observable, so we need a way to sequence them one after another. Many of Observable's composing operators help here. The simplest and most common sequencing is one after another, usually done with the map and flatMap methods on Observable. The map method takes a closure as an argument and passes the result of the first Observable into the closure; the closure can return a result of another type. The following is an example of how the search can be sequenced:

Code 10: Composing two Observables

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap -> SearchControls sc = new SearchControls() sc.setSearchScope(SearchControls.SUBTREE_SCOPE) return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)}

If the closure itself returns an Observable, the overall result with map would be an Observable of Observables, that is, Observable<Observable<SearchResult>>. flatMap flattens these nested Observables into a single Observable<SearchResult>, which lets the caller subscribe directly for SearchResult values instead of subscribing twice, once per nesting level.
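A minimal sketch of the difference, with plain lists standing in for asynchronous results:

Observable.from([1, 2, 3])
    .map { x -> Observable.from([x, x * 10]) }      // Observable<Observable<Integer>>
    .subscribe { println(it) }                      // prints Observable references

Observable.from([1, 2, 3])
    .flatMap { x -> Observable.from([x, x * 10]) }  // flattened to Observable<Integer>
    .subscribe { println(it) }                      // prints 1, 10, 2, 20, 3, 30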

When subscribed, the first Observable pushes its result into the variable "ldap" of the closure given to flatMap. The closure consumes this variable, passes it to observableSearch, and returns the result of observableSearch.

This combined Observable is still "cold"; no code inside it will be executed until an Observer subscribes, as shown in the following example:

Code 11: Subscribing to Observable

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap -> SearchControls sc = new SearchControls() sc.setSearchScope(SearchControls.SUBTREE_SCOPE) return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)}.subscribe {SearchResult result -> println(result.attributes.get("cn")?.get())}

Because the final Observable is of type Observable<SearchResult>, the Observer subscribes to it by calling the subscribe method and passing a closure again. This time, the closure takes an argument of type SearchResult. The Observable pushes a SearchResult into this variable for every result, and the closure prints the "cn" attribute of the result if it is not null. If you look closely, the above code reads much like synchronous imperative code, yet the closures are executed asynchronously. But what happens if there is a network communication error or an exception arises inside these Observables? The above example ignores errors and prints nothing. To catch errors, the Observer has to subscribe to them as well: the subscribe method has an overload that takes another argument, an Action closure, to be called on error.

Code 12: Reacting to errors

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap -> SearchControls sc = new SearchControls() sc.setSearchScope(SearchControls.SUBTREE_SCOPE) return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)}.subscribe {SearchResult result -> println(result.attributes.get("cn")?.get())} { // second closure to capture errors in all the above Observables Throwable e → e.printStackTrace() throw e}

Schedulers
Well! The examples above are cheating a little: the Observables shown so far are not yet asynchronous!

By default, Observables and Observers are synchronous. The sample code in the Observables section runs in the same thread as the calling thread, so those examples are actually blocking, synchronous code. The real magic happens with Schedulers.

Schedulers make the above Observables run asynchronously in separate threads. Schedulers are abstractions over the Executor interfaces, and RxJava provides default scheduler objects for I/O, computation, and thread pools. Observable has a method called "subscribeOn", which makes the code in the OnSubscribeFunc of the create method run asynchronously on the given scheduler's thread. Similarly, the "observeOn" method makes the observer code run on another scheduler. So, to make the above example really asynchronous, modify the code as follows:

Code 13: Scheduling on threads to make observables really asynchronous

ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell").flatMap { ldap -> printThread(ldap) SearchControls sc = new SearchControls() sc.setSearchScope(SearchControls.SUBTREE_SCOPE) return observableSearch(ldap, "o=novell", "(objectClass=Top)", sc)}.subscribeOn(Schedulers.io()).observeOn(Schedulers.newThread()).subscribe {SearchResult result -> println(result.attributes.get("cn")?.get())}

This runs the Observable on the I/O scheduler thread and the observer callback on a newly created thread. In this way, the calling code can decide whether to run the Observable synchronously or asynchronously. This separation of computation and scheduling gives great flexibility to the consumer of the computation. It is useful in automated unit testing, which can run the application logic synchronously to eliminate nondeterminism, while production code runs it asynchronously.
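For example, a unit test might pin the whole pipeline to the calling thread with Schedulers.immediate(); the source below is a hypothetical stand-in for the LDAP example above:

Observable.from([1, 2, 3])
    .subscribeOn(Schedulers.immediate())  // run the source on the calling thread
    .observeOn(Schedulers.immediate())    // deliver results on the calling thread
    .subscribe { x ->
        assert x > 0  // deterministic: completes before the test method returns
    }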

Conclusion
Observables are not limited to the asynchronous network calls shown in the examples above; they can be used for any "event"-based programming, where code runs in "reaction" to "events". Because Observables can be composed and work with functional closures, this functional programming model provides powerful abstractions for dealing with asynchronous data. Another area where this is widely applied is GUI toolkits. GUI toolkits respond to user input "events" asynchronously with callbacks, and any GUI-updating code must run in the GUI thread. This again introduces callback hell, global state modification, and cross-cutting concerns across objects. For example, to react to button clicks, the code can be written as follows.


Code 14: Reacting to swing button clicks

SwingObservable
    .fromButtonAction(btnConnect)
    .observeOn(SwingScheduler.instance)
    .subscribe {
        txtFilter.setEnabled(true)
    }

Furthermore, with Observables, GUI events and network events can be combined. For example, the code below enables the "search filter" text box only after the user clicks the "btnConnect" button and a successful connection is made to LDAP by calling the "ldapConnect" closure.

Code 15: Combining GUI events with network events

SwingObservable
    .fromButtonAction(btnConnect)
    .flatMap({ btn ->
        ldapConnect("ldap://164.99.86.49:389", "cn=admin,o=novell", "novell")
    })
    .subscribeOn(Schedulers.io())
    .observeOn(SwingScheduler.instance)
    .subscribe({
        txtFilter.setEnabled(true)
    }, { Throwable e ->
        println("error connecting to ldap")  // onError callback
    })

With Observables, writing asynchronous code that reacts to events in near real time becomes very simple. However, when writing fully asynchronous code, pay attention to which thread each piece of code executes on. For simple single-valued asynchronous computations, use Futures; for anything beyond trivial tasks, use Executors; and for complex event-based asynchronous tasks, Observables are unbeatable. A complete example of an RxJava Swing application to browse an LDAP store is given in my public git repository, link [5].

References
1. Groovy Closures: http://groovy.codehaus.org/Closures
2. RxJava wiki: https://github.com/Netflix/RxJava/wiki
3. Intro to Rx, Rx as introduced in .Net: http://www.introtorx.com/Content/v1.0.10621.0/01_WhyRx.html#WhyRx
4. Functional Reactive in the Netflix API with RxJava: http://techblog.netflix.com/2013/02/rxjava-netflix-api.html
5. Sample LDAP browser with Groovy Swing and RxJava: https://github.com/tsureshkumar/pubsamples/blob/master/groovy/async-groovy/src/main/java/browser.groovy

The ability to store files and access them from anywhere changed the way we store our personal data. There are a number of providers out there who offer anywhere from 2 GB to 25 GB of free storage. Many of us sign up with a few of the providers and get a cool 60-80 GB of free storage, which is more than sufficient for a personal collection of documents, photos, and music. Some of the services you can look at are Google Drive, Microsoft OneDrive, Dropbox, Box (personal), Apple iCloud, Amazon Cloud Drive, and Bitcasa.

One of the concerns everyone has is the privacy and security of your data. There are many solutions to encrypt the data stored in the cloud, for example Viivo, Boxcryptor, Sookasa, and DataLocker. One solution that stands out in the crowd is Viivo. This is a free cloud file encryption service developed by PKWare, the company that invented zip decades back. Viivo uses public key cryptography to secure files on the device itself before they are synchronized to the cloud storage provider. Each user has a private key that is generated from the user's password using PBKDF2 HMAC SHA2 and is encrypted with AES-256. Viivo also uses independent keys for each of the cloud storage providers. Viivo is very simple to use, secure in the sense that data is never transferred to the cloud service in the clear, and available on most desktop and mobile devices. They claim that defense-in-depth is the approach they take with Viivo.
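As an illustration of the general technique described above, not of Viivo's actual implementation, a password-derived AES-256 key can be produced with the standard Java crypto APIs roughly like this:

import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec
import javax.crypto.spec.SecretKeySpec
import java.security.SecureRandom

byte[] salt = new byte[16]
new SecureRandom().nextBytes(salt)  // per-user random salt, stored alongside the data
def spec = new PBEKeySpec("user-password".toCharArray(), salt, 100000, 256)
byte[] keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(spec).encoded
def aesKey = new SecretKeySpec(keyBytes, "AES")  // use with an AES-256 cipher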
