
SOFTWARE TESTING, VERIFICATION AND RELIABILITY
Softw. Test. Verif. Reliab. 2003; 13:257–259 (DOI: 10.1002/stvr.283)

Book Reviews

EFFECTIVE SOFTWARE TESTING: 50 SPECIFIC WAYS TO IMPROVE YOUR TESTING. By Elfriede Dustin. Published by Addison-Wesley, Boston, MA, U.S.A., 2002. ISBN: 0-201-79429-2, 271 pages. Price: U.K. £26.99, U.S.A. $34.99, Soft Cover.

A little while ago, I reviewed Lessons Learned in Software Testing by Kaner, Bach and Pettichord [1] for this journal. In that book, nuggets of real experience were offered to tempt experienced testers into thinking about why they test and how they test. That, I felt, was a book for experienced testers to mull over their craft; I would not want to offer it to a project manager or a new tester to help them understand testing.

In contrast, we now have Dustin’s Effective Software Testing: 50 Specific Ways to Improve Your Testing. This is a book I would happily recommend to someone who did not have specific testing experience but wanted to understand how to improve testing in their organization: for example, a project or development manager.

The book is presented as a series of 50 easily digestible and separate ideas, grouped into 10 chapters:

• requirements phase;
• test planning;
• the testing team;
• the system architecture;
• test design and documentation;
• unit testing;
• automated testing tools;
• automated testing: selecting best practice;
• non-functional testing; and
• managing test execution.

Within each chapter, a set of between four and eight sections each proposes and explains an action item, simply and clearly. For example, Chapter 2 on test planning contains Item 11: Plan the Test Environment. In this item, Dustin describes the test environment, compares and contrasts it with the production environment, and lists the information required in order to plan the test environment. In two pages, here is a reminder of the scope of an activity which those involved in projects so often forget about until late in the project.

The items often contain useful worked examples and templates, as in Item 12: Estimate Test Preparation and Execution Time. This includes a useful set of example test estimation methods, in each case noting the possible problems with that method.

The chapter on the test team is good, and includes an item about evaluating tester effectiveness, including some analysis of the reasons why testing and testers might appear ineffective and what can be done to improve individually, as well as what improvements can be made in the team and organization.

The two chapters on test automation will be a useful introduction to the pleasures and pitfalls of tools for anyone needing an overview.

Although much of the book is directed at improvements in the work of specialist testers, by implication working at system and acceptance test stages, Dustin does not neglect other stages where verification can take place. Requirements verification is dealt with in Chapter 1, together with the cost benefit of early tester involvement. The chapter on unit testing provides useful information both for the developer’s own testing and for ‘buddy testing’. I would also direct developer-testers to the test design and documentation chapter for its simple introduction to proven techniques such as orthogonal arrays and boundary value analysis.
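To give a flavour of one such technique: boundary value analysis picks test inputs at, just below and just above each edge of a valid range, where off-by-one faults cluster. The following minimal sketch is my own illustration, not an example from the book; the age limits and function names are invented.

```python
def accepts_age(age):
    """Stand-in system under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

def boundary_values(low, high):
    """Classic boundary value analysis for a closed range [low, high]:
    test at each boundary and one step either side of it."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

cases = boundary_values(18, 65)
results = [accepts_age(v) for v in cases]
print(cases)    # [17, 18, 19, 64, 65, 66]
print(results)  # [False, True, True, True, True, False]
```

Six targeted cases replace exhaustive testing of the range, which is precisely the economy such proven techniques offer a developer-tester.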

The book focuses on testing as a verification activity. It only mentions validation activities and non-testing verification, such as reviews, in passing. I imagine from the remarks in the preface that this is in order to confine the scope of the book—it does not claim to solve all the issues that surround quality in a software project, and Dustin rightly points out that the proposed improvements need to be supported in a culture and organization that will allow this approach. However, I think there is enough here to point the intelligent test manager to the arguments needed to support the changes. For example, the testers need a good configuration management system and requirements traceability, but in fact all the project team would benefit from those improvements.

Useful references provide further reading, whilst navigation around the book is enhanced by the index and a full contents list.

In summary, this is a book that is worth buying, and worth sharing with your project and development managers, to introduce improvements both in the testing process, and in the planning and estimation for testing within the project.

REFERENCE

1. Kaner C, Bach J, Pettichord B. Lessons Learned in Software Testing: A Context-Driven Approach. Wiley: New York, 2002.

ISABEL EVANS
IE Testing Consultancy Ltd,
The Courtyard,
Restdale House,
32/33 Foregate Street,
Worcester WR1 1EE, U.K.

RISK-BASED E-BUSINESS TESTING. By Paul Gerrard and Neil Thompson. Published by Artech House, Norwood, MA, U.S.A., 2002. ISBN: 1-58053-314-0, 399 pages. Price: U.K. £55.00, U.S.A. $79.00, Hard Cover.

The title looks suspiciously buzzword-laden, but in fact this is a very good book. E-business refers to ‘the conducting of business on the Internet’, an enterprise on which many businesses are currently embarked. E-business software is in the public domain, open to inexperienced and untrained users, to criminals and to a potentially unimaginably large audience. Thus, system usability, security and reliability take on new levels of importance.

Risk-based testing is not an approach exclusive to E-business. The justification for marrying the two is that E-business projects carry more risk, in terms of the immaturity and fast-changing nature of the technologies they use, the speed demanded of their development and their exposure on the Internet. The risk analysis aims to provide more meaningful information to help decide when testing can stop than more traditional measures such as defect counts.
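The general idea can be conveyed in a few lines. The likelihood-times-impact scoring below is my own simplified sketch of risk exposure, not the book’s specific framework, and the feature names and scores are invented for illustration.

```python
# Each entry: (feature, likelihood of failure 1-5, impact of failure 1-5).
# Scores would normally come from a risk workshop with the stakeholders.
risks = [
    ("payment processing", 4, 5),
    ("user registration", 3, 3),
    ("help pages", 2, 1),
    ("order tracking", 3, 4),
]

def exposure(item):
    """Simple risk exposure: likelihood multiplied by impact."""
    _, likelihood, impact = item
    return likelihood * impact

# Test the riskiest areas first; a stop-test decision can then be framed
# as "every area above an agreed exposure threshold has been exercised".
ranked = sorted(risks, key=exposure, reverse=True)
for feature, likelihood, impact in ranked:
    print(f"{feature}: exposure {likelihood * impact}")
```

Ranking by exposure, rather than counting defects, gives the release decision the more meaningful footing the authors argue for.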

The book has four parts: The Risk-Based Testing Approach, Risk-Based E-Business Testing, E-Business Test Techniques and Tools and Making E-Business Testing Happen. The first defines an informal but methodical risk-based test process after explaining why risk is a ‘better language for software testing’. The second part describes the risks particular to E-business software and a framework for constructing a process for dealing with those risks. Part three details 24 specific test techniques in the categories: static testing, Web page integration testing, functional testing, service testing (performance, reliability and manageability), usability assessment, security testing, large-scale integration testing and post-deployment monitoring. The fourth and final part of the book turns to the project manager’s job of pulling all this together.

I find the book’s structure to have a pleasing coherence which will help the reader build an E-business testing strategy. It is evident that Gerrard, the author of the first three parts of the book, has a great deal of experience of test projects in various roles. An experienced testing practitioner or manager will find much to agree with, while newcomers to these roles should find the advice invaluable. The sections on risk and test techniques make the book useful to test practitioners as well as strategists. The risks addressed by each of the test techniques are listed to form a comprehensive guide which will allow a good balance of test techniques to be chosen.

While even in the techniques section there is a lot of useful information generally pertaining to risks and testing, there is also a substantial amount devoted to specific Internet technologies or testing tools. This makes the book more useful and relevant now and is, given its title, to be expected. However, I expect this to make parts of the book out-of-date fairly quickly.

Like the authors, I find the risk-based approach to testing logical and appealing, as the best way to answer the all-important question of ‘when (and why) should we ever stop testing?’ The most important job of the tester becomes to ‘provide information for a risk-based decision on release’. That information contains both positive and negative messages: some risks have been averted, while others remain. The authors explain how difficult situations that test teams often face, such as always being the bearer of bad news, can be helped or avoided by the risk-based approach. As with many of the arguments in this book, those about risk are justified and put into context with a comprehensive list

Copyright © 2003 John Wiley & Sons, Ltd. Softw. Test. Verif. Reliab. 2003; 13:257–259