
GET TO THE SOURCE: SCM as a testing solution

SAY WHEN: Defining what's "enough"

July 2007 · $9.95 · www.StickyMinds.com

The Print Companion to StickyMinds.com


For Rapid Testers, "risk" has at least three interpretations. A risk might be "a bad thing that might happen": an error or omission, like a missing requirement or a coding mistake. It might be consequences: undesirable aftereffects of the first kind of risk; examples include system crashes, loss to the business, or an unflattering article in the business section of the newspaper. Finally, a risk might be a chance that we're prepared to take that one of the first two risks won't happen. "We're short on time, and few customers still use Windows 2000, so we'll take the risk of not testing it."

Sometimes in testing we find problems that surprise us, problems we didn't anticipate at all. In risk-based testing, we try to anticipate a problem and then test for it. Risk-based test design is based on questions that begin "What if . . . ?" What bad things might happen? What good things might fail to happen?

Rapid Testers model the risk story in four parts. There's a risk when a victim may be affected by some problem caused by a vulnerability in the product that is triggered by some threat. By thinking expansively about each part, we reduce the possibility that we'll miss an important risk.
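One way to make the four-part model concrete is to record each risk idea as a structured note. Here is a minimal sketch in Python; the field names follow the column's terms, but the example values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One risk idea, modeled in the four parts described above."""
    victim: str          # who suffers if the problem occurs
    problem: str         # the undesirable situation itself
    vulnerability: str   # the weakness that allows the problem
    threat: str          # the state or transition that triggers it

# A risk list is simply a collection of these notes.
risk_list = [
    Risk(victim="bank customer waiting in line",
         problem="long delays at the teller window",
         vulnerability="slow database query on account lookup",
         threat="end-of-month statement batch running concurrently"),
]

# "Thinking expansively about each part" means varying one field at a time.
more_victims = [r.victim for r in risk_list]
```

A structure this simple is enough to keep a brainstorming session honest: a note with an empty field is a prompt to keep asking questions.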

For this discussion, a victim is an agent (a person, group, or system that interacts with the system under test) that suffers inconvenience, annoyance, damage, or loss because of a problem. General systems thinking reminds us that there are many possible notions of who a user might be and how certain users might perceive the product or a problem with it. A few years ago, I worked on a banking application designed to be used by bank tellers. Those tellers, the people we thought of as "the users," were certainly potential victims of software bugs, but if those bugs caused delays for the bank customers waiting in line, then the bank customers were victims, too. If the bank customers lost confidence in the bank or left in droves due to slow service, the bank's shareholders became victims. If the coding and testing errors that enabled the bug reflect badly on our reputations, then we are possible victims of the problem, too. Ultimately, a risk affects somebody, often several somebodies.

For problem, I recently found G. F. Smith's definition: "an undesirable situation that is significant and may be solvable by some agent, although probably with some degree of difficulty" (see the StickyNotes for references). One way to model risk is to consider quality criteria. Quality is subjective: "value to some person," as Gerald Weinberg says. A problem threatens that value for some potential victim. If the problem doesn't matter, or seems to affect only unimportant people, the problem will be deemed to be no problem. There may be substantial business risk in declaring a group of people to be unimportant. An expert tester not only develops risk ideas and scenarios that reveal the importance of a problem but also searches for people who might have been misclassified as unimportant.

A vulnerability is some weakness in a system that could allow a problem to occur. Systems are bound to have flaws. People specify them and people code them. People are fallible, and communication between them is more fallible still. A vulnerability might be the result of a programming error, an unexpectedly incompatible library, a specification weakness, or a misunderstanding of a requirement.

Still, a vulnerability doesn't matter unless (or until) it is triggered. A threat (some state or transition, whether arrived at deliberately or inadvertently) turns a vulnerability into a problem. A vulnerability could hide forever in an area of the code that is never accessed in testing or production, or that requires an improbable set of variables to be in some state, and we'd never know a thing about it. Hackers might trigger a vulnerability deliberately, but well-meaning people could trigger it inadvertently. When they're confused, enterprising users will often say, "What if I try this?" If no one else has tried that, the workaround might work, or it might trigger a serious vulnerability.
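A toy example illustrates how a vulnerability can hide until an improbable state triggers it. The function and values below are hypothetical, not from the column:

```python
def average_wait_time(total_wait_seconds, customers_served):
    """A latent vulnerability: nothing guards against zero customers.
    In ordinary use customers_served is always positive, so the flaw
    stays hidden; the empty-queue state is the threat that triggers it."""
    return total_wait_seconds / customers_served

# Normal use never exposes the weakness...
assert average_wait_time(600, 20) == 30.0

# ...but a test that deliberately reaches the improbable state does:
# say, a branch that opened for the day but served no one.
try:
    average_wait_time(0, 0)
    triggered = False
except ZeroDivisionError:
    triggered = True
assert triggered
```

The vulnerability (the unguarded division) existed all along; only the unlikely combination of inputs, the threat, turned it into a problem.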

In risk-based testing, we consider variations of victim, problem, vulnerability, and threat and make choices about how, or whether, we're going to test for them. But the combination of variations of all four aspects is almost always intractably large. Which potential victims matter? For the ones that don't matter, why don't they matter? What problems might harm or annoy potential victims?

Test Design with Risk in Mind
by Michael Bolton

BETTER SOFTWARE · JULY 2007 · www.StickyMinds.com

Test Connection



Which problems might be tolerable, and which ones would be catastrophic? What principles or mechanisms allow us to recognize a vulnerability in the program? How can we trigger vulnerabilities in the test lab?

In this column, I've focused mostly on product risks. What aspects of the project might contribute to our risk model? Follow the link on the StickyMinds.com homepage to join the conversation.

Try using a half-hour brainstorming session to create a risk list. Solo efforts are OK, but the process is much richer when other members of the project community are involved. It's good to create a risk list early in the project, but since we learn about the system and attendant risks throughout development, it's useful to revisit the risk list later as a group exercise. Whenever we identify new risks, we add them to the list.

Try looking at a structural diagram showing objects, modules, or subsystems and the interrelationships and dependencies between them. Even if you ignore the labels, you can bring important risks to the surface: "What does this line mean? What error checking happens on the way into this box? What happens if the line between these two things gets broken? Does this arrow ever point the other way? Could this bucket overflow?"
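Questions like "what happens if the line between these two things gets broken?" can even be generated mechanically from a dependency diagram. A sketch, using a made-up set of modules:

```python
# Hypothetical module dependencies: each edge is a "line" in the diagram.
dependencies = {
    "teller_ui": ["account_service"],
    "account_service": ["database", "audit_log"],
}

# Turn every edge into risk questions to ask in a review session.
questions = []
for module, needs in dependencies.items():
    for dep in needs:
        questions.append(
            f"What happens if the link between {module} and {dep} breaks?")
        questions.append(
            f"What error checking happens on the way into {dep}?")

for q in questions:
    print(q)
```

Generating the questions is the easy part; the value comes from discussing them with the people who drew the diagram.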

Try exploring the product, alone or in pairs, emphasizing "testing to learn" rather than "testing to find bugs." Look for information about how the product works and how it might fail. Michael Kelly has a useful mnemonic guide: "FCC CUTS VIDS" (see the StickyNotes for a link). Look at features, complexity, claims, configuration, user types, testability, scenarios, variability, interoperability, data, and structure. Tour the menu options; perform actions via the mouse or the keyboard; tour the folders in which the application installs itself; and look through the help file, tutorials, and supporting documentation.
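The mnemonic can be kept at hand as a simple checklist that seeds "testing to learn" session charters. A sketch; the charter wording is mine, not Kelly's:

```python
# FCC CUTS VIDS, expanded as listed in the column.
heuristics = ["features", "complexity", "claims", "configuration",
              "user types", "testability", "scenarios", "variability",
              "interoperability", "data", "structure"]

def charters(product):
    """Seed one exploratory-session charter per heuristic."""
    return [f"Explore {product} with a focus on {h}" for h in heuristics]

for line in charters("the banking application"):
    print(line)
```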

The risk list can be used to generate test ideas, to identify ways of addressing vulnerabilities before test design and coding begin, or to guide exploratory test execution, which is typically much faster than writing and executing test scripts.

Most importantly, the risk list can help us prioritize risks. What problem could end up on the front page of the San Jose Mercury News? Which risks might cause a customer to abandon the product or to talk to friends about the problem? What problems have caused tech support calls? What problems have we already seen but become accustomed to? Quantitative measures of risk might not be useful or accurate; qualitative risk assessments might be sufficient. One heuristic is to watch for the cringe when a risk idea is discussed.
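Even qualitative assessments can impose a rough order on the list. A sketch of sorting risk ideas by coarse impact and likelihood buckets; the scales and ratings here are invented for illustration:

```python
# Coarse qualitative scales; no pretense of precise measurement.
IMPACT = {"catastrophic": 3, "serious": 2, "annoying": 1}
LIKELIHOOD = {"likely": 3, "possible": 2, "rare": 1}

risks = [
    ("front-page data-loss bug", "catastrophic", "rare"),
    ("slow teller lookup", "serious", "likely"),
    ("typo in help file", "annoying", "possible"),
]

# Sort highest concern first; ties are fine, since this guides rather
# than decides -- the cringe test still gets the last word.
ranked = sorted(risks,
                key=lambda r: IMPACT[r[1]] * LIKELIHOOD[r[2]],
                reverse=True)

print(ranked[0][0])   # the risk to talk about first
```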

If we've done a good job of modeling risk, we'll probably have more test ideas than we can act upon. That's where the third definition of risk comes in. Some risks are sufficiently low compared to other risks that we take a chance on not testing for them. That's OK; as Tom DeMarco says, "If a project has no risks, don't do it." {end}

Michael Bolton lives in Toronto and teaches heuristics and exploratory testing in Canada, the United States, and other countries. He is a regular contributor to Better Software magazine. Contact Michael at [email protected].

StickyNotes

For more on the following topic, go to www.StickyMinds.com/bettersoftware.

• References