Managing a software project is challenging. Managers often need to deal with overwhelming amounts of data and detail but they must also be able to see the big picture at a glance. What can testers do to help? Excellent testing involves composing, editing, narrating, and justifying a story about the product, about the testing, and about the quality of the testing. However, telling a good story doesn't just depend on the raw data; it depends on representing, organizing, and displaying information to illuminate key points, highlight trends, and show variation. Michael Bolton takes you on a tour of approaches to illustrating the testing story, designed to keep the entire project team informed of product status and testing activity, to foster discovery and productive conversation, and to help managers ask better questions and get better answers.
T1 Concurrent Session
4/8/2014 10:30 AM

“Do You See What I See?”

Presented by:
Michael Bolton
DevelopSense

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073
888‐268‐8770 ∙ 904‐278‐0524 ∙ [email protected] ∙ www.sqe.com
Tester, consultant, and trainer Michael Bolton is the coauthor (with James Bach) of
Rapid Software Testing, a course that presents a methodology and mindset for testing
software expertly in uncertain conditions and under extreme time pressure. Michael is
a leader in the context-driven software testing movement with twenty years of
experience testing, developing, managing, and writing about software. Currently, he
leads DevelopSense, a Toronto-based consultancy. Prior to DevelopSense, he was with
Quarterdeck Corporation, where he managed the company’s flagship products and
directed project and testing teams—both in-house and worldwide. Contact Michael
Michael Bolton DevelopSense
Michael Bolton www.developsense.com
Do You See What I See?
Michael Bolton
DevelopSense
http://www.developsense.com
@michaelbolton
To test is to compose, edit, narrate, and justify THREE stories.

A story about the status of the PRODUCT…
…about what it does, how it failed, and how it might fail...
…in ways that matter to your various clients.

A story about HOW YOU TESTED it…
…how you configured, operated and observed it…
…how you recognized problems…
…about what you have and haven’t tested yet…
…and what you won’t test at all (unless the client objects)…

A story about how GOOD that testing was…
…the risks and costs of (not) testing…
…what made testing harder or slower…
…how testable (or not) the product is…
…what you need and what you recommend.
• Bugs
• Issues
• Product any good?
• How do you know?
• Why should I be pleased with your work?
Legibility: Simplifying Complexity Without OVERsimplifying
• See James C. Scott, Seeing Like A State
But Patterns Can Be Hard To See!
• Show multivariate data
• Use color
• Use size
• Use position
• Use tables
• Show relationships
• Show change over time
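One cheap way to “show change over time” in a plain-text status report is a sparkline. A minimal stdlib sketch, where the weekly pass rates are invented sample data:

```python
# A minimal sketch of "show change over time": render a trend as a
# one-line Unicode sparkline. The pass-rate values are invented data.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Map each value to one of eight bar heights, scaled to the range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on flat data
    return "".join(BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values)

pass_rate = [0.95, 0.96, 0.91, 0.82, 0.80, 0.88, 0.97]  # one value per build
print("pass rate:", sparkline(pass_rate))
```

A dip in the middle of the line is visible at a glance, without a chart or a spreadsheet.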
www.gapminder.com
Also see Google’s Motion Charts
Visualizing Behaviour: Blink Tests
Visualizing Estimation
• project size (obvious size vs. hidden)
• project complexity
• development staff (people/skills)
• developer relations (are you in good collaboration?)
• desired quality level
• desired coverage level
• availability of requirement and design information
• testability
• routine interruptions (meetings, email, etc.)
• illnesses
• vacations
• extraordinary interruptions
• accounting and auditing
• change control processes
– design
– code
– configuration
– build
• test staffing (skills required, needed hiring)
• test administration (status reports, documentation, coordination cost)
• tester training and learning curve
• test data preparation
• test platform availability (delivery, installation, stability)
• test tooling
• the time scale of the estimate
– required accuracy
– required precision
• implicit or explicit commitments
• relationship between tester and client
• time required to prepare the estimate
• revision triggers
• number and significance of bugs
– investigation and reporting slows down testing
• number of test cycles– driven by development work
• specific new features and design changes– driven by business needs
• recognition of implicit project or test scope– we may not have realized everything that was necessary
• specific new risks– emergent from test activity; may require new testing
• local dependencies– programmers "can't start X until we finish Y"– testers may be waiting on programmers
• third-party dependencies– someone can’t finish Y until we receive Z– programmers or testers may be waiting on outsiders
• implementation/testing sequencing and delays– what is being delivered, and in what order– how quickly are fixes turned around?
• specific slippages– mini-disasters and Black Swans
• competing projects– when our project’s schedule slips, what will happen?
In this list, testing has
• plenty of control over green factors;
• some control over yellow factors, and
• very little or none over the red ones.
So why ask testers for an estimate?
Seeing What’s There (And What’s Missing)

Boxes
• Missing/Drop-out
• Extra/Interfering
• Incorrect
• Timing/Sequencing
• Contents/Algorithms
• Conditional behavior
• Limitations
• Error Handling

Lines
• Missing/Drop-out
• Extra/Forking
• Incorrect
• Timing/Sequencing
• Status Communication
• Data Structures

Paths
• Simplest
• Popular
• Critical
• Complex
• Pathological
• Challenging
• Error Handling
• Periodic
Testability!
[Diagram: Browser → Web Server → App Server → Database Layer]
Seeing What’s Missing
• [pointing at a box] What if the function in this box fails?
• Can this function ever be invoked at the wrong time?
• [pointing at any part of the diagram] What error checking do you do here?
• [pointing at an arrow] What exactly does this arrow mean? What would happen if it was broken?
• [pointing at a box] What actually happens inside this box? What would happen if this box were updated or replaced?
• [pointing at a box] Are there other ways for things to get into or out of this box? Are there any missing lines?
[Diagram: Browser → Web Server → App Server → Database Layer, annotated with activity symbols]

Legend:
• Manipulate or change
• General activity
• Prepare or modify tools
• Inspect or prepare reports
• Observe or inspect
• Force failure
Visualizing Resources Over Time
• Example: with a team of four testers, one session per morning/afternoon, five days a week…
• …model time-based activities and coverage with sticky notes
[Diagram: Browser → Web Server → App Server → Database Layer]
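The sticky-note model above is easy to mock up in code. A minimal sketch: four testers, one session per morning and afternoon, five days a week; the tester names and the rotation of coverage areas are invented for illustration:

```python
# A sketch of a sticky-note session board: 4 testers x 2 slots x 5 days.
# Testers and the area rotation are invented; the stack layers come from
# the diagram (Browser, Web Server, App Server, Database Layer).
from itertools import cycle

testers = ["Ann", "Bob", "Cho", "Dee"]
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
slots = ["AM", "PM"]
areas = cycle(["Browser", "Web Server", "App Server", "Database Layer"])

# One "sticky note" per tester per slot per day: (day, slot, tester) -> area
board = {(d, s, t): next(areas) for d in days for s in slots for t in testers}

for day in days:
    for slot in slots:
        notes = ", ".join(f"{t}:{board[(day, slot, t)]}" for t in testers)
        print(f"{day} {slot}  {notes}")

print(f"{len(board)} sessions this week")  # 5 days x 2 slots x 4 testers = 40
```

Printing the board one row per slot gives the same at-a-glance view as the physical wall of notes: gaps in coverage show up as missing areas in a row.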
Visualizing Coverage vs. Resources

[Diagram: the Browser → Web Server → App Server → Database Layer stack, annotated with coverage and resource activities: build error monitors, survey sessions, coverage analysis, forced failures, a man-in-the-middle tool, a data generator, stressbots and server stress, performance data, inspecting support problems, a table consistency oracle, data review, performance history, building a regression suite, history oracles, and reviewing error output]
Beware Visual Bias!
• setup
• browser type & version
• cookies
• security settings
• screen size
• review client-side scripts & applets
• usability
• specific functions
[Diagram: Browser → Web Server → App Server → Database Layer]
Visualizing Coverage Over Time
Visualizing the Role of Interruptions
• A “perfectly effective” testing session is one entirely dedicated to test design and execution
– this is exceptional, not the rule; survey or intake sessions might be perfect
• Setup, bug investigation, and reporting take time away from test design and execution
• Suppose that testing a feature takes two minutes
– this is a highly arbitrary and artificial assumption—that is, it’s wrong, but we use it to model an issue and make a point
• Suppose also that it takes an extra eight minutes to investigate and report a bug
– another sweeping generalization in service of the point
• In a 90-minute session, we can run 45 feature tests—as long as we don’t find any bugs
How Do We Spend Time?
(assume all tests below would find bugs if bugs were present)

Module    Bug reporting/investigation       Test design and execution          Number of tests
          (time spent on tests that         (time spent on tests that
          find bugs)                        find no bugs)
A (good)  0 minutes (no bugs found)         90 minutes (45 tests)              45
B (okay)  10 minutes (1 bug, 1 test)        80 minutes (40 tests)              41
C (bad)   80 minutes (8 bugs, 8 tests)      10 minutes (5 tests)               13
Investigating and reporting bugs means…
…or…
…or both.

• In the first instance, our coverage is great—but if we’re being assessed on the number of bugs we’re finding, we look bad.
• In the second instance, coverage looks good, and we found a bug, too.
• In the third instance, we look good because we’re finding and reporting lots of bugs—but our coverage is suffering severely. A system that rewards us or increases confidence based on the number of bugs we find might mislead us into believing that our product is well tested.
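The numbers in the table follow mechanically from the slides’ stated assumptions: a 90-minute session, two minutes per feature test, and an extra eight minutes to investigate and report each bug. A small sketch of that arithmetic:

```python
# Worked arithmetic for the table above, using the slides' assumptions:
# 90-minute session, 2 minutes per test, 8 extra minutes per bug found.
SESSION = 90
TEST_MIN = 2
BUG_EXTRA = 8

def session_summary(bugs_found):
    """Return (bug time, test-design time, total tests) for one session."""
    # Each bug costs its 2-minute test plus 8 minutes of investigation.
    bug_time = bugs_found * (TEST_MIN + BUG_EXTRA)
    remaining = SESSION - bug_time
    clean_tests = remaining // TEST_MIN
    return bug_time, remaining, bugs_found + clean_tests

for module, bugs in [("A (good)", 0), ("B (okay)", 1), ("C (bad)", 8)]:
    bug_time, test_time, total = session_summary(bugs)
    print(f"{module}: {bug_time} min on bugs, {test_time} min testing, {total} tests")
```

Running it reproduces the three rows: 45, 41, and 13 tests respectively, which makes the coverage cost of eight bugs in one session hard to miss.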
What Happens The Next Day?
(assume 6 minutes per bug fix verification)

Module  Fix            Bug reporting and       Test design and        New tests  Total over
        verifications  investigation today     execution today        today      two days
A       0 min          0 min (no new bugs)     90 min (45 tests)      45         90
B       6 min          10 min (1 new bug)      74 min (37 tests)      38         79
C       48 min         40 min (4 new bugs)     2 min (1 test)         5          18
Finding bugs today means…
…or…
…or both.
…which means…

• …and note the optimistic assumption that all of our fix verifications worked, and that we found no new bugs while running them. Has this ever happened for you?
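The day-two table extends the same arithmetic with one more assumption from the slide: six minutes to verify each of yesterday’s bug fixes. A sketch:

```python
# Day-two arithmetic for the table above, under the slides' assumptions:
# 90-minute session, 2 minutes per test, 8 extra minutes per new bug,
# and 6 minutes to verify each bug fix from the previous day.
SESSION = 90
TEST_MIN = 2
BUG_EXTRA = 8
FIX_VERIFY = 6

def day_two(yesterday_bugs, new_bugs):
    """Return (fix-verification min, bug min, test-design min, tests today)."""
    fix_time = yesterday_bugs * FIX_VERIFY
    bug_time = new_bugs * (TEST_MIN + BUG_EXTRA)
    test_time = SESSION - fix_time - bug_time
    tests_today = new_bugs + test_time // TEST_MIN
    return fix_time, bug_time, test_time, tests_today

# Module C: 8 bugs reported yesterday, 4 new bugs found today
print(day_two(8, 4))
```

For module C this yields 48 minutes of fix verification, 40 minutes on new bugs, and only 2 minutes (one test) of fresh coverage: exactly the table’s row, and a vivid picture of how yesterday’s bugs tax today’s testing.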
Visualizing Where Time Goes
Resources & References
• Edward Tufte
– books: The Visual Display of Quantitative Information, Visual Explanations, Envisioning Information, Beautiful Evidence
– Tufte’s one-day class on presentations
• Hans Rosling– www.gapminder.com
• James C. Scott– Seeing Like a State