T2 Concurrent Session Thursday 10/02/2008 9:45 AM – 10:45 AM
Patterns and Practices for Model-Based Testing
Presented by:
Keith Stobie Microsoft
Presented at: STARWEST 2008
September 29 – October 3, 2008, Anaheim, CA, USA
330 Corporate Way, Suite 300, Orange Park, FL 32043 888-268-8770 904-278-0524 [email protected] www.sqe.com
Keith Stobie
Keith Stobie is a test architect at Microsoft, where he plans, designs, and reviews software architecture, process, and tests for Protocol Engineering. Prior work included Live Search and Windows Communication Foundation. Over the past twenty-five years he has focused on testing distributed systems, including Tandem fault-tolerant systems, the Informix parallel database, and transactional and collaborative software at BEA Systems. Keith provides training on inspections and quality process, and on test strategy, methodology, design, tools, and automation. He has mentored and coached hundreds of professionals in the field, and he writes and speaks at conferences around the world on software engineering, SQA, and testing.
Patterns and Practices for Model-Based Testing
MBT = More Better Testing
Keith Stobie, Test Architect
Protocol Engineering Team, Microsoft
2008-10-02 1
Apology
• “Are these slides out of sync with what you’ve got on paper in front of you? Sorry.
Learning evolves; printed paper doesn’t.” – Michael Bolton
• Updated handouts by email from [email protected]
Protocol Engineering
• We verify the 220+ specifications (30K+ pages): http://msdn2.microsoft.com/openprotocols
• 300 contractors (Hyderabad & Beijing)
• 15 test suites/month via the Protocol Quality Assurance Process (PQAP)
• Own Netmon (protocol analyzer) and Spec Explorer 2007 (advanced modeling tool)
Tips and Tricks for MBT
• Start very small – too small is just about right.
• If there is a test case, there is a model. If you can write a test case, the model is in your head.
• What are the steps in a test case? These can be Actions in the model.
• What are the different types of an action? These can be Action parameters.
• What changes when the action occurs? These can be the State Variables.
• When can / does an action have an effect? These are Preconditions / Transition rules, which limit action occurrence.
• Minimize the number of state variables and values.
• There is no perfect model – only useful ones.
• Review the model with the feature team.
Modeling Protocols
• View the system in terms of its possible traces.
– A trace is a sequence of actions.
• Choose a vocabulary of actions.
– An action has the form actionSymbol(arg1, arg2, …), à la keyword-driven or action-based test automation.
• Choose a set of state variables.
– Define the initial state.
• Define the possible traces by update rules.
– Under what conditions can an action occur?
– What happens to the system state when it does?
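The vocabulary of actions, state variables, initial state, and update rules described above can be sketched as executable code. This is an illustrative toy model (in Python for brevity; the deck's own tools use C#), and every name in it is hypothetical:

```python
# Toy model: a session that must be opened before messages can be sent.
# Each action has a precondition (when it may occur) and an update rule
# (what it does to the state variables).

def initial_state():
    return {"open": False, "sent": 0}   # state variables and initial state

actions = {
    # action name: (precondition, update rule)
    "Open":  (lambda s: not s["open"], lambda s: s.update(open=True)),
    "Send":  (lambda s: s["open"],     lambda s: s.update(sent=s["sent"] + 1)),
    "Close": (lambda s: s["open"],     lambda s: s.update(open=False)),
}

def run_trace(trace):
    """Replay a sequence of actions; return the final state,
    or None if the model forbids the trace."""
    s = initial_state()
    for name in trace:
        precondition, update = actions[name]
        if not precondition(s):
            return None   # action not enabled here: not a valid behavior
        update(s)
    return s

print(run_trace(["Open", "Send", "Send", "Close"]))  # a valid trace
print(run_trace(["Send"]))                           # Send before Open: rejected
```

Exploring all traces up to a bound from rules like these is exactly the dirty work an MBT tool such as NModel or Spec Explorer automates.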
What we learned testing Protocols
• Better tests via Model Based Testing (MBT) concepts
• Better tests via Good abstractions
• Patterns for both MBT and traditional testing.
Book: Model-Based Software Testing and Analysis with C#
Free (shared source) tool: NModel
Abstract Actions
File.Open(filename)
What To Do
• Control (inputs) vs. Observation (outputs) viewpoints for building tests
• Create adapters to separate detail concerns from testing concerns.
• Tracing requirements from specification through test cases into test logs.
• Better test validation.
Control, Observation, and Verdict
[Diagram: test cases apply Control (inputs) to the System Under Test and observe its outputs; a Test Oracle compares them with Expected results to give a verdict, providing both implementation feedback and spec/test feedback]
Oracle, not implementation
• Test SquareRoot(float x): don’t re-implement square root (which is difficult). A much simpler correctness check:
• Y = SquareRoot(X)
• Assert( Absolute(Y*Y – X) < epsilon)
Roles: Control (input), Observe (output), Oracle (expected), Verdict.
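The square-root oracle above can be made concrete. A minimal runnable sketch (in Python; the helper name is hypothetical):

```python
import math

def check_square_root(sqrt_fn, x, epsilon=1e-6):
    """Oracle: instead of re-implementing square root, check that
    the result squared is within epsilon of the input."""
    y = sqrt_fn(x)
    assert abs(y * y - x) < epsilon, f"sqrt({x}) returned implausible {y}"
    return y

# The verdict comes from the oracle, not from a second square-root implementation.
check_square_root(math.sqrt, 2.0)
check_square_root(math.sqrt, 144.0)
```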
Verifying a Sort
sortedArray = SortAscending(inputArray);
// All elements present and in sorted order
prev = start;
for (next = start+1; next <= end; next++) {
    AssertIsTrue(sortedArray[prev] <= sortedArray[next]); // <= so duplicate elements are allowed
    inputArray = inputArray - sortedArray[prev];          // remove one occurrence from the input multiset
    prev = next;
}
AssertIsEmpty(inputArray - sortedArray[prev]); // after removing the last element, nothing may remain
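The same sort oracle can be written as a runnable sketch (Python here; the slide's "-" is rendered as removing one occurrence from a multiset, and `<=` is used so duplicates pass):

```python
def verify_sort(input_list, sorted_list):
    """Oracle for a sort: output is in ascending order, and the output is
    exactly the input multiset (no elements lost, added, or duplicated)."""
    remaining = list(input_list)              # elements not yet accounted for
    for prev, nxt in zip(sorted_list, sorted_list[1:]):
        assert prev <= nxt, "output not in ascending order"
        remaining.remove(prev)                # remove one occurrence, like the slide's '-'
    if sorted_list:
        # account for the final element; raises ValueError if the output
        # contains an element the input never had
        remaining.remove(sorted_list[-1])
    assert remaining == [], f"elements missing from output: {remaining}"

data = [3, 1, 2, 1]
verify_sort(data, sorted(data))               # passes silently
```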
Server Message Block (SMB) protocol
\\<node>\. . .
Abstracting a File System
/// File content.
Sequence<byte> fileContent;
/// Opened files
SetContainer<int> openFileIds;
/// Locked ranges on this file.
SetContainer<SMB2Lock> lockedRanges;
/// Oplock container on this file.
MapContainer<int, SMB2Oplock> oplocks;
Test adapters
[Diagram: test cases (traditional or MBT) call an adapter interface; pluggable adapter implementations drive the SUT (system under test) through different interfaces (GUI, HTTP, etc.)]
• Abstracts SUT functionality
• Contract between teams
• Pluggable
• Different interfaces (GUI, HTTP, etc.)
Adapter Considerations
• State/context belongs in the high level test code.
• Example adapter work (state/context independent):
– Signing and encrypting messages
– Verifying data structures or formats
• Keyword / action word testing: the right level of test specification [Buwalda]: “specify those, and only those, high-level details that are relevant for the test.”
– Detailed enough to clearly show the intention and logic of the test case: what is the input, what is verified, etc.
– At the same time hiding as many details as possible that are not relevant for the test.
Adapter Implementation
Chosen via configuration (XML) file:
• Interactive (Manual) pop up with instructions and input
• Automated
– Script (cmd, PowerShell, etc.)
– Managed (C#, etc.)
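A sketch of what such configuration-driven adapter selection could look like (Python; the class names and configuration key are hypothetical illustrations, not the deck's actual framework):

```python
from abc import ABC, abstractmethod

class CleanupAdapter(ABC):
    """Adapter interface: the contract the test cases program against."""
    @abstractmethod
    def run_cleanup(self, files):
        """Remove the files created; return 0 for success, else an error code."""

class InteractiveCleanup(CleanupAdapter):
    """Manual adapter: pops up instructions and asks the tester for the result."""
    def run_cleanup(self, files):
        print(f"Remove these files, then enter 0 for success: {files}")
        return int(input("Return value: "))

class ScriptedCleanup(CleanupAdapter):
    """Automated adapter: performs the cleanup itself."""
    def run_cleanup(self, files):
        # A real adapter would delete the files here; this sketch just succeeds.
        return 0

def load_adapter(config):
    """Pick the adapter implementation named in the configuration."""
    kinds = {"interactive": InteractiveCleanup, "script": ScriptedCleanup}
    return kinds[config["cleanup_adapter"]]()

# The test case code is identical whichever implementation the config selects.
adapter = load_adapter({"cleanup_adapter": "script"})
print(adapter.run_cleanup(["a.tmp", "b.tmp"]))  # prints 0
```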
Adapter Interface (C#)
public interface ICleanup : IAdapter {
    /// <summary>Removes files created</summary>
    /// <param name="files">List of files to remove</param>
    /// <returns>error code (0 for success)</returns>
    [MethodHelp("Remove files created. Enter 0 for successful return or non zero error code.")]
    int RunCleanup(string[] files);
}
Interactive Adapter (default)
[Screenshot: pop-up dialog prompting “Remove files created. Enter 0 for successful return or non zero error code.”, with fields for files, Return Value, AuditFile, and OutputFile]
Abstraction Guidelines
• Don’t hard code values. Create variables (as abstractions of values) for user name, system name, etc.
• Hide conceptually unimportant details.
– Examples: the transport used for a protocol, how you accomplished configuration of a SUT.
– Lower level APIs should never explicitly appear in an adapter interface.
• Atomic operations: sufficient detail to adequately describe interaction with the SUT.
• Not tied to test cases: understandable and implementable without the context of the test cases which use it.
– Rule of thumb: an adapter method with a parameter that indicates the test case or scenario is never adequate.
Scenarios
// NegotiateRequest(long messageId, short dialect, bool requireSign)
NegotiateRequest(_, 0, _);
// NegotiateResponse (long messageId, MessageStatus status, bool requireSign)
NegotiateResponse(_, MessageStatus.SUCCESS, _);
// SessionSetupRequest(long messageId, int sessionId, bool requireSign)
SessionSetupRequest(_, 0, false);
// SessionSetupResponse (long messageId, MessageStatus status, int sessionId, bool requireSign, SessionFlags sessionFlags)
SessionSetupResponse(_, MessageStatus.SUCCESS, _ , _ , _ );
// LogoffRequest(long messageId, int sessionId, bool requireSign)
LogoffRequest(_, 1, _);
// LogoffResponse (long messageId, MessageStatus status, bool requireSign)
LogoffResponse(_, MessageStatus.SUCCESS, _);
“_” = Don’t Care (model chooses)
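One way to realize the "_" don't-care convention is to treat each scenario step as a pattern that a concrete action must match; wherever the scenario says "_", the model is free to choose the value. A hedged sketch (Python; the names are hypothetical, not Spec Explorer's actual mechanism):

```python
DONT_CARE = "_"   # the slide's don't-care: the model chooses the value

def step_matches(scenario_step, actual_step):
    """True if the concrete action fits the scenario step: same action name,
    and every argument either equal or left as a don't-care wildcard."""
    s_name, s_args = scenario_step
    a_name, a_args = actual_step
    return (s_name == a_name
            and len(s_args) == len(a_args)
            and all(s == DONT_CARE or s == a
                    for s, a in zip(s_args, a_args)))

# SessionSetupRequest(_, 0, false): messageId is free, sessionId must be 0.
pattern = ("SessionSetupRequest", (DONT_CARE, 0, False))
print(step_matches(pattern, ("SessionSetupRequest", (7, 0, False))))  # True
print(step_matches(pattern, ("SessionSetupRequest", (7, 1, False))))  # False
```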
Requirements Tracing
Specification
Requirements
Test Code (cases)
Test Logs
Coverage
Specification to Requirements
• In the document, highlight statements that constitute a requirement. Example:
If inputs are incorrect, then return InvalidParameters, otherwise process them and return success
• Annotate statements to provide necessary context, marking each with a requirement ID:
R1: If inputs are incorrect, then return InvalidParameters
R2: Otherwise [if parameters are valid] process them [the parameters] and return success
Requirements in the Code
status = TestMethod("Special Chars @# not allowed");
CaptureRequirementIfAreEqual<int>(
IllegalParams, status, 1,
"If inputs are incorrect, then return InvalidParameters");
status = TestMethod("All legal chars");
CaptureRequirementIfAreEqual<int>(
Success, status, 2,
"Otherwise [if parameters are valid] process them
[the parameters] and return success");
(Call arguments: expected value, actual value, requirement number, requirement text.)
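A capture call can be thought of as an assertion that also logs a checkpoint when it passes, so the report tool can count the requirement as verified. A toy sketch of such a helper (Python; the real framework is C# and its API differs):

```python
captured = []   # requirement IDs that were actually verified

def capture_requirement_if_equal(expected, actual, req_id, req_text):
    """Log a checkpoint for the requirement only if the check passes;
    a failed check logs the mismatch and captures nothing."""
    if expected == actual:
        print(f"[CheckSucceeded] R{req_id}: {req_text}")
        print(f"[Checkpoint] R{req_id}")
        captured.append(req_id)
    else:
        print(f"[CheckFailed] Expected: {expected}, Actual: {actual}. "
              f"R{req_id}: {req_text}")

status = "Success"   # pretend result of TestMethod("All legal chars")
capture_requirement_if_equal("Success", status, 2,
    "Otherwise [if parameters are valid] process them and return success")
```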
Requirements in the Log
[EnterMethod] ExampleRequirementsTest
[Comment] TestMethod with "Special Chars @# not allowed"
[CheckSucceeded] Assert.AreEqual succeeded. R1: If inputs are incorrect, then return InvalidParameters
[Checkpoint] R1
[Comment] TestMethod with "All legal chars"
[CheckSucceeded] Assert.AreEqual succeeded. R2: Otherwise [if parameters are valid] process them [the parameters] and return success
[Checkpoint] R2
[ExitMethod] ExampleRequirementsTest
Report tool keys off checkpoints as being verified.
Test case failure in log
[Comment] TestMethod with "All legal chars"
[CheckFailed] Assert.AreEqual failed. Expected: Success, Actual: IllegalParams. R2: Otherwise [if parameters are valid] process them [the parameters] and return success
Requirements Report

Requirement Type          Count (Percent)
Testable requirements     317 (100.00%)
Verified requirements     280 (88.33%)
Unverified requirements    37 (11.67%)

Details:
Req ID  Description                                                                         Status
R1      If inputs are incorrect, then return InvalidParameters                              Verified
R2      Otherwise [if parameters are valid] process them [the parameters] and return success  Verified
R3      Log all invalid inputs to the audit log                                             Unverified
Verification
R1: SetA(x) stores the value ‘x’ in A.
R2: GetA() retrieves the value from A.
SetA(x);
Capture(1, "SetA(x) stores the value 'x' in A.")
CaptureRequirementIfAreEqual(x, GetA(), 2, "GetA() retrieves the value from A.");
Requirements Validation
• You cannot cover requirements based on assumptions.
– "The MS-XY protocol MUST support port 443 and port 3388 endpoints"
– You cannot claim that your model has covered this requirement just because you assume that the adapter is using that port. The only code that covers the requirement is where you successfully establish a connection on that port (or get back a response after having sent a request on that port). The model has no idea what port was used by the adapter and hence cannot claim that the requirement has been covered.
Verification Patterns
• Use at least one assertion before a requirement capture.
• Testing is the verification of expectations.
Our expectations are around the outputs.
– Use conditionals involving inputs to form expectations for assertions about the outputs.
GUI Example
1. Enter RowID number.
2. If error message displayed,
THEN if RowID was 0 then pass else fail,
ELSE if RowID was not 0 then pass else fail.
Testing Expected from output

public void multiFailWrong(int requestId) {
    UInt32 status = 0; int numOfDeletedRows = 0;
    try {
        numOfDeletedRows = DeleteRow(requestId);
    } catch (CException exp) {
        status = exp.GetErrorCode();
    }
    if (0 != status) {                        // conditional based on output (status)
        AssertAreEqual<int>(0, requestId, 539,
            "When processing DeleteRow, if the dwRowId parameter is zero, the CA MUST fail the request");
                                              // assertion on the input
    } else {
        AssertAreNotEqual<int>(0, requestId, 541,
            "When processing DeleteRow, if dwRowId is non zero, the CA MUST delete the corresponding extensions table row");
        // "delete the corresponding extensions table row": observation without oracle/verdict!
    }
}
GUI Example
1. Enter RowID number.
2. If RowID was 0,
THEN if error message displayed then pass else fail,
ELSE if error message displayed then fail,
else if rowIDs for the number of rows deleted do not exist, then pass else fail.
Testing Expected based on input

public void multiFailRight(int rowId) {
    uint status; int numOfDeletedRows = 0;
    try {
        numOfDeletedRows = DeleteRow(rowId);  // Control
        status = 0;
    } catch (CException exp) {
        status = exp.GetErrorCode();          // Observation
    }
    if (rowId == 0) {                         // conditional based on input (rowId)
        CaptureRequirementIfAreNotEqual<uint>(0, status, 539,
            "When processing the DeleteRow, if the dwRowId parameter is zero, the CA MUST fail the request");
                                              // assertion on the output (status)
    } else {
        AssertAreEqual<uint>(0, status,
            "When processing the DeleteRow, if dwRowId is non zero, the request must succeed");
        CaptureRequirementIfTrue(             // Oracle/Verdict
            correspondingRowsDeleted(rowId, numOfDeletedRows),
            541, "When processing the DeleteRow, if dwRowId is non zero, the CA MUST delete the corresponding extensions table row");
    }
}
Model or Traditional test?

Choose traditional testing or MBT based on which is most cost effective in terms of total cost of ownership (that is, taking into account maintenance costs and not just initial development costs).

1. Small number of states, or stateless?
Yes: traditional. No: modeled.
2. Do modes or message sequences matter?
Yes: the SUT has sequential constraints; with MBT, draw a state chart that expresses this.
No: verify with the Non-Modal Class Test pattern [TOOS].
3. Can all combinations of boundary conditions and typical values be covered with less than a few dozen cases?
Yes: traditional testing is practical.
No: model the domain boundaries and let the computer do the dirty work.
Issues with MBT
• Requires programming knowledge (for adapter implementation or using Spec Explorer) and specifications of expected behavior.
– It is at the opposite end of the spectrum from pure exploratory testing.
• Measuring prevented bugs is difficult. It is hard to get credit for quality upstream.
Advanced MBT
• MBT beyond simple Finite State Machines allows handling of non-determinism.
• Free NModel tool to start learning.
• Microsoft’s Protocol Engineering Team work uses Spec Explorer 2007 (SE07)
• SE07 is an even more powerful mechanism than NModel. Visual Studio Team System plug-in.
• Modeled Computation LLC has trained hundreds of testers and developers in MBT
Summary
• Trace your requirements.
Multiple coverage types : Requirements, Data, . . .
• Abstract your operations.
Use adapters to separate detail concerns from testing concerns.
• Validate whole requirement.
Use observations and predicted outputs from controlled inputs.
References
• NModel at http://www.codeplex.com/NModel
• Book: Model-Based Software Testing and Analysis with C#, http://staff.washington.edu/jon/modeling-book/
• Older version of Spec Explorer (2004) using Spec#: http://research.microsoft.com/projects/specexplorer
• Early Adopter Program (EAP) info for SE VS:[email protected]
• Updated slides: [email protected]
Background
• Model-Based Quality Assurance of Windows Protocol Documentation, Wolfgang Grieskamp, et al, ICST 2008 (IEEE)
• Key Principles of Test Design, Hans Buwalda, www.logigear.com
• Modeling: A Picture's Worth 1000 Words, PNSQC 2002, Quality Tree Software, Inc.
• Testing Object-Oriented Systems (TOOS), Robert V. Binder, Addison-Wesley 2000
What’s a Model?
A model:
• Is an abstraction or simplified representation of the system from a particular perspective
• Supports investigation, discovery, explanation, prediction, or construction
• May be expressed as a description, table, graphical diagram, or quantitative mathematical model
• Is not necessarily comprehensive
From: PNSQC 2002 “Modeling: A Picture's Worth 1000 Words” Copyright (c) 2002, Quality Tree Software, Inc.
Models in Everyday Life
Inspired by :“Modeling: A Picture's Worth 1000 Words” Copyright (c) 2002, Quality Tree Software, Inc.
Examples of Models
• Flow Charts
• Data Flow Diagrams
• Entity-Relationship Diagrams
• State Diagrams
• Deployment Diagrams
• Class Diagrams
• Use Cases
• Activity Diagrams
• State Transition Tables
SIP Example
http://tools.ietf.org/html/rfc3261 e.g. http://en.wikipedia.org/wiki/Session_Initiation_Protocol
SE 2007 VSTS Plug In
[Screenshot: the plug-in showing model code, the explored state space, control, and tests]