
SOFTWARE TESTING-UNIT 1

UNIT I – INTRODUCTION

1.1 Testing as an Engineering Activity

Engineering is the application of mathematics and scientific, economic, social, and practical knowledge in order to invent, innovate, design, build, maintain, research, and improve structures, machines, tools, systems, components, materials, processes, solutions, and organizations.


Software systems are becoming more challenging to build.

New methods, techniques, and tools are becoming available to support development and maintenance tasks. Because software now has such an important role in our lives both economically and socially, there is pressure for software professionals to focus on quality issues.

Highly qualified staff ensures that software products are built on time, within budget, and are of the highest quality with respect to attributes such as reliability, correctness, usability, and the ability to meet all user requirements.

Using an engineering approach to software development implies that:

• The development process is well understood;
• Projects are planned;
• Life cycle models are defined and adhered to;
• Standards are in place for product and process;
• Measurements are employed to evaluate product and process quality;
• Components are reused;
• Validation and verification processes play a key role in quality determination;
• Engineers have proper education, training, and certification.

*****

1.2. Testing as a Process

The software development process has been described as a series of phases, procedures, and steps that result in the production of a software product. Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software.

Prepared by Dr. R. Kavitha Page 1


• Testing can be described as a process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes.

Testing objectives: the process of finding evidence of defects in software systems. Testing is related to two processes:
• Verification
• Validation

Validation: Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements. It is the process of checking whether the expected functionality of a product has been achieved.

Verification: Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

Validation: Are we building the right product?
Verification: Are we building the product right?

Validation is normally done with execution-based testing. Execution-based testing checks the code by using test cases.

Verification is done with activities like Software inspection, Review of software deliverables.

Difference between Debugging and Testing

Debugging, or fault localization, is the process of (1) locating the fault or defect, (2) repairing the code, and (3) retesting the code. Software testing is not debugging: testing and debugging are two very different activities. The debugging process begins after testing has been carried out and the tester has noted that the software is not behaving as specified. Figure 1.1 shows the relationship between testing and debugging.
• Testing: to determine whether the code contains errors.
• Debugging: to locate the error and fix it.


Figure 1.1: Relationship between Testing and Debugging

In the testing domain, possible benefits of test process improvement are the following:
• smarter testers
• higher-quality software
• the ability to meet budget and scheduling goals
• improved planning
• the ability to meet quantifiable testing goals

*****

1.3. Testing Axioms

Software testing axioms:
1. It is impossible to test a program completely.
2. Software testing is a risk-based exercise.
3. Testing cannot show the absence of bugs.
4. The more bugs you find, the more bugs there are.
5. Not all bugs found will be fixed.
6. It is difficult to say when a bug is indeed a bug.
7. Specifications are never final.
8. Software testers are not the most popular members of a project.
9. Software testing is a disciplined and technical profession.

1. It is impossible to test a program completely
How many test cases would you need to exhaustively test PowerPoint, a calculator, MS Word, or any interesting software? The only way to be absolutely sure software works is to run it against all possible inputs and observe all of its outputs. The number of possible inputs is very large, the number of possible outputs is very large, and the number of paths through the software is very large. So it is impossible to test a program completely.
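To make the scale concrete, here is a small back-of-the-envelope sketch. The function signature and the testing rate are illustrative assumptions, not figures from this text: even a trivial function taking just two 32-bit integers already has 2^64 possible input pairs.

```python
# Input space of a hypothetical function taking two 32-bit integers.
pairs = 2 ** 32 * 2 ** 32            # 2**64 possible input combinations

# Assume an (optimistic) automated rig running one billion tests per second.
tests_per_second = 1_000_000_000
seconds_per_year = 60 * 60 * 24 * 365

years = pairs / tests_per_second / seconds_per_year
print(f"{pairs} input pairs, about {years:,.0f} years to test exhaustively")
```

Even at this unrealistic rate the run takes centuries, and that is before considering timing, state, and path coverage.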




2. Software testing is a risk-based exercise
If you do not test the software for all inputs (a wise choice), you take a risk. Hopefully you will skip a lot of inputs that work correctly. But what if you skip inputs that cause a fault? The risk includes financial loss, security breaches, and even loss of life, which is a lot of pressure for a tester.

If you try to test too much, the development cost becomes prohibitive. If you test too little, the probability of software failure increases, and software failures can cost us big time!

3. Testing cannot show the absence of bugs
"Program testing can be used to show the presence of bugs, but never to show their absence!" (Dijkstra). Testing can show that defects are present, but cannot prove that there are no defects. Even after thorough testing, we cannot say that the product is 100% defect free. While testing, bugs appear in groups: where you see one, you will likely find more. Some bugs are just the tip of the iceberg.

4. The more bugs you find, the more bugs there are
Boris Beizer coined the term "pesticide paradox" to describe the phenomenon that the more you test software, the more immune it becomes to your test cases. The remedial action is to continually write new and different tests to exercise different parts of the software.

5. Not all bugs found will be fixed
Why wouldn't you fix a bug you knew about?
• There's not enough time.
• It's not really a bug.
• Specifications can be wrong.
• It's just not worth it.

6. It is difficult to say when a bug is indeed a bug
Bugs that are undiscovered are called latent bugs. The following question arises during testing: if there is a problem in the software but no one ever discovers it, should it be considered a bug? This is a parody of "if a tree falls in the forest and no one hears it, does it really make a noise?"

7. Specifications are never finalBuilding a product based on a “moving target” specification is fairly unique to software development.


*****

1.4. Basic Definitions

Errors: An error is a mistake, misconception, or misunderstanding on the part of a software developer. In the category of developer we include software engineers, programmers, analysts, and testers. For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly.
• Error guessing has no rules for testing; it relies only on the tester's previous skills.
• In error guessing, testers think of situations in which the software will fail. For example: division by zero, or pressing the submit button on a form without filling in any entries.

Faults (Defects/Bugs): A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification.

Failures: A failure is the inability of a software system or component to perform its required functions within specified performance requirements. It manifests as unexpected output or behavior.

Test case: A test case in a practical sense is a test-related item which contains the following information:
1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human.
2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device.
3. Expected outputs. These are the specified results to be produced by the code under test.

Test: A test is a group of related test cases, or a group of related test cases and test procedures.

Test Oracle: A test oracle is a document or piece of software that allows testers to determine whether a test has been passed or failed. A program, or a document that produces or specifies the expected outcome of a test, can serve as an oracle.

Test Bed: A test bed is an environment that contains all the hardware and software needed to test a software component or a software system. This includes the entire testing environment, for example, simulators, memory checkers, hardware probes, software tools, and all other items needed to support execution of the tests.

Software Quality
1. Quality relates to the degree to which a system, system component, or process meets specified requirements.


2. Quality relates to the degree to which a system, system component, or process meets customer or user needs or expectations.

Quality Metric
• A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute. There are product and process metrics. A very commonly used example of a software product metric is software size, usually measured in lines of code (LOC). Two examples of commonly used process metrics are the cost and the time required for a given task.
• A quality metric is a special kind of metric: a quantitative measurement of the degree to which an item possesses a given quality attribute.

Some examples of quality attributes with brief explanations are the following:

• Correctness - the degree to which the system performs its intended function.
• Reliability - the degree to which the software is expected to perform its required functions under stated conditions for a stated period of time.
• Usability - relates to the degree of effort needed to learn, operate, prepare input for, and interpret output of the software.
• Integrity - relates to the system's ability to withstand both intentional and accidental attacks.
• Portability - relates to the ability of the software to be transferred from one environment to another.
• Maintainability - the effort needed to make changes in the software.
• Interoperability - the effort needed to link or couple one system to another.

Software Quality Assurance Group
• The software quality assurance (SQA) group in an organization has ties to quality issues. The group serves as the customers' representative and advocate. Their responsibility is to look after the customers' interests.
• The SQA group is a team of people with the necessary training and skills to ensure that all necessary actions are taken during the development process so that the resulting software conforms to established technical requirements.

Review
• A review is a group meeting whose purpose is to evaluate a software artifact or a set of software artifacts.
• The composition of a review group may consist of managers, clients, developers, testers, and other personnel, depending on the type of artifact under review.


Test harness

In software testing, a test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.
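A minimal sketch of these two parts in Python may help; the unit under test, the test data, and the names here are invented for illustration. The "script repository" is just a collection of test records with expected outputs (the oracle), and the execution engine runs each one and records pass or fail.

```python
# Hypothetical unit under test.
def add(a, b):
    return a + b

# Test script repository: each entry is (name, inputs, expected output).
repository = [
    ("adds positives", (2, 3), 5),
    ("adds negatives", (-2, -3), -5),
    ("zero identity",  (0, 7), 7),
]

# Test execution engine: run every script against the unit, compare the
# actual result with the expected output, and record pass/fail.
def execute(repo, unit):
    results = {}
    for name, inputs, expected in repo:
        actual = unit(*inputs)
        results[name] = "PASS" if actual == expected else "FAIL"
    return results

print(execute(repository, add))
```

Real harnesses add setup/teardown, logging, and reporting, but the structure (a repository of scripts plus an engine that runs them under varying conditions) is the same.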

Test script

A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.

There are various means for executing test scripts:

• Manual testing. Manually executed test scripts are more commonly called test cases.

• Automated testing. A short program written in a programming language is used to test part of the functionality of a software system. Such test scripts can be written either with a specialized automated functional GUI test tool (such as HP QuickTest Professional, Borland SilkTest, IBM TPNS, or Rational Robot) or in a well-known programming language (such as C++, C#, Tcl, Expect, Java, PHP, Perl, PowerShell, Python, or Ruby).

*****

1.5. Software Testing Principles

Testing principles are important to test specialists/engineers because they provide the foundation for developing testing knowledge and acquiring testing skills.

1. The goal of testing is to find defects before customers find them

Software testing helps in validating the software application or product against business and user requirements. It is very important to have good test coverage in order to test the software application completely and make sure that it performs well and as per the specifications. While determining the test coverage, the test cases should be designed with the maximum possibility of finding errors or bugs, and they should be very effective. This objective can be measured by the number of defects reported per test case: the higher the number of defects reported, the more effective the test cases are.

Once the delivery is made to the end users or customers, they should be able to operate it without any complaints. To make this happen, the tester should know how the customers are going to use the product and write test scenarios and design test cases accordingly. This will help a lot in fulfilling all the customer's requirements.

2. Exhaustive testing is not possible. Program testing can only show the presence of defects, never their absence


Yes, exhaustive testing is not possible, since covering all possible combinations of inputs and preconditions is not feasible. So instead of doing exhaustive testing, use risks and priorities to focus testing efforts. For example, suppose a password must satisfy these constraints:

PASSWORD - it should be a six-character code.
First character - it should be numeric; the remaining characters are alphanumeric.

Possible ways to form a password: first character, 10 ways (0-9); each remaining character, 62 ways (0-9, a-z, A-Z). Total = 10 × 62 to the power of 5 = 9,161,328,320 valid combinations. If it takes 10 seconds to execute one combination, executing the full set requires about 2,905 years. That is not at all possible, so execute only a subset of the tests, based on priority.
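The arithmetic above can be checked in a few lines of Python (the 10-seconds-per-test figure is the assumption stated in the example):

```python
first_char = 10                                # first character: numeric, 0-9
other_chars = 62                               # remaining five: 0-9, a-z, A-Z
combinations = first_char * other_chars ** 5   # all valid passwords

seconds = combinations * 10                    # 10 seconds per test, as assumed
years = seconds / (60 * 60 * 24 * 365)

print(combinations)   # 9,161,328,320 valid combinations
print(round(years))   # roughly 2,905 years
```

This is why risk and priority, not exhaustiveness, drive test selection even for a single six-character field.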

"Program testing can be used to show the presence of bugs, but never to show their absence!" Testing can show that defects are present, but cannot prove that there are no defects. Even after thorough testing, we cannot say that the product is 100% defect free.

3. A test in time - Testing applies all through the software life cycle and is not an end-of-cycle activity.

Defects in a product can come from any phase. If a wrong or incomplete requirement forms the basis for the design and development of a product, then that functionality can never be realized correctly in the eventual product. Similarly, a faulty design will also not meet the requirements. In figure 1.2, the defects in requirements are shaded in red; these red boxes are carried forward through the three subsequent stages: design, coding, and testing.

Figure 1.2: Defects in different phases of software life cycle


When this erroneous product reaches the customer after the testing phase, the customer may incur potential downtime that can result in loss of productivity or business. Note: the cost of building a product, and the number of defects in it, increase steeply with the number of defects allowed to seep into later phases.

4. Understand the reason behind the test - Testing is context dependent

If we carry out tests without understanding why we are running them, we will end up in running inappropriate tests that do not address what the product should do.

Not all developed software is identical. You might use different approaches, methodologies, techniques, and types of testing depending on the application type.

5. Test the test first - A defective test is more dangerous than a defective product
It is important to make sure that the tests themselves are not faulty before we start using them. Test cases/tests should be validated by an expert and the "known correct result" should be documented. By comparing this known correct result with the result produced by the product, the confidence level of the test and the product can be increased.

6. The Pesticide Paradox - Tests develop immunity and have to be revised constantly.

Repetitive use of the same pesticide mix to eradicate insects during farming will, over time, lead to the insects developing resistance to the pesticide, thereby making it ineffective. The same applies to software testing: if the same set of repetitive tests is conducted, the method will become useless for discovering new defects.

To overcome this, the test cases need to be regularly reviewed and revised, with new and different test cases added to help find more defects. Note: tests are like pesticides; constantly revise their composition to tackle new pests.

7. Defect Clustering

The defect clustering principle states that a small number of modules contain most of the defects detected. This is the application of the Pareto principle to software testing: approximately 80% of the problems are found in 20% of the modules.

By experience, you can identify such risky modules. But this approach has its own problem: if the same tests are repeated over and over again, eventually the same test cases will no longer find new bugs.

8. Testing encompasses defect prevention - Defect prevention strategies

Defect prevention is a crucial step or activity in any software development process and, as can be seen from figure 1.3, is pretty much half of our testing tasks:


Figure 1.3: Defect Prevention

The following are the defect prevention responsibilities for testers in each of the stages below:

1) Requirement Specification Review: After understanding the customer's requirements, prepare a gist of the requirements. A review is important at this step: the first level of review should be within the team, followed by another level of external review (by a developer, BA, or client) to make sure that all the perspectives are in sync.

2) Design Review: The design stage can be considered a strategy stage of sorts, and going through it will ensure that the QA team understands the pros and cons of each strategy. This kind of critical walkthrough will help unearth any problems with the said strategies and fix them before going further. It can be considered a feasibility study for the strategy (or strategies).

3) Code Review: There is not a lot for testers to directly get involved in during this phase, but the review goes on here too. Developers carry out code inspections, walkthroughs, and reviews before they unit and integration test the application.

9. Testing is a fine balance of defect prevention and defect detection


Fig 1.4: Balance between Defect Prevention and Defect Detection

"Prevention is better than cure"; the same principle applies to defects in the software development lifecycle as well.

It is well known that the cost of a defect rises significantly the later in the process it is discovered. Finding and fixing problems in the lab is about 80-100 times cheaper and 50 times faster than fixing a problem after the software has been released into the market.

Identifying and resolving a defect early in the software development lifecycle is undeniably going to improve quality, but it is even better when defects can be prevented in the first place. Preventing defects from occurring requires a structured, disciplined methodology, including gathering and analyzing data, conducting root cause analysis, determining and implementing corrective actions, and sharing the lessons learned between projects to avoid future defects.

Note: By preventing defects from happening, you minimize the number of interruptions and delays caused by the team having to find and fix these errors.

10. Intelligent and well planned automation is key to realizing the benefits of testing.

Manual testing is performed by a human sitting in front of a computer carefully executing the test steps. Automation testing means using an automation tool to execute a test case suite. Manual testing is time consuming, tedious, and requires heavy investment in human resources.

Test automation allows executing more tests in less time, increasing coverage and freeing human testers to do more high-level, exploratory testing. Automation is especially beneficial for test cases that are executed repeatedly.

11. Testing requires talented, committed people who believe in themselves and work in teams.

*****


1.6. The Tester's Role in a Software Development Organization

Testing is sometimes erroneously viewed as a destructive activity. The tester's job is to reveal defects, find weak points, inconsistent behavior, and circumstances where the software does not work as expected. It is difficult for developers to effectively test their own code. Developers view their own code as their creation, their "baby", and they think that nothing could possibly be wrong with it!

A tester requires extensive programming experience in order to understand how code is constructed, where defects are likely to occur, and what kind of defects they are likely to be.

Projects should have an appropriate developer/tester ratio. The ratio will vary depending on available resources and the type of project. Testers need to work alongside requirements engineers to ensure that requirements are testable, and to plan for system and acceptance testing.

Testers also need to work with designers to plan for integration and unit test.

In addition, test managers will need to cooperate with project managers in order to develop reasonable test plans, and with upper management to provide input for the development and maintenance of organizational testing standards, policies, and goals.

Finally, testers also need to cooperate with software quality assurance staff and software engineering process group members.

In view of these requirements for multiple working relationships, communication and team working skills are necessary for a successful career as a tester.

*****

1.7. Origins of Defects

Software engineers work very hard to produce high-quality software with a low number of defects. The following figure 1.5 shows the causes for defects.


(Figure content: defect sources - lack of education, poor communication, oversight, transcription, immature process - impact software artifacts as errors, faults, and failures, which from the user's view result in poor-quality software and user dissatisfaction.)


Figure 1.5: Causes for defects.

Education

• The software engineer did not have the proper educational background to prepare the software artifact.

• She/he did not understand how to do something.

• For example, a software engineer who did not understand the precedence order of operators in a particular programming language could inject a defect in an equation that uses the operators for a calculation.

Communication

• The software engineer was not informed about something by a colleague.

• For example, if engineer 1 and engineer 2 are working on interfacing modules, and engineer 1 does not inform engineer 2 that no error-checking code will appear in the interfacing module he is developing, engineer 2 might make an incorrect assumption about the presence or absence of an error check, and a defect will result.


Oversight

• The software engineer omitted to do something. For example, a software engineer might omit an initialization statement.

Transcription:

The software engineer knows what to do, but makes a mistake in doing it.

• A simple example is a variable name being misspelled when entering the code.

Process

The process used by the software engineer misdirected her actions.

• For example, a development process that did not allow sufficient time for a detailed specification to be developed and reviewed could lead to specification defects.

• When defects are present due to one or more of these circumstances, the software may fail, and the impact on the user ranges from a minor inconvenience to rendering the software unfit for use.

The tester's goal is to discover these defects, preferably before the software is in operation. Testers, like doctors, need to have knowledge about possible defects in order to develop defect hypotheses.

They use the hypotheses to:

• design test cases;
• design test procedures;
• assemble test sets;
• select the testing levels (unit, integration, etc.) appropriate for the tests;
• evaluate the results of the tests.

*****

1.8. Defect Classes – The Defect Repository and Test Design – Defect Examples

Defects can be classified in many ways. The defect types and their frequency of occurrence should be used to guide test planning and test design. Execution-based testing strategies should be selected that have the strongest possibility of detecting particular types of defects.

It is important that tests for new and modified software be designed to detect the most frequently occurring defects. Figure 1.6 shows the defect classes associated with the different phases of the SDLC model.


(Figure content: a central Defect Repository records defect classes, severity, and occurrences, fed by analysis of four groups of defect classes:
• Requirement/Specification defect classes: functional description, feature, feature interaction, interface description.
• Design defect classes: algorithmic and processing; control, logic and sequence; data; module interface description; external interface description.
• Coding defect classes: algorithmic and processing; control, logic and sequence; data; module interface; code documentation; external hardware and software.
• Testing defect classes: test harness, test design, test procedure.)


Figure 1.6: Defects in different phases of SDLC model

1. Requirements and Specification Defects

The beginning of the software life cycle is critical for ensuring high quality in the software being developed. Defects injected in early phases can persist and be very difficult to remove in later phases. Since many requirements documents are written using a natural-language representation, there are very often occurrences of ambiguous, contradictory, unclear, redundant, and imprecise requirements.

Some specific requirements/specification defects are:

1. Functional Description Defects


The overall description of what the product does, and how it should behave (inputs/outputs), is incorrect, ambiguous, and/or incomplete.

Example 1:

Is this User Requirement Clear and Concise?

Bad Example: All screens must appear on the monitor quickly.

How long is quickly?

Good Example: When the user accesses any screen, it must appear on the monitor within two seconds.

Example 2:

Bad Example: On loss of power, the battery backup must support normal operations.

For how long?

Good Example: On loss of power, the battery backup must support normal operations for 20 minutes.


2. Feature Defects

Features may be described as distinguishing characteristics of a software component or system. Features refer to functional aspects of the software that map to functional requirements as described by the users and clients.

• Feature defects are due to feature descriptions that are missing, incorrect, incomplete.

3. Feature Interaction Defects

• These are due to an incorrect description of how the features should interact.
• For example, suppose one feature of a software system supports adding a new customer to a customer database. This feature interacts with another feature that categorizes the new customer.

4. Interface Description Defects

• These are defects that occur in the description of how the target software is to interface with external software, hardware, and users.

• For detecting many functional description defects, black box testing techniques, which are based on functional specifications of the software, offer the best approach.

• Random testing and error guessing are also useful for detecting these types of defects.

2. Design Defects

• Design defects occur when interactions between system components, interactions between the components and outside software/hardware, or interactions with users are incorrectly designed.
• When describing these defects we assume that the detailed design description for the software modules is at the pseudo-code level, with processing steps, data structures, input/output parameters, and major control structures defined.

1. Algorithmic and Processing Defects

• These occur when the processing steps in the algorithm as described by the pseudo code are incorrect.

• For example, the pseudo code may contain a calculation that is incorrectly specified, or the processing steps in the algorithm written in the pseudo code language may not be in the correct order. In the latter case a step may be missing or a step may be duplicated.

• Another example of a defect in this subclass is the omission of error condition checks such as division by zero.

Example: Algorithmic error

Step 1: Get two variables a, b


Step 2: Find the average using the formula a+b/2.

The correct algorithm is:
Step 1: Get two variables a, b
Step 2: Find the average using the formula (a+b)/2.
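The same defect is easy to reproduce in code, since division binds more tightly than addition in most languages:

```python
a, b = 4, 6

wrong = a + b / 2     # evaluates as a + (b / 2): an algorithmic defect
right = (a + b) / 2   # parentheses force the intended average

print(wrong, right)   # 7.0 5.0
```

A test case with the oracle "average of 4 and 6 is 5" catches the defect immediately; a test using a = b (where both formulas agree) would not.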

2. Control, Logic, and Sequence Defects

• Control defects occur when logic flow in the pseudo code is not correct.

• For example, branching too soon, branching too late, or use of an incorrect branching condition.

• Other examples in this subclass are unreachable pseudo code elements, improper nesting, improper procedure or function calls.

• Logic defects usually relate to incorrect use of logic operators, such as less than, greater than, etc. These may be used incorrectly in a Boolean expression controlling a branching instruction.

Example:

if ( money < 5 );
do this;
end;

The error is the semicolon at the end of the if statement: it closes the if with an empty statement, so "do this" always executes, regardless of the condition.

3. Data Defects

• These are associated with incorrect design of data structures.
• For example, a record may be lacking a field, an incorrect type is assigned to a variable or a field in a record, an array may not have the proper number of elements assigned, or storage space may be allocated incorrectly.
• Software reviews and use of a data dictionary work well to reveal these types of defects.

Example:

#include <stdio.h>

int main()
{
    int array[0], minimum, size, c, location = 1;

    printf("Enter the number of elements in array\n");
    scanf("%d", &size);

    printf("Enter %d integers\n", size);
    for (c = 0; c < size; c++)
        scanf("%d", &array[c]);

    minimum = array[0];
    for (c = 1; c < size; c++) {
        if (array[c] < minimum) {
            minimum = array[c];
            location = c + 1;
        }
    }
    printf("Minimum element is present at location %d and its value is %d.\n", location, minimum);
    return 0;
}

The defect is the declaration "int array[0], minimum, size, c, location = 1;": the array is declared with zero elements, so every store into it writes out of bounds. It should be declared with enough space for the input, for example int array[100].
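The same class of data defect can be sketched in Python, where the undersized buffer fails loudly with an exception instead of silently corrupting memory (the function names here are illustrative):

```python
def find_minimum_buggy(values):
    array = [0] * 0                  # data defect: buffer declared with zero elements
    for c, v in enumerate(values):
        array[c] = v                 # raises IndexError on the first store
    return min(array)

def find_minimum_fixed(values):
    array = [0] * len(values)        # buffer sized to the data it must hold
    for c, v in enumerate(values):
        array[c] = v
    minimum, location = array[0], 1  # track value and 1-based position
    for c in range(1, len(array)):
        if array[c] < minimum:
            minimum, location = array[c], c + 1
    return minimum, location

print(find_minimum_fixed([9, 3, 7]))   # minimum 3 at location 2
```

This is one reason the defect is more dangerous in C: the out-of-bounds writes may appear to work during testing and only fail later.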

4. Module Interface Description Defects

• These are defects derived from, for example, using incorrect and/or inconsistent parameter types, an incorrect number of parameters, or an incorrect ordering of parameters.

Example:

Step 1: Get two variables a, b
Step 2: Find the average using the formula a+b/2.

The error is b/2; it should be (a+b)/2.

5. Functional Description Defects

• The defects in this category include incorrect, missing, and/or unclear design elements.
• For example, the design may not properly describe the correct functionality of a module.
• These defects are best detected during a design review.

Example: The submit button is activated even though the data fields have not been filled in.

6. External Interface Description Defects

• These are derived from incorrect design descriptions for interfaces with components, external software systems, databases, and hardware devices (e.g., I/O devices).
• Other examples are user interface description defects where there are missing or improper commands, improper sequences of commands, lack of proper messages, and/or lack of feedback messages for the user.

Example:

Engineer: Sir, what about the colour? (Red)

Customer: OK.

Engineer: Sir, what about the model?

Customer: Yes, the latest model.

Engineer: It saves petrol; fuel consumption is very low.


Customer: Is it so?

Engineer: Please give feedback about the car, sir.

Customer: Sir, I need a vehicle with two wheels.

I specified that clearly in my requirements.

Engineer:?????????????

3. CODING DEFECTS

• Coding defects are derived from errors in implementing the code.

• Coding defect classes are closely related to design defect classes, especially if pseudo code has been used for detailed design.

• Some coding defects come from a failure to understand programming language constructs, or from miscommunication with the designers.

1. Algorithmic and Processing Defects

As levels of programming detail are added to the design, code-related algorithmic and processing defects now include unchecked overflow and underflow conditions, comparisons of inappropriate data types, incorrect conversions from one data type to another, incorrect ordering of arithmetic operators (perhaps due to a misunderstanding of operator precedence), misuse or omission of parentheses, and incorrect use of signs.

2. Control, Logic and Sequence Defects

On the coding level these would include incorrect expression of case statements, incorrect iteration of loops (loop boundary problems), and missing paths.

3. Typographical Defects

These are principally syntax errors, for example, the incorrect spelling of a variable name, which are usually detected by a compiler, self-reviews, or peer reviews.

4 . Initialization Defects

• These occur when initialization statements are omitted or are incorrect.

• This may occur because of misunderstandings or lack of communication between programmers and/or between programmers and designers, carelessness, or a misunderstanding of the programming environment.


5. Data-Flow Defects

• There are certain reasonable operational sequences that data should flow through.

• For example, a variable should be initialized before it is used in a calculation or a condition, and it should not be initialized twice without an intermediate use.

• A variable should not be discarded before it is used.

6. Data Defects

• These are indicated by incorrect implementation of data structures.

• For example, the programmer may omit a field in a record, an incorrect type or access may be assigned to a file, or an array may not be allocated the proper number of elements.

• Other data defects include flags, indices, and constants set incorrectly.

7. Module Interface Defects

• As in the case of module design elements, interface defects in the code may be due to using incorrect or inconsistent parameter types, an incorrect number of parameters, or improper ordering of the parameters.

• In addition to defects due to improper design, and improper implementation of design, programmers may implement an incorrect sequence of calls or calls to nonexistent modules.

8. Code Documentation Defects

• When the code documentation does not reflect what the program actually does, or is incomplete or ambiguous, this is called a code documentation defect.

• Incomplete, unclear, incorrect, and out-of-date code documentation affects testing efforts.

• Testers may be misled by documentation defects and thus reuse improper tests or design new tests that are not appropriate for the code.

• Code reviews are the best tools to detect these types of defects.

9. External Hardware, Software Interfaces Defects

These defects arise from problems related to system calls, links to databases, input/output sequences, memory usage, resource usage, interrupts and exception handling, data exchanges with hardware, protocols, formats, interfaces with build files, and timing sequences (race conditions may result).

4. Testing Defects

Defects are not confined to code and its related artifacts. Test plans, test cases, test harnesses, and test procedures can also contain defects.

Defects in test plans are best detected using review techniques.

1. Test Harness Defects

• In order to test software, especially at the unit and integration levels, auxiliary code must be developed. This is called the test harness, or scaffolding code.
• The test harness code should be carefully designed, implemented, and tested, since it is a work product and much of this code can be reused when new releases of the software are developed.

2. Test Case Design and Test Procedure Defects

• These would encompass incorrect, incomplete, missing, or inappropriate test cases and test procedures. These defects are best detected in test plan reviews.

• Sometimes the defects are revealed during the testing process itself by means of a careful analysis of test conditions and test results. Repairs will then have to be made.

*****

1.9 TMM Levels

Each maturity level of the TMM has an internal structure consisting of:

• A set of maturity goals. The maturity goals identify testing improvement goals that must be addressed in order to achieve maturity at that level. To be placed at a level, an organization must satisfy the maturity goals at that level. The TMM levels and associated maturity goals are shown in Figure 1.7.

• Supporting maturity subgoals. They define the scope, boundaries and needed accomplishments for a particular level.

• Activities, tasks and responsibilities (ATRs). The ATRs address implementation and organizational adaptation issues at each TMM level.


Level 5: Optimization / Defect Prevention and Quality Control
• Test process optimization
• Quality control
• Application of process data for defect prevention

Level 4: Management and Measurement
• Software quality evaluation
• Establish a test measurement program
• Establish an organization-wide review program

Level 3: Integration
• Control and monitoring of the testing process
• Integrate testing into the software life cycle
• Establish a technical training program
• Establish a software test organization

Level 2: Phase Definition
• Institutionalize basic testing techniques and methods
• Initiate a test planning process
• Develop testing and debugging goals

Level 1: Initial


Internal structure of TMM maturity levels

Figure 1.7: TMM levels and associated maturity goals
