Why Testing



  • 8/8/2019 Why Testing

    1/16

    Why Testing?

    * To unearth and correct defects.
    * To detect defects early and to reduce the cost of defect fixing.

    * To ensure that the product works as the user expects it to.
    * To avoid the user detecting problems.

    Test Life Cycle

    * Identify Test Candidates

    * Test Plan
    * Design Test Cases
    * Execute Tests
    * Evaluate Results
    * Document Test Results

    * Causal Analysis / Preparation of Validation Reports
    * Regression Testing / Follow-up on reported bugs.

    Testing Techniques

    * Black Box Testing
    * White Box Testing
    * Regression Testing
    * These principles & techniques can be applied to any type of testing.

    1. Black Box Testing

    * Testing of a function without knowing the internal structure of the program.

    2. White Box Testing

    * Testing of a function with knowledge of the internal structure of the program.

    3. Regression Testing

    * To ensure that code changes have not had an adverse effect on other modules or on existing functions.

    Functional Testing

    * Study the SRS
    * Identify unit functions
    * For each unit function, take each input function
    * Identify equivalence classes
    * Form test cases
    * Form test cases for boundary values
    * Form test cases for error guessing
    * Form a unit function vs. test cases cross-reference matrix
    * Find the coverage
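    The equivalence-class and boundary-value steps above can be sketched in Python. This is a minimal illustration only; the age range and the `is_valid_age` function are hypothetical examples, not from the text:

```python
# Equivalence-class and boundary-value test design for a hypothetical
# unit function that validates an age field accepting 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# One representative input per equivalence class:
#   below range (invalid), in range (valid), above range (invalid)
equivalence_cases = [(10, False), (40, True), (90, False)]

# Boundary values: just outside, on, and just inside each edge.
boundary_cases = [(17, False), (18, True), (19, True),
                  (64, True), (65, True), (66, False)]

for age, expected in equivalence_cases + boundary_cases:
    assert is_valid_age(age) == expected, age
print("all equivalence and boundary cases passed")
```

    Each (input, expected) pair here would become one row of the cross-reference matrix mentioned above.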


    Unit Testing:

    * The most 'micro' scale of testing, used to test particular functions or code modules. Typically done by the programmer and not by testers.
    * Unit - the smallest testable piece of software.

    * A unit can be compiled, assembled, linked, and loaded, and put under a test harness.
    * Unit testing is done to try to show that the unit does not satisfy its functional specification and/or that its implemented structure does not match the intended design structure.
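    A minimal sketch of a unit put under a test harness, using Python's `unittest` as the harness; the `word_count` function is a hypothetical example of a "smallest testable piece":

```python
import unittest

# Unit under test: the smallest testable piece, here a single
# function (hypothetical example).
def word_count(text):
    return len(text.split())

# Test harness: unittest drives the unit in isolation and reports
# whether it satisfies its functional specification.
class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit testing finds defects early"), 5)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed:", result.wasSuccessful())
```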

    Integration Testing:

    * Integration is a systematic approach to building the complete software structure specified in the design from unit-tested modules. Integration can be performed in two ways, called Pre-test and Pro-test.
    * Pre-test: the testing performed in the module development area. A Pre-test is required only if development is done in a module development area.

    Alpha testing:

    * Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

    Beta testing:

    * Testing when development and testing are essentially complete and final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers.

    System Testing:

    * A system is the big component.
    * System testing is aimed at revealing bugs that cannot be attributed to a component as such, but to inconsistencies between components or to planned interactions between components.
    * Concern: issues and behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).

    Volume Testing:

    * The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries, such as across servers and across disk partitions on one server.

    Stress testing:


    * This refers to testing system functionality while the system is under unusually heavy or peak load; it's similar to the validation testing mentioned previously but is carried out in a "high-stress" environment. This requires that you make some predictions about the expected load levels of your Web site.

    Usability testing:

    * Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors, and offer a high degree of satisfaction for the user. Usability means bringing the usage perspective into focus: the side towards the user.

    Security testing:

    * If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site's overall protection against unauthorized internal or external access.

    Test Plan:

    * A Test Plan is a detailed project plan for testing, covering the scope of testing, the methodology to be used, the tasks to be performed, resources, schedules, risks, and dependencies. A Test Plan is developed prior to the implementation of a project to provide a well-defined and understood project roadmap.

    Test Specification:

    * A Test Specification defines exactly what tests will be performed and what their scope and objectives will be. A Test Specification is produced as the first step in implementing a Test Plan, prior to the onset of manual testing and/or automated test suite development. It provides a repeatable, comprehensive definition of a testing campaign.

    Testing Procedure

    What steps are needed to develop and run software tests? The following are some of the steps to consider:

    * Obtain requirements, functional design, internal design specifications, and other necessary documents.

    * Obtain budget and schedule requirements.
    * Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.)

    * Identify the application's higher-risk aspects, set priorities, and determine the scope and limitations of tests.
    * Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

    * Determine test environment requirements (hardware, software, communications, etc.)

    * Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

    * Determine test input data requirements

    * Identify tasks, those responsible for tasks, and labor requirements

    * Set schedule estimates, timelines, milestones

    * Determine input equivalence classes, boundary value analyses, error classes

    * Prepare test plan document and have needed reviews/approvals

    * Write test cases

    * Have needed reviews/inspections/approvals of test cases

    * Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data

    * Obtain and install software releases

    * Perform tests

    * Evaluate and report results

    * Track problems/bugs and fixes

    * Retest as needed

    * Maintain and update test plans, test cases, test environment, and testware through the life cycle

    Bug Tracking

    What's a 'test case'?

    * A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

    * Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
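    The test-case particulars listed above could be captured as a structured record. A Python sketch follows; the field names mirror the text, while the sample values are hypothetical:

```python
from dataclasses import dataclass, field

# Sketch of the test-case particulars listed above as a structured
# record; field names follow the text, the values are hypothetical.
@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: str
    steps: list = field(default_factory=list)
    expected_result: str = ""

tc = TestCase(
    identifier="TC-042",
    name="Login with valid credentials",
    objective="Verify a registered user can sign in",
    setup="User 'demo' exists with password 'secret'",
    input_data="username=demo, password=secret",
    steps=["Open the login page", "Enter credentials", "Press Sign in"],
    expected_result="User lands on the dashboard",
)
print(tc.identifier, "-", tc.name)
```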

    What should be done after a bug is found?

    * The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

    * Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

    * Bug identifier (number, ID, etc.)

    * Current bug status (e.g., 'Released for Retest', 'New', etc.)

    * The application name or identifier and version

    * The function, module, feature, object, screen, etc. where the bug occurred

    * Environment specifics, system, platform, relevant hardware specifics

    * Test case name/number/identifier

    * One-line bug description


    * Full bug description

    * Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

    * Names and/or descriptions of file/data/messages/etc. used in test

    * File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

    * Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

    * Was the bug reproducible?

    * Tester name

    * Test date

    * Bug reporting date

    * Name of developer/group/organization the problem is assigned to

    * Description of problem cause

    * Description of fix

    * Code section/file/module/class/method that was fixed

    * Date of fix

    * Application version that contains the fix

    * Tester responsible for retest

    * Retest date

    * Retest results


    * Regression testing requirements

    * Tester responsible for regression tests

    * Regression testing results
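    The tracking items above can be gathered into a single bug record. A Python sketch follows; the field names and sample values are hypothetical and not tied to any particular tracking tool:

```python
from dataclasses import dataclass

# Sketch of a bug record carrying a subset of the tracking items
# listed above; names and values are hypothetical examples.
@dataclass
class BugReport:
    bug_id: str
    status: str          # e.g. 'New', 'Released for Retest'
    application: str
    version: str
    module: str
    environment: str
    summary: str         # one-line bug description
    severity: int        # 1 (critical) .. 5 (low)
    reproducible: bool
    reported_by: str

bug = BugReport(
    bug_id="BUG-1307", status="New", application="Payroll", version="2.4",
    module="Tax calculation", environment="Windows Server, SQL backend",
    summary="Year-end report rounds totals incorrectly",
    severity=2, reproducible=True, reported_by="tester-a",
)
print(bug.bug_id, bug.status, "severity", bug.severity)
```

    A real tracker would add the remaining fields (fix description, retest results, regression results) as the record moves through its states.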

    * A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

    Why does software have bugs?

    * Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

    * Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

    * Programming errors - programmers, like anyone else, can make mistakes.
    * Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway.
    * Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

    Software Testing FAQ

    * Q1. What is verification?

    * Q2. What is validation?

    * Q3. What is a walk-through?

    * Q4. What is an inspection?


    * Q5. What is quality?

    * Q6. What is good code?

    * Q7. What is good design?

    * Q8. What is software life cycle?

    * Q9. Why are there so many software bugs?

    * Q10. How do you introduce a new software QA process?

    * Q11. Give me five common problems during development.

    * Q12. Give me five solutions to problems during development.

    * Q13. Do automated testing tools make testing easier?

    * Q14. What makes a good test engineer?

    * Q15. What makes a good QA engineer?

    * Q16. What makes a good resume?

    * Q17. What makes a good QA/Test Manager?

    * Q18. What is the role of documentation in QA?

    * Q19. What about requirements?

    * Q20. What is a test plan?

    * Q21. What is a test case?

    * Q22. What should be done after a bug is found?

    * Q23. What is configuration management?

    * Q24. What if the software is so buggy it can't be tested at all?

    * Q25. How do you know when to stop testing?

    * Q26. What if there isn't enough time for thorough testing?


    * Q27. What if the project isn't big enough for extensive testing?

    * Q28. What if requirements are changing continuously?

    * Q29. What about functionality that wasn't in the requirements?

    * Q30. How can Software QA processes be implemented?

    * Q31. What if an organization is growing so fast...?

    * Q32. How is testing affected by object-oriented designs?

    * Q33. Why do we test during the design phase?

    * Q34. What is software quality assurance?

    * Q35. What is quality assurance?

    * Q36. Process and procedures - why follow them?

    * Q37. Standards and templates - what is in a document?

    * Q38. What are the different levels of testing?

    * Q39. What is black box testing?

    * Q40. What is white box testing?

    * Q41. What is unit testing?

    * Q42. What is parallel/audit testing?

    * Q43. What is functional testing?

    * Q44. What is usability testing?

    * Q45. What is incremental integration testing?

    * Q46. What is integration testing?

    * Q47. What is system testing?


    * Q48. What is end-to-end testing?

    * Q49. What is regression testing?

    * Q50. What is sanity testing?

    * Q51. What is performance testing?

    * Q52. What is load testing?

    * Q53. What is installation testing?

    * Q54. What is security/penetration testing?

    * Q55. What is recovery/error testing?

    * Q56. What is compatibility testing?

    * Q57. What is comparison testing?

    * Q58. What is acceptance testing?

    * Q59. What is alpha testing?

    * Q60. What is beta testing?

    * Q61. What roles are standard on testing projects?

    * Q62. What is a Test/QA Team Lead?

    * Q63. What is a Test Engineer?

    * Q64. What is a Test Build Manager?

    * Q65. What is a System Administrator?

    * Q66. What is a Database Administrator?

    * Q67. What is a Technical Analyst?

    * Q68. What is a Test Configuration Manager?


    * Q217. Tell me about TestDirector?

    * Q218. What is structural testing?

    * Q219. How can I improve my career in software testing...?

    * Q220. What is the difference between static and dynamic...?

    * Q221. What testing tools should I use?

    * Q222. Why should I use static testing techniques?

    Software Testing Frequently Asked Questions

    1. What is 'Software Quality Assurance'?

    2. What is 'Software Testing'?

    3. What are some recent major computer system failures caused by software bugs?

    4. Why is it often hard for management to get serious about quality assurance?

    5. Why does software have bugs?

    6. How can new Software QA processes be introduced in an existing organization?

    7. What is verification? Validation?

    8. What is a 'walkthrough'?

    9. What's an 'inspection'?

    10. What kinds of testing should be considered?

    11. What are 5 common problems in the software development process?

    12. What are 5 common solutions to software development problems?

    13. What is software 'quality'?

    14. What is 'good code'?

    15. What is 'good design'?

    16. What is SEI? CMM? CMMI? ISO? Will it help?

    17. What is the 'software life cycle'?

    18. Will automated testing tools make testing easier?


    1. What is 'Software Quality Assurance'?

    Software QA involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the Books section for a list of useful books on Software Quality Assurance.)

    2. What is 'Software Testing'?

    Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
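    The 'controlled conditions' idea above - exercising both normal and abnormal conditions, and trying to make things go wrong - can be sketched in Python. The `parse_port` function is a hypothetical example:

```python
# Sketch of 'controlled conditions': exercise a hypothetical parser
# under a normal input (D should happen) and abnormal inputs
# (things should NOT silently succeed when they shouldn't).
def parse_port(text):
    value = int(text)          # non-numeric input raises ValueError
    if not 1 <= value <= 65535:
        raise ValueError("port out of range")
    return value

# Normal condition: valid input produces the expected result.
assert parse_port("8080") == 8080

# Abnormal conditions: invalid inputs must fail loudly, not pass.
for bad in ["-1", "70000", "http"]:
    try:
        parse_port(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
print("normal and abnormal conditions verified")
```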

    Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

    3. What are some recent major computer system failures caused by software bugs?

    * Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. Studies were under way to determine which, if any, portions of the project could be salvaged.

    * In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing, and the system was never tested for its ability to handle a rate increase.

    * Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank's customers, and that the total cost of the incident could exceed $100 million.

    * A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.

    * According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.

    * In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980's one nation surreptitiously allowed a hostile nation's espionage service to steal a version of sophisticated industrial software that had intentionally added flaws.

    4. Why is it often hard for management to get serious about quality assurance?

    * Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: in ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord.

    5. Why does software have bugs?

    * Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

    * Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.

    * Programming errors - programmers, like anyone else, can make mistakes.

    * Changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ. Also see information about 'agile' approaches such as XP, also in Part 2 of the FAQ.

    * Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

    * Egos - people prefer to say things like:

      * 'no problem'
      * 'piece of cake'
      * 'I can whip that out in a few hours'
      * 'it should be easy to update that old code'

      instead of:

      * 'that adds a lot of complexity and we could end up making a lot of mistakes'
      * 'we have no idea if we can do that; we'll wing it'
      * 'I can't estimate how long it will take, until I take a close look at it'
      * 'we can't figure out what that old spaghetti code did in the first place'

      If there are too many unrealistic 'no problem's', the result is bugs.

    * Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

    * Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

    6. How can new Software QA processes be introduced in an existing organization?

    * A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.

    * Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.

    * For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.

    * The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation (or, in 'agile'-type environments, extensive continuous coordination with end-users), (b) design inspections and code inspections, and (c) post-mortems/retrospectives.

    7. What is verification? Validation?

    * Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

    8. What is a 'walkthrough'?

    * A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

    9. What's an 'inspection'?

    * An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report.

    10. What kinds of testing should be considered?

    * Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
    * White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.

    * Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

    * Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
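    The white-box idea from the list above can be sketched in Python: tests are chosen from the code's internal branches so that each branch executes at least once. The `classify` function is a hypothetical example:

```python
# White-box sketch: tests derived from the internal branch structure
# of this hypothetical function, one test per branch.
def classify(n):
    if n < 0:
        return "negative"       # branch 1
    elif n == 0:
        return "zero"           # branch 2
    return "positive"           # branch 3

# One test per branch gives full branch coverage of classify().
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
print("all branches covered")
```

    A black-box tester, by contrast, would pick these inputs from the specification (negative, zero, positive numbers) without looking at the `if`/`elif` structure at all.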