
SDLC Testing Methodology



Software Development Life Cycle (SDLC)

Waterfall Model

    1. User Requirements: The user group identifies and documents the need for an IT solution including its intended use and high level user requirements.

2. Requirements Definition & Planning: COTS products are evaluated and gap analyses produced. Vendor evaluations and contracts are completed through the appropriate IT supplier evaluation process and the ARC contracting process. Feature-based requirements are written for COTS product evaluations. More detailed software requirements are written for software to be developed. If necessary, a Hazard Analysis is performed. The Risk Management Plan is reviewed.

3. Design: Software Requirements Specification requirements are analyzed to arrive at a design for the proposed system. The design includes detailed functional and technical specifications that are sufficient to support development and testing in later phases. The Development Plan, including test planning (unit and integration) for the Development Phase, is completed.

4. Development: Code is written and software components are assembled to create the system. Unit Test Cases & Procedures and Integration Test Cases & Procedures are developed and executed.


5. System Testing: The completed system is independently tested to verify that the system meets the Software Requirements Specification requirements. Beta testing is performed if applicable.

6. Configuration Design: Product configurations are determined based on the user requirements and user input. Conference room pilots and prototyping are frequently used tools in this phase.

7. Acceptance: Users perform acceptance activities that confirm that the system, used in conjunction with operational procedures, meets the documented user requirements. These activities range from full site validation for medical devices to less formal user acceptance testing for an unregulated system. A separate activity that may occur in this phase is integration of the completed system with other interfacing systems, including End to End or Multiple Systems Integration Testing.

8. Deployment: Sites that will receive the system are prepared. Activities may include infrastructure installation, end user training, modifications of operational procedures, preparation of data for migration, etc.

9. Cutover: During this phase, a site discontinues use of a predecessor system, and activities required prior to initiating use of the new system (such as data migration) are performed.

10. Maintenance: Based on information from the field and/or sponsor, problem reports and enhancement requests are analyzed; this may result in ad hoc query development, production data changes, or the initiation of urgent releases. Release planning is performed, which may result in the initiation of another trip through the life cycle. Product retirement also occurs in this phase.

    Project Reviews

    The CMMS 3.2.0 release builds upon an application that is in production use and is in the maintenance phase of its software development life cycle (SDLC). The following reviews will be performed to support this plan.

    1. User Requirements Review (URR): This review will be held to obtain formal agreement from the sponsor that the user requirements documented in the User Requirements Specifications (URS) are correct and complete, and obtain management authorization to begin the design phase.

    2. Software Requirements Review (SRR):

This review will be held to obtain formal agreement between the sponsor and developers that the software requirements, documented in the Software Requirements Specification (SRS), are correct and complete; to agree that the report specifications are complete and have been delivered to IT by the sponsor; and to obtain management authorization to begin the Design Phase.

    3. Critical Design Review (CDR):

At the end of the Design Phase, this review is held to ensure that the requirements and design are compatible, that the detailed specifications are complete, and that the design is complete, correct, and operationally feasible.


    4. Test Readiness Review (TRR)

The purpose of this review is to ensure that the unit test results are satisfactory, that critical areas of the software are free of errors before system tests are conducted, that the system test protocols are ready for execution and provide adequate testing and verification of the software, hardware, and procedures, and that the system test environment is properly defined and ready for system testing to commence.

    5. Acceptance Readiness Review (ARR)

    At the end of the System Testing Phase, the Project Team conducts the ARR. The objective of the ARR is to obtain formal agreement among the reviewers that the release is ready for full site validation or other post-development testing. The system test analysis reports must be approved and the specification and design documents must be baselined.

    6. ARR participants ensure that:

    The system test results are satisfactory. Known problems remaining in the release must not present a hazard and must be acceptable to the sponsor.

    The full site validation is ready for execution and provides adequate validation that the system meets the user requirements when used in accordance with the documented procedures

    7. Production Readiness Review (PRR) The objective of the PRR is to ensure that all development activities have been successfully accomplished and documented before a release is approved for production. Known problems remaining in the release must not present a hazard and must be acceptable to the sponsor. The test analysis reports must be approved and the specification and design documents must be baselined. The release documentation must be complete and approved. The goals of this review include the confirmation that the sponsor understands the status of the system and finds it acceptable to proceed with delivery.

    8. Live Readiness Review (LRR)

The goal of the LRR is for the Project Team, the receiving organization, and the IT quality assurance group to recommend to senior management that the site is either prepared to go into production or that a delay is required. The LRR checklist, which documents the completeness of the site cutover activities, is used by the reviewers to make this recommendation. The project's implementation plan will provide the specific LRR Checklist to be used. The production implementation site will be the current production site, which is a computer operations center under BIS access control.

9. Members of the Integrated Product Team (IPT): Project Management (PM), Requirements Analysis (RA), Software Engineering (SE), and Test Engineering (TE). The IPT is also supported by the staff who support systems development, listed below: IT Configuration Management (ITCM); staff who assist in deployment of IT systems and who coordinate hardware and infrastructure support for IT projects; designated IT RC staff; and designated IT QA staff.


10. Integrated Product Team (IPT): IT system developers from RA, SE, and TE form an Integrated Product Team (IPT) for each IT system. Members of this team analyze potential problem cases and open an SPR when a defect is confirmed. RA typically takes the lead in this activity, with SE and TE supporting.

11. Enhancement Requests (ER): An Enhancement Request (ER) is a request to change the functionality of the system. The request would outline a change to an existing system requirement or add a new requirement to an existing system. Sources of ERs are product users and user groups, various sponsors, IT personnel, and changes in regulatory requirements and industry. ERs will be entered into the Configuration Management Tool Integration (CMTI) database. The responsible product manager will analyze each ER and present it to the sponsor and IPT, who determine the status and sponsor priority. ERs approved for a release will be associated with a Change Request (CR), and the appropriate CMTI records will be updated to reflect this association. The CR is the official record of change and will proceed through the development life cycle for formal release.

12. Defect Analysis and Tracking: System Problem Reports (SPRs) are systematically analyzed and tracked for every system that IT maintains. During production, users may discover issues with the operation of a system (i.e., instances where the system does not seem to perform as intended). These are reported to User Support and evaluated by the IPT. IT staff may also discover issues during development, testing, or production use of any system. When the IPT confirms that a defect does indeed exist, an SPR is generated.

13. Product Review Board: IT senior management forms a PRB that meets regularly to review all new SPRs. Members of the IPTs present the SPRs, the analysis of risk and business impact, the assigned priority, and recommended operational activities (workarounds, record reviews), if any. Business sponsors participate in this process, either through attendance at the PRB meetings or through representation by RA. The outcome of the PRB review for each SPR is documented in the SPR record. SPR priorities, once confirmed by the PRB, may only be altered by a subsequent PRB review. Meeting minutes recording decisions and action items from the PRB are prepared and maintained by ITCM.

14. Ad hoc queries and database changes: Use of a system may lead to the need for development staff to produce ad hoc queries of production data, or to make changes to that data. IT uses S005, Software Utility Development, to manage these activities.

15. Urgent Releases: Use of a system may lead to the need to quickly make minor changes to a production system. IT uses S006, Implementing Urgent Releases, to manage these activities.

16. Control Point: A function or area in a manufacturing process or procedure where failure or loss of control may have an adverse effect on the quality of the finished product or donor suitability, or may result in a health risk.

17. Cosmetic: Issues that have no adverse impact on functionality, such as typographical and grammatical errors or other editorial items.


18. Critical Functionality: Functionality in the computer software where failure or loss of control may have an adverse effect on the SQuIPP of the biological product and potentially cause harm to a patient, donor, or staff member.

19. Defect Priority Number: A rating assigned to a computer system defect, based on defined criteria, that is used to determine required corrective actions, i.e., mitigation, a sustainable alternative process, software release planning, and evaluations for further action.

20. Detectability: The ability to detect the issue when it occurs in the operational environment, before any potential adverse event can occur. It is determined by analyzing the software behavior and input from subject matter experts.

21. Essential Functionality: Functionality in the computer system where failure or loss of control may have an impact on business practices that do not involve control points and would not result in any harm to a patient, donor, or staff member.

22. Likelihood of Occurrence: The frequency of the defect, i.e., how likely it is to occur in the operational environment. It is determined by analyzing the software behavior, reports from users, and input from subject matter experts.

23. Medical Device: Software applications intended for use in the manufacture of blood and blood components, or for the maintenance of data that blood establishment staff use in making decisions regarding the suitability of donors and the release of blood or blood components for transfusion or further manufacture.

24. Regulated Systems: Software applications used within ARC that are not medical devices but that are used to support regulated activities in the manufacturing or distribution process.

25. Regulatory Risk: A regulatory risk occurs when the system fails to meet a regulatory requirement, compliance-related functionality, or procedural requirements. In addition, system unavailability or the inability to collect, process, or release components constitutes a regulatory risk.

26. Severity: Severity categorizes the health risk to donor, patient, or staff, or the impact to business operations.

    27. Master Test Plan: An optional planning tool used by TE to aid in specifying the complete scope, approach, resources, and schedule of multiple testing activities required throughout all phases of the software development life cycle. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning.

28. System Test Plan: The overall strategic plan for the software test; documentation specifying the complete scope, approach, resources, and schedule of intended testing activities required for the system test phase of the software development life cycle. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning.

    29. Test Analysis Report: A document describing the conduct and results of the testing carried out for a system or system component.


30. Test Case: Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.

31. Test Procedure: A formal document developed from a test plan that presents detailed instructions for the setup, operation, and evaluation of the results for each defined test.

32. Validation of Specialty Devices: Establishing documented evidence that a specialty device meets the intended business use.

33. Data sets: Data sets intended for testing may be composed of snapshots of production data, or may be constructed of specific data entries designed to match particular test procedures.

34. Verification queries and utilities: Data verification queries or utilities may be constructed by TE to independently verify the contents of a database, so that the success of application functions on that database may be judged. Data sets and data verification queries and utilities are controlled as configuration items and form part of the system test baseline.
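As an illustration of such a verification utility, the following sketch (hypothetical table and column names, using Python's built-in sqlite3 module) independently checks a row count and a simple integrity rule after an application function has run:

    import sqlite3

    def verify_donor_table(db_path, expected_count):
        """Independently verify database contents after an application function runs.
        Table and column names are hypothetical; adjust to the schema under test."""
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.cursor()
            # Check 1: the expected number of records was loaded.
            cur.execute("SELECT COUNT(*) FROM donors")
            actual_count = cur.fetchone()[0]
            assert actual_count == expected_count, (
                f"expected {expected_count} donor rows, found {actual_count}")
            # Check 2: no record violates a basic integrity rule.
            cur.execute("SELECT COUNT(*) FROM donors WHERE donor_id IS NULL")
            null_ids = cur.fetchone()[0]
            assert null_ids == 0, f"{null_ids} donor rows have a NULL donor_id"
        finally:
            conn.close()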

35. Walk-through: A walk-through is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. For all requirements designated as critical, TE conducts a walk-through of the draft test case. Participants at this walk-through should include the Requirements Analyst group, SE, and IT quality assurance staff, as well as TE personnel not involved with the development of the test case. The TE Lead for the project leads the walk-through. Participants review the test approach described in the draft test case to ensure that the critical aspects of the requirement are thoroughly tested, that there is adequate test coverage to verify the requirement, and that the testing exercises the software in ways that are sensible given the software design and intended use. The purpose of the walk-through is to ensure that there is a valid and complete testing approach adequate to verify the requirement(s). The results of the walk-through are recorded on IT 006, Form 1, Peer Review Form, retained, and eventually filed with the system test results. At the discretion of the TE Lead, walk-throughs may be conducted for noncritical test cases as well.

36. Peer review: As the drafts are completed, TE conducts an internal review of the test cases and procedures. During this review, action items from the walk-through (if any) are verified, and the SRS and SDD are reviewed with the procedures to ensure that the test is complete and will verify that software requirements are met. The peer reviewer may be the TE Lead or other TE staff not responsible for the development of the case and procedures being reviewed.

37. Obtain quality approval: Once peer reviewed, TE coordinates with IT quality assurance for review and approval of the test cases and procedures.

38. What is Verification? Validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.


39. What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother': their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.


The V Model of Testing

As you get involved in the development of a new system, a vast number of software tests appear to be required to prove the system. While they all have the word "test" in their names, their relative importance to each other is not always clear. This section gives an outline of the various types of software testing. The main software testing types are:

    Component. Interface. System. Acceptance. Release.

    To put these types of software testing in context requires an outline of the development process.

    Development Process

The development process for a system is traditionally described as a Waterfall Model, where each step follows the next, as if in a waterfall. This shows how the various products produced at each step are used in the step following it. It does not imply that any of the steps in a process have to be completed before the next step starts, or that prior steps will not have to be revisited later in development. It is just a useful model for seeing how each step works with each of the others.


    Business Case

    The first step in development is a business investigation followed by a "Business Case" produced by the customer for a system. This outlines a new system, or change to an existing system, which it is anticipated will deliver business benefits, and outlines the costs expected when developing and running the system.

    Requirements

    The next broad step is to define a set of "Requirements", which is a statement by the customer of what the system shall achieve in order to meet the need. These involve both functional and non-functional requirements. Further details are in the requirements article.

    System Specification

    "Requirements" are then passed to developers, who produce a "System Specification". This changes the focus from what the system shall achieve to how it will achieve it by defining it in computer terms, taking into account both functional and non-functional requirements.

    System Design

    Other developers produce a "System Design" from the "System Specification". This takes the features required and maps them to various components, and defines the relationships between these components. The whole design should result in a detailed system design that will achieve what is required by the "System Specification".

    Component Design

    Each component then has a "Component Design", which describes in detail exactly how it will perform its piece of processing.

    Component Construction

    Finally each component is built, and then is ready for the test process.

    Types of Testing

The level of testing derives from the way a software system is designed and built up. Conventionally this is known as the "V" model, which maps the types of test to each stage of development.

    Component Testing

Starting from the bottom, the first test level is "Component Testing", sometimes called Unit Testing. It involves checking that each feature specified in the "Component Design" has been implemented in the component. In theory an independent tester should do this, but in practice the developer usually does it, as they are the only people who understand how a component works. The problem with a component is that it performs only a small part of the functionality of a system, and it relies on co-operating with other parts of the system, which may not have been built yet. To overcome this, the developer either builds, or uses, special software (such as stubs or test harnesses) to trick the component into believing it is working in a fully functional system.
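A minimal Python illustration of this stubbing idea (the order and tax components are hypothetical; unittest.mock stands in for the collaborator that has not been built yet):

    import unittest
    from unittest import mock

    # Hypothetical component under test: computes an order total using a tax
    # service component that may not exist yet.
    def order_total(items, tax_service):
        subtotal = sum(price for _, price in items)
        return subtotal + tax_service.tax_for(subtotal)

    class OrderTotalComponentTest(unittest.TestCase):
        def test_total_includes_tax_from_stubbed_service(self):
            # The stub "tricks" the component into believing the rest of the
            # system exists: it always reports 10.0 in tax.
            stub_tax_service = mock.Mock()
            stub_tax_service.tax_for.return_value = 10.0
            total = order_total([("widget", 60.0), ("gadget", 40.0)], stub_tax_service)
            self.assertEqual(total, 110.0)
            stub_tax_service.tax_for.assert_called_once_with(100.0)

    if __name__ == "__main__":
        unittest.main()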

    Interface Testing

    As the components are constructed and tested they are then linked together to check if they work with each other. It is a fact that two components that have passed all their tests, when connected to each other produce one new component full of faults. These tests can be done by specialists, or by the developers. Interface Testing is not focussed on what the components are doing but on how they communicate with each other, as specified in the "System Design". The "System Design" defines relationships between components, and this involves stating:

    What a component can expect from another component in terms of services. How these services will be asked for. How they will be given. How to handle non-standard conditions, i.e. errors.

    Tests are constructed to deal with each of these. The tests are organised to check all the interfaces, until all the components have been built and interfaced to each other producing the whole system.
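A small sketch of such an interface test in Python (the two components and their contract are hypothetical); it checks the service request, the response, and the agreed error handling rather than either component's internals:

    import unittest

    class InventoryService:                      # hypothetical providing component
        def reserve(self, sku, qty):
            if qty <= 0:
                raise ValueError("quantity must be positive")
            return {"sku": sku, "qty": qty, "status": "RESERVED"}

    class OrderComponent:                        # hypothetical consuming component
        def __init__(self, inventory):
            self.inventory = inventory
        def place(self, sku, qty):
            try:
                return self.inventory.reserve(sku, qty)["status"]
            except ValueError:
                return "REJECTED"                # agreed error handling at the interface

    class InterfaceTest(unittest.TestCase):
        def test_service_request_and_response(self):
            self.assertEqual(OrderComponent(InventoryService()).place("A1", 2), "RESERVED")
        def test_non_standard_condition_is_handled(self):
            self.assertEqual(OrderComponent(InventoryService()).place("A1", 0), "REJECTED")

    if __name__ == "__main__":
        unittest.main()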

    System Testing

    Once the entire system has been built then it has to be tested against the "System Specification" to check if it delivers the features required. It is still developer focussed, although specialist developers known as systems testers are normally employed to do it. In essence System Testing is not about checking the individual parts of the design, but about checking the system as a whole. In effect it is one giant component. System testing can involve a number of specialist types of test to see if all the functional and non-functional requirements have been met. In addition to functional requirements these may include the following types of testing for the non-functional requirements:

    Performance - Are the performance criteria met? Volume - Can large volumes of information be handled? Stress - Can peak volumes of information be handled? Documentation - Is the documentation usable for the system? Robustness - Does the system remain stable under adverse circumstances?

    There are many others, the needs for which are dictated by how the system is supposed to perform.
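As a flavour of how one of these non-functional checks can be automated, the sketch below (a stand-in operation and an assumed response-time criterion) measures average response time against a performance threshold:

    import time

    def check_response_time(operation, max_seconds, repetitions=100):
        # Crude performance check: fail if the average call time exceeds the criterion.
        start = time.perf_counter()
        for _ in range(repetitions):
            operation()
        average = (time.perf_counter() - start) / repetitions
        assert average <= max_seconds, (
            f"average response time {average:.4f}s exceeds criterion {max_seconds}s")
        return average

    if __name__ == "__main__":
        # Stand-in operation; in a real system test this would call the system under test.
        check_response_time(lambda: sum(range(10_000)), max_seconds=0.01)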

    Acceptance Testing

    Acceptance Testing checks the system against the "Requirements". It is similar to systems testing in that the whole system is checked but the important difference is the change in focus:

    Systems Testing checks that the system that was specified has been delivered.


    Acceptance Testing checks that the system delivers what was requested.

The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgement. To help them, courses and training are available. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs.

    Release Testing

    Even if a system meets all its requirements, there is still a case to be answered that it will benefit the business. The linking of "Business Case" to Release Testing is looser than the others, but is still important. Release Testing is about seeing if the new or changed system will work in the existing business environment. Mainly this means the technical environment, and checks concerns such as:

    Does it affect any other systems running on the hardware? Is it compatible with other systems? Does it have acceptable performance under load?

These tests are usually run by the computer operations team in a business. The answers to their questions could have a significant financial impact if new computer hardware is required, and could adversely affect the "Business Case". It would appear obvious that the operations team should be involved right from the start of a project to give their opinion of the impact a new system may have. They could then make sure the "Business Case" is relatively sound, at least from the capital expenditure and ongoing running cost aspects. However, in practice many operations teams only find out about a project just weeks before it is supposed to go live, which can result in major problems.

    Regression Tests

With modern systems, one person's system becomes somebody else's component. It follows that all the above types of testing could be repeated at many levels in order to deliver the final value to the business; in fact, they may need to be repeated every time a system is altered.


    Design Throughout The Agile SDLC.

    Figure 1 depicts the generic agile software development lifecycle. For the sake of discussion, the important thing to note is that there is no design phase, nor a requirements phase for that matter, which traditionalists are familiar with. Agile developers will do some high-level architectural modeling during Iteration 0, also known as the warm-up phase, and detailed design during development iterations and even during the end game (if needed).

    Figure 1. The Agile SDLC.

Figure 2 depicts the Agile Model Driven Development (AMDD) lifecycle, the focus of which is how modeling fits into the overall agile software development lifecycle. Early in the project you need to have at least a general idea of how you're going to build the system. Is it a mainframe COBOL application? A .Net application? J2EE? Something else? During Iteration 0 the developers on the project will get together in a room, often around a whiteboard, discuss and then sketch out a potential architecture for the system. This architecture will likely evolve over time; it will not be very detailed yet (it just needs to be good enough for now), and very little documentation (if any) needs to be written. The goal is to identify an architectural strategy, not write mounds of documentation.


    Figure 2. The AMDD lifecycle.

    When a developer has a new requirement to implement they ask themselves if they understand what is being asked for. If not, then they do some just-in-time (JIT) "model storming" to identify a strategy for implementing the requirement. This model storming is typically done at the beginning of an iteration during the detailed planning effort for that iteration, or sometime during the iteration if they realize that they need to explore the requirement further. Part of this modeling effort will be analysis of the requirement as well as design of the solution, something that will typically occur on the order of minutes. In Extreme Programming (XP) they refer to this as a "quick design session".

If the team is taking a Test-Driven Development (TDD) approach, the detailed design is effectively specified as developer tests, not as detailed models. Because you write a test before you write enough production code to fulfill that test, you in effect think through the design of that production code as you write the test. Instead of creating static design documentation, which is bound to become out of date, you instead write an executable specification which developers are motivated to keep up to date because it actually provides value to them. This strategy is an example of the AM practice of single sourcing information, where information is captured once and used for multiple purposes, in this case for both detailed specification and for confirmatory testing.

    When you stop and think about it, particularly in respect to Figure 2, TDD is a bit of a misnomer. Although your developer tests are "driving" the design of your code, your agile models are driving your overall thinking.
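A tiny illustration of the test-first rhythm described above, using Python's unittest (the leap-year rule is just a stand-in requirement): the test is written first as the executable specification, then just enough production code is written to make it pass.

    import unittest

    # Step 1: the test is written first and acts as the executable specification.
    class LeapYearSpec(unittest.TestCase):
        def test_century_years_must_be_divisible_by_400(self):
            self.assertTrue(is_leap_year(2000))
            self.assertFalse(is_leap_year(1900))
        def test_ordinary_years_divisible_by_4_are_leap(self):
            self.assertTrue(is_leap_year(2024))
            self.assertFalse(is_leap_year(2023))

    # Step 2: just enough production code is then written to make the tests pass.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        unittest.main()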


Software Test Methodology

1. What is Functionality Testing? Answer: Functionality testing determines that all modules and their properties are working properly. To perform this test we have to verify that each and every module, every field, and every button behaves as desired by the specifications.

2. What is Regression Testing? Answer: Whenever a bug is reported to the developer, that bug is fixed. After the fix, the software is sent back to the testing region to retest the whole software. This is done to ensure that the bug fix has not caused any new problems in any other part of the software. Automated testing tools can be especially useful for this type of testing.
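A sketch of how such retesting can be automated (the function and the fixed bug are hypothetical): the existing tests are kept, a test for the reported defect is added, and the whole suite is rerun after every fix.

    import unittest

    # Hypothetical production code containing the function whose bug was fixed.
    def apply_discount(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (100 - percent) / 100, 2)

    class RegressionSuite(unittest.TestCase):
        # Existing tests are rerun after every fix to catch new breakage.
        def test_normal_discount_still_works(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)
        def test_zero_discount_still_works(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)
        # A test added for the reported bug, kept in the suite permanently.
        def test_fixed_bug_rejects_negative_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, -5)

    if __name__ == "__main__":
        unittest.main()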

3. What is Integration Testing? Answer: To perform integration testing we have to determine that every single piece of an application, such as back-end data, front end, operating system, hardware, software, networking connectivity, and all subsystems, is interacting with the others as per requirements.

4. What is Performance Testing? Answer: Performance testing verifies the system response time or transactions per second during normal, peak, and extreme system loading. The performance of a system depends on how well it scales up and how fast it responds to large volumes of transactions. Performance testing verifies that the system will operate in production and will provide a consistent and predictable level of service. This includes measuring resource utilization, response times, failover conditions, and other operational considerations. Types of performance tests include performance benchmark tests and rehost tests.

5. What is Load Testing? Answer: Load testing is testing an application under heavy loads, such as a web site under a range of loads, to determine at what point the system response time degrades or fails.

6. What is Volume Testing? Answer: Volume testing determines how the system handles large amounts of data, for example how long it takes to load a large amount of data into the database.

7. What is Stress Testing? Answer: Stress testing verifies software functionality in an environment where the system resources are saturated. It is used to determine that a function, or group of functions, behaves properly when performed repeatedly over an extended period of time.

8. What is Transaction Testing? Answer: To perform this testing we have to make sure that the data entered through the front end has been transacted to the database properly. To do this we have to run SQL queries to check the different tables in the database.

9. What is System Testing? Answer: We perform this testing after all the modules have been developed and connected to each other. At this point we have to test the whole functionality from A to Z; that means we have to go across the software to make sure all the systems and subsystems of the application are working properly.
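A minimal sketch of the transaction testing described in item 8, using Python's built-in sqlite3 module and a hypothetical orders table: the front-end operation is simulated and an SQL query then confirms the data reached the database.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

    def submit_order(customer, amount):
        # Stands in for the front-end operation that should persist the data.
        conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", (customer, amount))
        conn.commit()

    def test_order_reaches_database():
        submit_order("ACME", 125.50)
        row = conn.execute(
            "SELECT customer, amount FROM orders WHERE customer = ?", ("ACME",)).fetchone()
        assert row == ("ACME", 125.5), f"order not persisted correctly: {row}"

    test_order_reaches_database()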


10. What is Black-Box Testing? Answer: Testing the software without knowing the program design or source code is called Black-Box Testing. Testers basically ignore the internal design to perform this test.

11. What is White-Box Testing? Answer: Testing the software with detailed knowledge of the program design and source code is called White-Box Testing. Testers have to have strong knowledge of the internal design to perform this test. Usually the developer does this test.

12. What is Unit Testing? Answer: Unit testing verifies the smallest piece of a program to determine whether the actual structure is correct and the code operates correctly. Performing this testing requires detailed knowledge of the internal program design and code. Generally unit testing is done by development engineers.

13. What is Security Testing? Answer: Security testing determines how well the system protects against unauthorized internal or external access. To do this testing we have to check that user IDs and passwords work as required to log in to the system.

14. What is Positive Testing? Answer: Any test done with valid data is called a positive test.

15. What is Negative Testing? Answer: Any test performed with invalid data is called a negative test. (A brief sketch covering positive and negative data appears after item 19 below.)

16. What is Alpha Testing? Answer: Alpha testing is performed when application development is close to complete but minor design changes may still be made as a result of the testing. It is usually done by end-users or others, not by developers or testers.

17. What is Beta Testing? Answer: Beta testing is performed when development and testing are essentially complete and final bugs or problems need to be found before the final release. It is typically done by end-users or others, not by programmers or testers. The purpose of Beta Testing is to provide initial user-level testing in a user environment during the system testing phase, and prior to the Acceptance Readiness Review (ARR). Beta testing should confirm the usability of programs and scripts and the consistency/compatibility among the various system modules based upon intended use. Beta testing requires end-user participation.

18. What is UAT? Answer: User acceptance testing is the final testing phase, based on the specifications of the end-user or customer, before the software goes to production.

19. What is Recovery Testing? Answer: Recovery testing determines how well the system recovers from crashes, hardware failures, or other major problems.
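A brief positive/negative sketch, as referenced under item 15 (the user ID rule is hypothetical): the same validation is exercised first with valid data and then with invalid data.

    import re

    def is_valid_user_id(user_id):
        # Hypothetical rule: 4-12 characters, letters, digits, or underscore.
        return bool(re.fullmatch(r"[A-Za-z0-9_]{4,12}", user_id or ""))

    def test_positive_valid_ids_are_accepted():
        for good in ("alice01", "test_user", "QA_2024"):
            assert is_valid_user_id(good), f"expected {good!r} to be accepted"

    def test_negative_invalid_ids_are_rejected():
        for bad in ("", "ab", "name with spaces", "x" * 20, None):
            assert not is_valid_user_id(bad), f"expected {bad!r} to be rejected"

    test_positive_valid_ids_are_accepted()
    test_negative_invalid_ids_are_rejected()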


20. What is Usability Testing? Answer: Usability testing determines how friendly or simple the software is for the end user or customer to use. User interviews, surveys, video recording of user sessions, and other techniques can be used to perform this testing. Programmers and testers are not usually appropriate for usability testing.

21. What is Compatibility Testing? Answer: Compatibility testing verifies that the software application to be installed and executed in the production environment does not prevent or preclude the execution of other computer programs that co-exist within the installed software environment. These may include but are not limited to:

Service patches for AIX operating systems
Microsoft security patches
Client software
Application server changes
Relational database
Application updates
Other non-application changes

Compatibility testing should always be performed in a production-like test environment. After installation, co-existing computer programs are executed to determine their ability to correctly perform start-up and exit operations and to correctly connect to and disconnect from the database product. Compatibility testing will not test all functionality of the software but will check critical requirements to ensure that the software has not been impacted by the environment change.

22. What is User Interface Testing? Answer: User interface testing checks a user's interaction with the software. The goal of UI testing is to ensure that the objects within the UI function as expected and conform to industry standards. Interface testing verifies that the system produces the output or accepts the input specified in the interface requirements specification or design document. Interface testing may employ files or input that are simulated to come from a generating system, per the interface specification.

23. What is End-to-End Testing?

Answer: End-to-end testing involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems. End-to-end testing exercises both systems involved in the interface: data is generated by the output system, received by the input system, and the transaction is analyzed for correct behavior.

24. What is Smoke Testing? Answer: When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done for testing the stability of any interim build. Smoke testing can be executed for platform qualification tests.

25. What is Sanity Testing?


Answer: Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases is executed that is related to the changes made to the application. Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.

26. What is Rehost Testing? Answer: At times, IT projects involve rehosting of a procured or previously developed computer system from one computer environment to another. A typical requirement of rehosting is that no functionality will be changed in such a rehosting effort. To test this requirement, TE will selectively test a subset of the functionality to demonstrate that the system functions the same in the new environment as in the old. SPRs will be generated if discrepancies are found. If a significant number or type of discrepancy is found during rehost testing, the more comprehensive regression test approach may be indicated. A TAR describing the rehost testing will be produced to document what functionality was tested, the results of the tests, and the decision whether additional testing is required.
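One common way to carve out the smoke and sanity subsets described in items 24 and 25 is to tag tests with markers. The pytest sketch below is hypothetical (the application functions are trivial stand-ins):

    import pytest

    def load_application():
        return {"status": "up"}                  # stand-in for launching the new build

    def login(user, password):
        return password == "correct-password"    # stand-in for the fixed login feature

    @pytest.mark.smoke
    def test_build_is_stable_enough_for_further_testing():
        assert load_application()["status"] == "up"

    @pytest.mark.sanity
    def test_defect_fix_for_login_still_holds():
        assert login("qa_user", "correct-password") is True

    def test_everything_else():                  # exercised only in the full regression cycle
        assert load_application() and login("qa_user", "correct-password")

Running pytest -m smoke executes only the smoke subset, and pytest -m sanity only the change-related subset; the markers should be registered in pytest.ini to avoid warnings.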


1. What is a test plan?

Answer: A test plan is the overall strategic or detailed document that describes the entire test process. A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it.

2. What are the test plan overview documents? Answer: The test plan overview documents are: user requirements specifications, functional specifications, design specifications, and the application under test.

3. What does a standard test plan contain? Answer: The following items are included in a standard test plan:

Introduction
Project description
Purpose of document
Functional overview
Dates and milestones
Scope and objectives
Resources
Hardware requirements
Software requirements
Staff requirements
Risk and responsibility analysis
Test progress matrix
Types of testing to be performed
Test case document
Validation process
System release
Bug tracking and reporting
Entrance criteria and exit criteria
Positive and negative testing criteria
Remarks

4. What is a test case? Answer: A test case is an individual test that corresponds to a test purpose: a step-by-step description for testing a specific scenario, which in turn maps back to the assertion(s) and, finally, the spec.

    5. What do you need to write test cases? Answer: To write the test cases we need following documents:

    User requirement

    Business rules

    Functional specification

    Test plan

    Application under test


    6. Describe the format of your Test Cases.

    Answer:

    TEST CASES

    Description:

    Prerequisite:

    Req. # Test Case # Condition Expected Result
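Carried into code, the same row structure can drive a table-driven test. The sketch below is hypothetical (a stand-in function and made-up requirement IDs); it runs one check per row of the test case table:

    import unittest

    def add(a, b):                  # stand-in for the feature under test
        return a + b

    # (Req. #, Test Case #, Condition, Expected Result)
    TEST_ROWS = [
        ("REQ-001", "TC-001", (2, 3), 5),
        ("REQ-001", "TC-002", (-1, 1), 0),
        ("REQ-002", "TC-003", (0, 0), 0),
    ]

    class TableDrivenCases(unittest.TestCase):
        def test_rows(self):
            for req_id, case_id, condition, expected in TEST_ROWS:
                with self.subTest(req=req_id, case=case_id):
                    self.assertEqual(add(*condition), expected)

    if __name__ == "__main__":
        unittest.main()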

Term Definitions

Configurations: Options/switches for a product provided by the software manufacturer. These generally appear as menu picks, table population, or similar constructs.

Customizations: Extensions to a COTS product performed with facilities provided by the software manufacturer, such as scripting languages, screen building tools, report writers, or database schema modification tools.

Deliverable: A required output of the project life cycle, usually specified in a Project Management Plan (PMP), SOP, or other management source, and requiring version/change control.

Product Life Cycle (PLC): The complete history of a product through its concept, definition, production, operation (maintenance), and retirement phases.

Project Management Life Cycle (PMLC): The four sequential major time periods through which any project passes, namely: Concept, Definition, Execution, and Closing.

Software Development Life Cycle (SDLC): A subset of the PLC including the definition (requirements and design) and production (configuration, development, system testing) phases.

Work Product (WP): An output of the project life cycle that is used directly or indirectly by the project to document an activity or provide a foundation from which other tasks or activities are completed. It is intended to be provided as an artifact for the activity, available for review, and not a document that is anticipated to be changed and revised or that is under formal change control.

Validation Strategies

The purpose of this section is to describe the validation strategies that will be completed by the user organization prior to releasing the project application upgrade to the user community for production use. These deliverables will constitute the User Acceptance Plan demonstrating that all requirements have been met.

Full Site Validation (FSV)

The Sponsor will develop an acceptance plan, which will be the project's IT S007 deliverable, the Full Site Validation Plan (FSVP). It will be executed by the user organization as the user's acceptance plan demonstrating that all requirements have been met. Execution of the FSVP will occur after a successful ARR. A successful result of the FSVP will be an input to the PRR, which then authorizes readiness to install the CMMS 3.2.0 application in the production environment.

Local Site Validation (LSV) (also known as the Sponsor's Production Verification)

The sponsor will develop a qualification plan and test protocols (a Production Verification Plan) which will function as the project's IT S007 Local Site Validation Plan (LSVP). It will qualify the production environment as being ready for use as intended. Execution of the LSVP will occur after a successful PRR. A successful result of the LSVP will be an input to the LRR for release of the system to production users. There is only one application site, and therefore this LSVP will be executed only once, after installation of the release to the production environment has occurred and the PRR has successfully been approved.

How can it be known when to stop testing?

    This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends

    What if there isn't enough time for thorough testing? Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

    Which functionality is most important to the project's intended purpose? Which functionality is most visible to the user? Which functionality has the largest safety impact? Which functionality has the largest financial impact on users?


    Which aspects of the application are most important to the customer? Which aspects of the application can be tested early in the development cycle? Which parts of the code are most complex, and thus most subject to errors? Which parts of the application were developed in rush or panic mode? Which aspects of similar/related previous projects caused problems? Which aspects of similar/related previous projects had large maintenance expenses? Which parts of the requirements and design are unclear or poorly thought out? What do the developers think are the highest-risk aspects of the application? What kinds of problems would cause the worst publicity? What kinds of problems would cause the most customer service complaints? What kinds of tests could easily cover multiple functionalities? Which tests will have the best high-risk-coverage to time-required ratio?

    How does a client/server environment affect testing? Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.

    How can World Wide Web sites be tested? Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:

What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)?

What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)? A minimal load-test sketch appears after this list.

Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?

Will down time for server and content maintenance/upgrades be allowed? How much?

What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?

How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?

What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?

Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

How will internal and external links be validated and updated? How often?

Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?

How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?

How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
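As mentioned in the performance-tools consideration above, a very small load-test sketch can be built with only Python's standard library; the URL and the user/request counts below are placeholders to adjust for the site under test.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"       # placeholder target
    CONCURRENT_USERS = 20
    REQUESTS_PER_USER = 5

    def one_request(_):
        # Time a single page fetch, including reading the response body.
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    def run_load_test():
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            timings = sorted(pool.map(one_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))
        print(f"requests: {len(timings)}  "
              f"median: {timings[len(timings) // 2]:.3f}s  "
              f"95th percentile: {timings[int(len(timings) * 0.95) - 1]:.3f}s")

    if __name__ == "__main__":
        run_load_test()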

Some usability guidelines to consider (these are subjective and may or may not apply to a given situation):

    Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.

    The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.

    Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.

All pages should have links external to the page; there should be no dead-end pages. The page owner, revision date, and a link to a contact person or organization should be included on each page.

How is testing affected by object-oriented designs? Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well-designed, this can simplify test design.


Roles and Responsibilities

Project Manager:

Oversee the development of the PMP and the WBS

Lead the development of the Project Management Strategy for the project

Submit the PMP and WBS to the PAC for review and feedback

Manage the project team to ensure project activities and tasks are implemented as defined in the PMP and WBS (The Project Manager is responsible for documenting the defining elements in the PMP and is responsible for documenting rationale for deviating from the activities identified in existing standard procedures in the PMP.)

Coordinate and lead appropriate end-of-phase reviews as defined in the PMP

Monitor and report actual progress against the project plan to management

Manage risk and develop contingency plans

Serve as the primary point of contact for the Business Sponsor and software vendors, if applicable

Business Sponsor:

Work with the project team to develop complete requirements

Attend appropriate end-of-phase reviews to ensure that user requirements are properly understood and incorporated into the product specifications and designs

Represent the end-users to the project

Create the validation plan, test cases and procedures, execution results, and recommendation for release

Requirements Analysis:

Document the Requirements Management Strategy

Document the project requirements with the business sponsor

Work with developers to ensure requirements are understood

Work with test engineering to ensure test cases represent the business sponsor's needs

Facilitate communication between the project team and business sponsor regarding requirements

Software Engineering:

Document the software and system design

Develop new report software

Execute unit and integration testing

Develop the software or system installation or delivery approach

Test Engineering:

Document the system testing or other validation approach

Prepare system test plans, test cases and procedures, and traceability matrices as required

Execute system tests and document the results in a TAR (Test Analysis Report)

IT Quality Assurance:

Assess the application software and system development process

Evaluate work products for compliance to procedure and plan

Review the Functional Configuration Audit

Participate in formal reviews and change control functions

ITCM:

Control configuration changes systematically

Maintain configuration integrity and traceability

Maintain project deliverables/work products, DHF, DHR, and DMR as appropriate


Acronyms commonly used throughout the IT department:

    ANSI American National Standards Institute

    ARR Acceptance Readiness Review

    BTCP Beta Test Cases & Procedures

    BTP Beta Test Plan

    CCB Change Control Board

    CDR Critical Design Review

CFR Code of Federal Regulations

    CM Configuration Management

    CMM Capability Maturity Model

    CMTI Configuration Management Tool Integration

    COTS Commercial-Off-the-Shelf

    CR Change Request

    CRR Cutover Readiness Review

    CSC Computer Software Component

    CSU Computer Software Unit

    DBMS Database Management System

    DDS Document Distribution System

    DP Development Plan

    EA Engineering Assessment

    EAB Enterprise Architecture Board

    EAPER Enterprise Architecture Engagement Request

    EARC Enterprise Architecture Review Council

    ER Enhancement Request

    FDA Food and Drug Administration

FSV Full Site Validation

    FSVP Full Site Validation Plan

    GMP Good Manufacturing Practices

    HA Hazard Analysis

    IEEE Institute of Electrical and Electronics Engineers

    IPT Integrated Product Team

    IT Information Technology

    ITCM IT Configuration Management

    ITCP Integration Test Cases & Procedures

    IT QA IT Quality Assurance

    IT RC IT Regulatory Compliance

    IUS Intended Use Specification

LOP Local Operating Procedure

    LRR Live Readiness Review

    LSV Local Site Validation

    LSVP Local Site Validation Plan

    MSIT Multiple Systems Integration Test

    MTP Master Test Plan

    PAC Project Advisory Council

    PAR Product Acquisition Review

PCA Physical Configuration Audit

    PDR Preliminary Design Review

    PLC Product Life Cycle

PM Project Manager


    PMLC Project Management Life Cycle

    PMP Project Management Plan

    PRB Product Review Board

    PRR Production Readiness Review

    RA Requirements Analysis

    RFP Request For Proposal

    RTM Requirements Traceability Matrix

    SDLC Software Development Life Cycle

    SDD Software Design Document

    SDR Solution Delivery Review

    SE Software Engineering

    SEI Software Engineering Institute

    SOP Standard Operating Procedure

    SPR System Problem Report

    SRR Software Requirements Review

    SRS Software Requirements Specification

    STCP System Test Cases & Procedures

    STP System Test Plan

    TAR Test Analysis Report

    TE Test Engineering

    TRR Test Readiness Review

    UM User Manual

    URR User Requirements Review

    URTM User Requirements Traceability Matrix

    URS User Requirements Specification

    US User Support

    UTCP Unit Test Cases & Procedures

    WP Work Product