Software Validation Book


    The Validation Specialists


    An Easy to Understand Guide | Software Validation

Published by Premier Validation


    Software Validation

    First Edition

    Copyright 2010 Premier Validation

All rights reserved. No part of the content or the design of this book may be reproduced or transmitted in any form or by any means without the express written permission of Premier Validation.

The advice and guidelines in this book are based on the experience of the authors after more than a decade in the Life Science industry, and as such are either a direct reflection of the "predicate rules" (the legislation governing the industry) or best practices used within the industry. The authors take no responsibility for how this advice is implemented.

    Visit Premier Validation on the web at www.premiervalidation.com or

    visit our forum at www.askaboutvalidation.com

    ISBN 978-1-908084-02-6


    So what's this book all about?

    Hey there,

This book has drawn on years of the authors' experience in the field of Software Validation within regulated environments, specifically Biotech and Pharmaceutical. We have written and published this book as an aid both to anyone working in the software validation field and to anyone interested in software testing.

Every software validation effort seems to be a combination of checking procedures, updating procedures, and reviewing good practice guides and industry trends in order to make the validation effort more robust and as easy as possible to put in place.

This can often be a tiresome and long-winded process: the technical wording of each piece of legislation, and of standards such as ASTM, ISO, etc., just seems to make us validation folks feel more like lawyers half of the time.

The purpose of this book is to try and pull all of that together - to make a Software Validation book that is written in easy-to-understand language, to give help and guidance regarding the approach taken to validate software, and to lay out an easy launchpad that allows readers to search for more detailed information as and when it is required.


We hope that the information in this book will be as enjoyable for you to use as it was for us to put together, and that your next software validation project will be all the more welcome.

So I think it's pretty clear: you've just purchased the Software Validation bible.

    Enjoy!


    The brains behind the operation!

    Program Director: Graham O'Keeffe

    Content Author: Orlando Lopez

    Technical Editor: Mark Richardson

    Editor: Anne-Marie Smith

    Printing History: First Edition: February 2011

    Cover and Graphic Design: Louis Je Tonno

Notice of Rights

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the copyright holder, except in the case of brief quotations embedded in critical articles or reviews.

Notice of Liability

The author and publisher have made every effort to ensure the accuracy of the information herein. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors nor Premier Validation Ltd, nor its dealers or distributors, will be held liable for any damages caused either directly or indirectly by the instructions contained in this book.

    Published by Premier Validation Ltd

    Web: www.premiervalidation.com

    Forum: www.askaboutvalidation.com

    Email: [email protected]

    ISBN 978-1-908084-02-6

Printed and bound in the United Kingdom

    The Validation Specialists


    Table of Contents

Purpose of this Document

What is Software Validation?

Why Validate?

Validation is a Journey, Not a Destination

Planning for Validation

1: Determine What Needs to be Validated

2: Establish a Framework

3: Create a Validation Plan for Each System

Software Development Life Cycle

Validation Protocols

Validation Protocol

Design Qualification (DQ)

Installation Qualification (IQ)

Operational Qualification (OQ)

Performance Qualification (PQ)

Other Test Considerations

Validation Execution

Preparing for a Test

Executing and Recording Results

Reporting

Managing the Results

Maintaining the Validated State

Assessing Change

Re-testing

Executing the Re-test

Reporting

Special Considerations

Commercial

Open Source Systems

Excel Spreadsheets

Retrospective Validation

Summary

Frequently Asked Questions

Appendix A: Handling Deviations

Appendix B: Handling Variances

Appendix C: Test Development Considerations

Appendix D: Capturing Tester Inputs and Results

References

Glossary

Quiz


    Purpose of this Document

This document addresses software validation for support systems, that is, systems used to develop, deliver, measure, maintain, or assess products: for example, Document Management Systems, Manufacturing Execution Systems (MES), CAPA applications, and manufacturing and control systems. The main purpose of this document is to help you establish a solid validation process.

The validation procedures in this document address software that is vendor-supplied, Commercial Off-The-Shelf (COTS), internally-developed, or a hybrid (customized COTS software).

Throughout this document, best practices, which are not required but have been proven invaluable in validating software, will be noted by a hand symbol as shown here.


What is Software Validation?

Software validation is a comprehensive and methodical approach that

    ensures that a software program does what it's intended to do and works in

    your environment as intended. Some software vendors verify that the

    requirements of the software are fulfilled but do not validate the entire

    system (network, hardware, software, processing, and so on).

Verification is a systematic approach to confirm that computerised systems (including software), acting singly or in combination, are fit for intended use, have been properly installed, and are operating correctly. It is an umbrella term that encompasses all types of approaches to assuring systems are fit for use, such as qualification, commissioning and qualification, verification, system validation, and others.


Why Validate?

Regulations often drive the need to validate software, including those

    required by:

    - the United States Food and Drug Administration (FDA);

    - the Sarbanes-Oxley (SOX) Act;

- the European Medicines Agency (EMEA) (see EudraLex).

All countries and regions around the world have their own set of rules and regulations detailing validation requirements: EudraLex is the collection of rules and regulations governing medicinal products in the European Union; the FDA regulations are the US equivalent; and in Japan it is the Japanese Ministry of Health & Welfare.

Additionally, standards such as ISO 9001 and ISO 13485 (for medical devices) also require companies operating under them to validate software.

    But over and above regulations, the most important reason for

    software validation is to ensure the system will meet the purpose for which

    you have purchased or developed it, especially if the software is mission

    critical to your organization and you will rely on it to perform vital functions.


A robust software validation effort also:

    - Utilises established incident management, change management

    and release management procedures both operationally and to

    address any errors or system issues.

    - Demonstrates that you have objective evidence to show that the

    software meets its requirements;

    - Verifies the software is operating in the appropriate secure

    environment;

    - Shows that any changes are being managed with change control

    (including the managed roll-out of upgrades) and with roll-back

    plans, where appropriate;

- Verifies that the data used or produced by the software is being backed up appropriately and can be restored;

    - Ensures users are trained on the system and are using it within its

    intended purpose in conjunction with approved operating

    procedures (for commercially-procured software, this means in

    accordance with the manufacturers' scope of operations);

    - Ensures that a business continuity plan is in place if a serious

    malfunction to the software or environment occurs.


Validation is a Journey, Not a Destination

Being in a validated state, by the way, does not mean that the software

    is bug-free or that once it's validated, you're done. Systems are not static.

    Software patches must be applied to fix issues, new disk space may

    need to be added as necessary, and additions and changes in users occur.

    Being in a validated state is a journey, not a destination. It's an iterative

    process to ensure the system is doing what it needs to do throughout its

    lifetime.

    Note: Any changes to the validated system must be performed in a

    controlled fashion utilising change control procedures and performing

    documentation updates as necessary. The documentation must be a

    reflection of the actual system.


Planning for Validation

As with most efforts, planning is a vital component for success. It's the same for validation. Before beginning the validation process, you must:

    1. Determine what needs to be validated;

    2. Establish a framework;

    3. Create a validation plan.


1: Determine What Needs to be Validated

The first step is to create an inventory of software systems and identify

    which are candidates for validation. A good rule of thumb is to validate all

    software:

    That is required to be validated based on regulatory requirements;

    Where there is a risk and where it can impact quality (directly or

    indirectly).

    This could include spreadsheets, desktop applications, manufacturing

    systems software, and enterprise-level applications. If your organization is

    small, this should be a relatively easy task. If you have a large organization,

    consider breaking up the tasks by functional area and delegating them to

    responsible individuals in those areas.

Risk Management

Validation efforts should be commensurate with the risks. If a human

    life depends on the software always functioning correctly, you'll want to take

    a more detailed approach to validation than for software that assesses color

    shades of a plastic container (assuming the color shade is only a cosmetic

    concern).


    If quality may be affected or if the decision is made that the system

    needs to be validated anyway, a risk assessment should be used to

    determine the level of effort required to validate the system. There are

    various ways to assess risk associated with a system. Three of the more

    common methods are:

Failure Modes and Effects Analysis (FMEA): an approach that considers how the system could fail, with analysis of the ultimate effects;

Hazard Analysis: a systematic approach that considers how systems could contribute to risks;

Fault Tree Analysis (FTA): a top-down approach that starts from a specific fault (failure) and works out the combinations of events that could realize it.

    For each system assessed, document the risk assessment findings, the

    individuals involved in the process, and the conclusions from the process.
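To make the scoring mechanics concrete, here is a minimal sketch of FMEA-style risk scoring in Python. The failure modes, 1-10 scales, and threshold are invented for illustration; your own procedures define the real scales and acceptance criteria:

    # FMEA-style scoring: Risk Priority Number (RPN) = severity x occurrence x detection.
    # Scales and threshold below are illustrative, not from this book.
    failure_modes = [
        # (description, severity 1-10, occurrence 1-10, detection 1-10)
        ("Audit trail entry lost on power failure", 9, 3, 4),
        ("Report header shows the wrong site name", 3, 2, 2),
    ]

    RPN_THRESHOLD = 100  # illustrative action limit

    for description, severity, occurrence, detection in failure_modes:
        rpn = severity * occurrence * detection
        action = "mitigate and re-verify" if rpn >= RPN_THRESHOLD else "accept with documented rationale"
        print(f"{description}: RPN = {rpn} -> {action}")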

    Note: ASTM E2500 Standard Guide for Specification, Design, and Verification of

    Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment

is a very useful tool for developing a risk-based approach to validation and

    achieving QbD (Quality by Design). Similarly, ISO 14971 is a good reference for

    general risk management.


    Formal risk assessments provide a documented repository for the

    justification of approaches taken in determining the level of validation

    required and can be used in the future for reference purposes.

    During the risk assessment, items that may lead to problems with the

    system, validation effort or both should be addressed; this is called risk

    mitigation and involves the systematic reduction in the extent of exposure to

    a risk and/or the likelihood of its occurrence (sometimes referred to as risk

    reduction).

A system owner should be appointed with overall responsibility for each system; this person will be knowledgeable about the system (or, for new systems, the business/system requirements). The person who purchased the product, or the person responsible for development of the system, will usually become the system owner, and will be a key representative at any risk assessment.

    Document all decisions in a formal risk assessment document and make

    sure it is approved (signed-off) by all stakeholders including the System

    Owner and QA.


2: Establish a Framework

    When building a house, the first thing you must do is create a blueprint.

    A Validation Master Plan (VMP) is a blueprint for the validation of your

    software. The VMP provides the framework for how validation is performed

    and documented, how issues are managed, how to assess validation impact

    for changes, and how to maintain systems in a validated state.

    The VMP covers topics such as:

    The scope of the validation;

    The approach for determining what is validated (unless covered in a

    separate corporate plan, for example, a quality plan);

    Elements to be validated (and how to maintain the list);

    Inter-relationships between systems;

    Validation responsibilities;

    Periodic reviews;

    Re-validation approaches;

    The rationale for not validating certain aspects of a system (so you

    don't have to revisit your decision during audits);


The approach to validating applications that have been in production but have not been validated (retrospective vs prospective);

The validation approach for a system that is already commissioned and live, but not formally validated;

General timelines, milestones, deliverables, and the roles and responsibilities of resources assigned to the validation project.

    The majority of the VMP is static. To avoid re-releasing the entire

    document, maintain specific elements to be validated and the top-level

    schedule in separate, controlled documents.

    The VMP should include a section on training and the minimum level of

    training/qualification. This can either be a statement referring to a training

    plan or a high-level statement.

    The VMP should also include, in general terms, the resources (including

    minimum qualifications) necessary to support the validation. Again, you

don't need to take it to the level of "Mary will validate the user interface application". It should be more like "Two validation engineers who have completed at least three software validation projects and XYZ training".

    Resources required may include outside help (contractors) and any special

    equipment needed.


    Error handling

    Finally, the VMP should describe how errors (both in the protocols and

    those revealed in the software by the protocols) are handled. This should

    include review boards, change control, and so on, as appropriate for the

    company. For a summary of a typical deviation, see Appendix A. For a

    summary of protocol error types and typical validation error-handling, see

    Appendix B.

Well-explained, rational decisions in the VMP (say what you do and do what you say) can avoid regulatory difficulties.

    Note: For regulated industries, a VMP may also need to include non-software

    elements, such as manufacturing processes. This is outside the scope of this

    document.


3: Create a Validation Plan for Each System

You must create a validation plan for each software system you need to

    validate. Like the VMP, the validation plan specifies resources required and

    timelines for validation activities, but in far greater detail. The plan lists all

    activities involved in validation and includes a detailed schedule.

    The validation plan's scope is one of the most difficult things to

    determine due to multiple considerations including servers, clients, and

    stand-alone applications. Is it required that the software be validated on

    every system on which the software runs? It depends on the risk the

    software poses to your organization or to people.

    Generally, all servers must be validated. If there are a limited number of

    systems on which a stand-alone application runs, it may be better to qualify

    each one (and justify your approach in the plan). Again, it depends on the

    risk. For example, if human lives might be at stake if the system were to fail,

    it's probably necessary to validate each one. You must assess the risks and

    document them in the appropriate risk assessment.



    The results of your risk analysis become activities in the validation plan

    or requirements in your system specifications (e.g. URS). For example, if the

    risk analysis indicates that specific risk mitigations be verified, these

mitigations become new requirements which will be verified. For example, if a web application is designed to run with fifty concurrent users, but during stress testing it is identified that the system becomes unstable after 40 concurrent logins, then a requirement of the system must be that it has adequate resources to accommodate all fifty users.

    The validation plan must also identify required equipment and whether

    calibration of that equipment is necessary. Generally, for software

    applications, calibrated equipment is not necessary (but not out of the

    question).

    The validation plan should specify whether specialized training or

    testers with specialized skills are needed. For example, if the source code is

    written in Java, someone with Java experience would be needed.

Finally, if the software is to be installed at multiple sites or on multiple machines, the validation plan should address the approach to take. For example, if the computers are exactly the same (operating system, libraries, and so on), then a defensible, justifiable solution for validation across the installation base could be to perform installation verification (or possibly even a subset of it) at each location where the software is installed. To further mitigate risk, OQ or parts of OQ could also be run; whatever the strategy, it should be


    documented in the VMP, Risk Assessment or both (without too much

    duplication of data).

    For customised or highly-configurable software, your plan may need to

    include an audit or assessment of the vendor. This would be necessary to

    show that the vendor has appropriate quality systems in place, manages and

    tracks problems, and has a controlled software development life cycle.

    If safety risks exist, you will be in a better defensible position if you

    isolate tests for those risks and perform them on each installation.

    The VP/VMP are live documents that are part of the System

    Development Lifecycle (SDLC) and can be updated as required. Typical

    updates can include cross reference changes and any other detail that might

    have changed due to existing business, process or predicate quality system

    changes.


Software Development Life Cycle

Both the development of software and the validation of that software

    should be performed in accordance with a proven System Development

    Lifecycle (SDLC) methodology.

A properly implemented SDLC allows the system and validation documentation to be produced in a way that gives the design, implementation, and validation teams a deep understanding of the system, whilst laying the foundations for maintenance and for managing changes and configurations.

    There are many SDLC models that are acceptable and there are benefits

    and drawbacks with each. The important thing is that:

    There is objective evidence that a process was followed;

    There are defined outputs that help ensure the effort is controlled.

    It's not always the case that a software product is developed under such

    controlled conditions. Sometimes, a software application evolves from a

    home-grown tool which, at the time, wasn't expected to become a system

    that could impact the quality of production.


    Similarly, a product may be purchased from a vendor that may or may

    not have been developed using a proven process. In these cases, a set of

    requirements must exist, at a minimum, in order to know what to verify, and

    the support systems must be established to ensure the system can be

    maintained.

Let's look at a typical V development model:

[Figure: the V-model. Requirements analysis, high-level design, and detailed design descend the left arm to implementation at the base; unit testing, integration testing, and system testing ascend the right arm, each verifying the output of the corresponding design stage.]

In this model, the defined outputs of one stage are inputs to a subsequent stage. The outputs from each stage are verified before moving to the next stage. Let's look at the requirements and support stages a bit more closely.


    The Requirements stage

Requirement specifications are critical components of any validation process because you can verify only what you specify, and your verification can be only as good as your specifications. If your specifications are vague, your tests will be vague, and you run the risk of having a system that doesn't do what you really want. Without requirements, no amount of testing will get you to a validated state.

    How to specify requirements

Because systems are ever evolving (whether to fix problems, add a new capability, or support infrastructure changes), it's important to keep the requirements specifications (User/Functional and Design Specs) up to date. However, if requirements are continually changing, you run the risk of delays, synchronization issues, and cost overruns. To avoid this,

    baseline your requirements specifications and updates so they are

    controlled and released in a managed manner. Well-structured specification

    documents facilitate upkeep and allow for the verification protocols to be

    structured so that ongoing validation activities can be isolated to only the

    changed areas and regression testing.

    Requirements must be specified in a manner in which they can be

    measured and verified.


Use "shall" to specify requirements. Doing so makes verifiable requirements easy to find. Each requirement should contain only one "shall". If there is more than one, consider breaking up the requirement.
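One practical payoff of this convention is that requirements become machine-checkable. Below is a minimal sketch in Python (the requirements.txt file name and the one-requirement-per-line layout are assumptions for illustration):

    import re

    # Assumes one requirement per line, e.g. "REQ-012 The system shall log every login attempt."
    with open("requirements.txt") as f:
        for number, line in enumerate(f, start=1):
            shall_count = len(re.findall(r"\bshall\b", line, flags=re.IGNORECASE))
            if shall_count == 0 and line.strip():
                print(f"line {number}: no 'shall' - is this requirement verifiable?")
            elif shall_count > 1:
                print(f"line {number}: {shall_count} 'shall's - consider splitting the requirement")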

It's easy, by the way, to create "statement of fact" requirements, which force the testing of something that adds little value. For example, "The user interface module shall contain all the user interface code." Yes, you could

    verify this, but why? And what would you do if it failed? Completely

    restructuring the code would greatly increase risk, so it's likely nothing

    would be done. These types of requirements can still be stated in a

    specification as design expectations or goals, just not as verifiable

    requirements.

    Requirements traceability

    Part of the validation effort is to show that all requirements have been

    fulfilled via verification testing. Thus, it's necessary to trace requirements to

    the tests. For small systems, this can be done with simple spreadsheets. If

    the system is large, traceability can quickly become complex, so investing in

    a trace management tool is recommended. These tools provide the ability to

    generate trace reports quickly, sliced and diced any way you want.
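For the simple-spreadsheet case, a minimal sketch of such a trace check follows (it assumes a CSV export named trace.csv with 'requirement' and 'test_case' columns; the file name and layout are illustrative):

    import csv
    from collections import defaultdict

    # Map each requirement to the test cases that claim to verify it.
    coverage = defaultdict(list)
    with open("trace.csv", newline="") as f:
        for row in csv.DictReader(f):
            coverage[row["requirement"]].append(row["test_case"])

    # An empty test_case cell means the requirement is not yet traced.
    for requirement, tests in sorted(coverage.items()):
        tests = [t for t in tests if t.strip()]
        status = ", ".join(tests) if tests else "NOT TRACED - needs a test"
        print(f"{requirement}: {status}")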


    Additionally, some tools provide the ability to define attributes. The

    attributes can then be used to refine trace criteria further. Attributes can

    also be used to track test status. The tracing effort culminates in a Trace

    Matrix (or Traceability Matrix). The purpose of a matrix is to map the design

    elements of a validation project to the test cases that verified or validated

    these requirements. The matrix becomes part of the validation evidence

    showing that all requirements are fully verified.

Caution: Tracing and trace reporting can easily become a project in itself. The tools help reduce effort, but if allowed to grow unchecked they can become a budget drain.

Requirements reviews

Requirements reviews are key to ensuring solid requirements. They are conducted and documented to ensure that the appropriate stakeholders are part of the overall validation effort. Reviewers should include:

System architects, to confirm the requirements represent and can support the architecture;

Developers, to ensure the requirements can be developed;

Testers, to confirm the requirements can be verified;

Quality assurance, to ensure that the requirements are complete;

End users and system owners.


    Validation Maintenance and Project Control

    Support processes are an important component of all software

    development and maintenance efforts. Even if the software was not

    developed under a controlled process, at the time of validation the following

    support processes must be defined and operational to ensure that the

    software remains in a validated state and will be maintained in a validated

    state beyond initial deployment:

Reviews (change reviews, release reviews, and so on);

Document control;

Software configuration management (configuration control, release management, configuration status accounting, and so on);

Problem/incident management (such as fault reporting and tracking to resolution);

Change control.


Validation Protocols

Design Qualification (DQ)

Installation Qualification (IQ)

Operational Qualification (OQ)

Performance Qualification (PQ)

Other Considerations

    An Easy to Understand Guide | Software Validation

  • 7/30/2019 Software Validation Book

    31/102

    So now that you've assigned your testers to develop protocols to

    challenge the system and you have solid requirements, you must structure

    the tests to fully address the system and to support ongoing validation.

    The de-facto standard for validating software is the IQ/OQ/PQ

    approach, a solid methodology and one that auditors are familiar with (a

    benefit). Not following this approach shouldn't get you written up for a

    citation, but expect auditors to look a little more closely.

Note: Appendix C describes several aspects of test development. These are general guidelines and considerations that can greatly improve the completeness and viability of tests. An example V-model methodology diagram is depicted in the Software Development Life Cycle chapter above.


Design Qualification (DQ)

Design Qualification (DQ) is an often-overlooked protocol, but can be a

    valuable tool in your validation toolbox. You can use DQ to verify both the

    design itself and specific design aspects. DQ is also a proven mechanism for

    achieving Quality by Design (QbD).

    If software is developed internally and design documents are produced,

    DQ can be used to determine whether the design meets the design aspects

    of the requirements. For example, if the requirement states that a system

    must run on both 16-bit and 32-bit operating systems, does the design take

    into account everything that the requirement implies?

    This, too, is generally a traceability exercise and requires expertise in

    software and architecture. Tracing can also show that the implementation

    fully meets the design. In many cases, this level of tracing is not required. It's

    often sufficient to show that the requirements are fully verified through test.

The DQ may also be used to address requirements that don't lend themselves to operations-level testing. These are generally non-functional, static tests. The concept comes mostly from system validation (for example, the system shall use a specific camera, or a specific software library shall be used), but it can be applied to software as well.


Installation Qualification (IQ)

Installation Qualification (IQ) consists of a set of tests that confirm the

    software is installed properly. IQ may verify stated requirements. Where this

    is the case, the requirements should be traced to the specific test

    objective(s).

    There are three aspects to assessing IQ:

    Physical installation;

    Software lifecycle management;

    Personnel.

    Where the system is client-server based, both physical installation and

    software lifecycle management must be addressed. The client side is often

    the most difficult due to the difficulty of maintaining the configuration.

    Physical installation

    There are many factors that can be considered for software Installation

    Qualification, including:

    Amount of memory available on the processor where the software

    will run;


    Type of processor (for example, 16-bit, 32-bit) on which the

    software will run;

    Available disk space;

    Support applications (including revisions);

    Operating system patches;

    Peripherals.
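Several of these physical checks lend themselves to a scripted IQ step. Here is a minimal sketch (assuming a Linux-style host; the required values are illustrative and would come from your IQ protocol):

    import platform
    import shutil

    REQUIRED_FREE_DISK_GB = 10   # illustrative figure from the IQ protocol
    REQUIRED_OS = "Linux"        # illustrative target platform

    def check_disk(path="/"):
        free_gb = shutil.disk_usage(path).free / 1e9
        return free_gb >= REQUIRED_FREE_DISK_GB, f"free disk {free_gb:.1f} GB (need >= {REQUIRED_FREE_DISK_GB} GB)"

    def check_os():
        name = platform.system()
        return name == REQUIRED_OS, f"OS {name} {platform.release()} (need {REQUIRED_OS})"

    if __name__ == "__main__":
        for label, check in [("disk space", check_disk), ("operating system", check_os)]:
            ok, detail = check()
            print(f"IQ check - {label}: {'PASS' if ok else 'FAIL'} ({detail})")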

A potential issue with some applications, especially those that are web-based, is the browser. If you're a Firefox user, for example, you've probably

    gone to some sites where the pages don't display correctly and you can see

    the full site only if you use Internet Explorer. These cases may require that

    every client configuration be confirmed during IQ.

    Where commercial products or components are used, the

    requirements for them need to be well understood and, if not documented

    elsewhere, documented (and verified) in the IQ.

The cloud and other virtual concepts are tough because you have relinquished control over where your application runs or where the

    data is stored. Does that rule out the use of such environments in a regulated

    environment? Not necessarily. Again, it comes back to risk.

    If there's no risk to human life, then the solution may be viable. If the

    system maintains records that are required by regulation, it will take

    considerable justification and verification. And, as always, if the decision is

    made to take this direction, document the rationale and how any risks are


    mitigated. In certain cases the Risk Assessment may require re-visiting.

    Software Lifecycle Management

    The IQ should assess the software's lifecycle management aspects to be

    sure that necessary support processes are in place to ensure the software

    can continually deliver the expected performance throughout its life. These

    include:

General management processes (change management, software configuration management, problem management);

Maintenance processes (backup and recovery, database maintenance, disk maintenance);

Verification that appropriate Disaster Recovery procedures are in place.

    These areas are more abstract than the physical aspects. It's fairly easy

    to determine if the specified amount of disk space is available. It's far more

    difficult to determine if the software content management system is

    sufficient to ensure the proper configuration is maintained across multiple

    branches. Clearly, though, these aspects cannot be ignored if a full

    assessment is to be made of the software's ability to consistently deliver

    required functionality. Most large companies have specialists that can help

    in these areas. If such specialists are not available, outside consulting can

    provide invaluable feedback.

    If the software is or has components that are purchased (commercially

    or contracted), the efforts will need to be assessed both internally and


    externally. For example, if your company purchases and distributes a

    software package, you will need to accept problems from the field and

    manage them. You will likely pass on these problems to the vendor for

    correction. So, even if some processes are contracted out, it does not relieve

    you of the responsibility to ensure their adequacy.

    Audits can be carried out either internally or externally to verify that

    software lifecycle components are being addressed and managed correctly.

The findings of such audits are usually written up in an audit report and may be cross-referenced from a validation report as necessary.

    Personnel

    If the use of a system requires special training, an assessment needs to

    be made to determine if the company has the plans in place to ensure that

    users are properly trained before a system is deployed. Of course, if the

    system has been deployed and operators are not adequately trained, this

    would be a cause for concern.

    Consider usage scenarios when assessing training needs. For example,

    general users may not need any training. Individuals assigned to database

    maintenance, however, may require substantial skills and, thus, training.

As part of any validation effort, training must be verified: an appropriate training plan must be in place to ensure that all users are trained

    and that any changes to systems or processes trigger additional training. All

    training must be documented and auditable. Training is a GxP requirement.


Operational Qualification (OQ)

Operational Qualification (OQ) consists of a set of tests that verify that requirements are implemented correctly. This is the most straightforward of the protocols: see requirement, verify requirement. While this is a gross simplification (verifying requirements can be extremely challenging), the concept is straightforward. OQ must:

    Confirm that error and alarm conditions are properly detected and

    handled;

    Verify that start-ups and shutdowns perform as expected;

    Confirm all applicable user functions and operator controls;

    Examine maximum and minimum ranges of allowed values.

    OQ Tests the Functional Requirements of the system.

OQ can also be used to verify compliance with required external standards. For example, if 21 CFR Part 11 is required, OQ is the place where

    Be sure to have clear, quantifiable expected results. If you have vague

    requirements, verification is difficult to do. For example, if a requirement

was established for the system to run "fast", it's not possible to verify this.


"Fast" is subjective. Verifiable requirements are quantifiable (for example, "Response time to a query shall always be within 15 seconds"). A

    good expected result will give the tester a clear path to determine whether

    or not the objective was met. If vague requirements do slip through,

    however, at least define something quantifiable in the test via a textual

    description of the test objectives.
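As an illustration, a quantifiable requirement like the 15-second query response above maps directly onto an automated OQ check. The sketch below uses a placeholder query_database() function, since the real call depends on the system under test:

    import time

    RESPONSE_LIMIT_S = 15.0  # from the example requirement: "within 15 seconds"

    def query_database():
        # Placeholder for the real system call being qualified.
        time.sleep(0.5)
        return "result"

    def test_query_response_time():
        start = time.monotonic()
        result = query_database()
        elapsed = time.monotonic() - start
        assert result is not None, "query returned no result"
        assert elapsed <= RESPONSE_LIMIT_S, f"response took {elapsed:.1f}s, limit is {RESPONSE_LIMIT_S}s"

    if __name__ == "__main__":
        test_query_response_time()
        print("OQ check passed: query responded within the required time")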


Performance Qualification (PQ)

Performance Qualification (PQ) is where confirmation is made that a

    system properly handles stress conditions applicable to the intended use of

    the equipment. The origination of PQ was in manufacturing systems

    validation, where PQ shows the ability of equipment to sustain operations

    over an extended period, usually several shifts.

    Those concepts don't translate well to software applications. There are,

however, good cases where PQ can be used to fully validate software. Web-based applications, for example, may need to be evaluated for connectivity

    issues, such as what happens when a large number of users hit the server at

    once. Another example is a database application. Performance can be

    shown for simultaneous access and for what happens when a database

    begins to grow.
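A minimal sketch of such a concurrency probe is shown below. The handle_request() function is an illustrative stand-in for one simulated user transaction; the user count and any pass criteria would come from your PQ protocol:

    import time
    from concurrent.futures import ThreadPoolExecutor

    CONCURRENT_USERS = 50  # illustrative intended-use load

    def handle_request(user_id):
        # Stand-in for one user transaction against the system under test.
        start = time.monotonic()
        time.sleep(0.1)  # placeholder for real work
        return time.monotonic() - start

    def run_load_probe():
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            times = list(pool.map(handle_request, range(CONCURRENT_USERS)))
        print(f"{len(times)} simulated users, worst response {max(times):.2f}s")

    if __name__ == "__main__":
        run_load_probe()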

    Critical thinking about what could impact performance is key to

    developing a PQ strategy. It may well be the case that a PQ is not applicable

    for an application. The decision and rationale should, once again, be

    documented in the validation plan.


Other Test Considerations

So if you do IQ, OQ, and PQ, do you have a bug-free system? No. Have

    you met regulatory expectations? To some degree, yes. You are still,

    however, expected to deliver software that is safe, effective, and, if you want

    return business, as error free as possible.

Generally, most validation functional testing is black-box testing.

    That is, the system is treated as a black box: you know what goes in and

    what's supposed to come out. (As opposed to white-box, where test design

    allows one to peek inside the "box," and focuses specifically on using

    internal knowledge of the software to guide the selection of test data.)

    There are a number of other tools in the validation toolbox that can be

    used to supplement regulatory-required validation. These are not required,

    but agencies such as the US FDA have been pushing for more test-based

    analysis to minimize the likelihood of software bugs escaping to the

    customer. They include:

    Static analysis;

    Unit-level test;

Dynamic analysis;


    Ad-hoc, or exploratory testing;

    Misuse testing.

    We'll briefly look at these tools. Generally, you want to choose the

    methods that best suit the verification effort.

    Static analysis

    Static analysis provides a variety of information, from coding style

    compliance to complexity measurements. Static testing is gaining more and

more attention from companies looking to improve their software, and from

    regulatory agencies. Recent publications by the US FDA have encouraged

    companies to use static analysis to supplement validation efforts.

    Static analysis tools are becoming increasingly sophisticated, providing

    more insight into code. Static analysis can't replace formal testing, but it can

    provide invaluable feedback at very little cost.

    Static analysis can generate many warnings, and each must be

    addressed. This doesn't mean they need to be corrected, but you do need to

assess them and determine if the risk of making changes outweighs the benefits. For working software, even a change to make variables comply with case standards is probably too risky.

    Static analysis is best done before formal testing. The results don't need

    to be included in the formal test report. A static analysis report should,

    however, be written, filed, controlled, and managed for subsequent


    retrieval. The report should address all warnings and provide rationale for

    not correcting them. The formal test report can reference the static analysis

    report as supplemental information; doing so, however, will bring the report

    under regulatory scrutiny, so take care in managing it.
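As a sketch of how a static analysis run can be captured for such a report, the snippet below shells out to pylint (assuming Python code and that pylint is installed; any command-line static analyzer could be substituted):

    import subprocess
    from datetime import date

    def run_static_analysis(target="my_module.py", report_path="static_analysis_report.txt"):
        # Run the analyzer and capture every warning for later assessment.
        result = subprocess.run(["pylint", target], capture_output=True, text=True)
        with open(report_path, "w") as report:
            report.write(f"Static analysis report - {date.today().isoformat()}\n")
            report.write(f"Target: {target}\n\n")
            report.write(result.stdout)
            report.write("\nEach warning above must be assessed; record the rationale for any left uncorrected.\n")
        return result.returncode

    if __name__ == "__main__":
        run_static_analysis()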

    Unit-level test

Some things shouldn't be tested from a user perspective only. Examples of this approach include scenarios where a requirement is tested in a stand-alone environment: for example, when software is run in debug mode with breakpoints set, or when results are verified via software code inspection.

    Another good use of unit testing is for requirements that are not

operational functionality but do need to be thoroughly tested: for example, an ERP system with a requirement to allow an administrator to customise a screen. A well-structured unit test or set of tests can greatly simplify matters.
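A minimal sketch of a unit-level test for the screen-customisation example follows. The save_layout() and load_layout() functions are illustrative stand-ins, not part of any real ERP product:

    import unittest

    # Illustrative stand-ins for the customisation functions under test.
    _layouts = {}

    def save_layout(user, fields):
        _layouts[user] = list(fields)

    def load_layout(user):
        return _layouts.get(user, ["default"])

    class TestScreenCustomisation(unittest.TestCase):
        def test_admin_can_customise_screen(self):
            save_layout("admin", ["lot_number", "expiry_date"])
            self.assertEqual(load_layout("admin"), ["lot_number", "expiry_date"])

        def test_unknown_user_gets_default_layout(self):
            self.assertEqual(load_layout("nobody"), ["default"])

    if __name__ == "__main__":
        unittest.main()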

    Unit tests and test results are quality records and need to be managed

    as such. There are several methods you can use to cite objective evidence to

    verify requirements using unit-level testing. One way is to collect all original

    data (executed unit tests) and retain in a unit test folder or binder (in

    document control). The test report could then cite the results. Another way

    is to reference the unit tests in formal protocols. Using this method, the unit

    tests can either be kept in the folder or attached to the formal protocols.


    Version identifiers for unit tests and for referencing units

    tested greatly clarify matters when citing results. For example,

    revision A of the unit test may be used on revision 1.5 of the

    software for the first release of the system. The system then

    changes, and a new validation effort is started. The unit test

    needs to change, so you bump it to revision B and it cites

    revision 2.1 of the software release. Citing the specific version

    of everything involved in the test (including the unit test)

    minimizes the risk of configuration questions or mistakes.

    Dynamic analysis

Dynamic analysis provides feedback on the code covered in testing. It is generally used for non-embedded applications, as the code has to be instrumented (typically an automated process done by the tool; instrumenting allows the tool to know the state of the software and which lines of code were executed, as well as providing other useful information) and it generates data that has to be output as the software runs. The fact that the software is instrumented adds a level of concern regarding the results, but within a limited scope it gives an insight into testing not otherwise possible.

    Dynamic analysis is also best done prior to and outside the scope of

    formal testing. Results from a dynamic analysis are not likely needed in the

    formal test report, but if there's a desire to show test coverage it can be


    included. Again, referencing the report would bring it under regulatory

    scrutiny, so be sure it's very clear if taking that route.

Ad-hoc or exploratory testing

Structured testing, by nature, cannot cover every aspect of the

application. Another area drawing praise from both internal testing groups and regulatory agencies is ad-hoc, or exploratory, testing. With a good tester (critical thinking skills), exploratory testing can uncover issues, concerns, and bugs that would otherwise go undetected until the product hits the field.

    We'll use an application's user interface as an example. A text box on a

screen is intended to accept a string of characters representing a ten-character serial number. Acceptable formal testing might verify that the field

    can't be left blank, the user can't type more than 10 characters, and that a

    valid string is accepted. What's missing? What if someone tries to copy and

    paste a string with embedded carriage returns, or a string greater than ten

    characters? What happens if special characters are entered? Again, the

    possibilities are nearly endless.
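Many of these challenges can later be folded back into repeatable checks. A minimal sketch follows, where validate_serial() is an illustrative stand-in for the field's validation logic and the rule set (exactly ten printable, non-space characters) is invented for the example:

    def validate_serial(value):
        # Illustrative rule: exactly ten printable, non-space characters.
        return (
            isinstance(value, str)
            and len(value) == 10
            and value.isprintable()
            and " " not in value
        )

    # Boundary and misuse cases drawn from the discussion above.
    cases = {
        "": False,              # blank entry
        "ABC1234567": True,     # valid ten-character serial
        "ABC12345678": False,   # eleven characters
        "ABC\r\n4567": False,   # pasted string with embedded carriage return
        "ABC!@#4567": True,     # special printable characters, still ten long
    }

    for value, expected in cases.items():
        assert validate_serial(value) == expected, repr(value)
    print("all text-box cases behaved as expected")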

    All scenarios can't be formally tested, so ad-hoc testing provides a

    vehicle to expand testing. This is largely informal testing carried out by the

developer and SMEs to eliminate as many problems as possible before dry-running and demonstrating to the user base.

    36

    An Easy to Understand Guide | Software Validation

  • 7/30/2019 Software Validation Book

    45/102

    The problem lies in how to manage exploratory testing and reporting

    the results. All regulatory agencies expect protocols to be approved before

they are executed. This is not possible with exploratory testing, since the actual tests aren't defined until the tester begins testing. This is

    overcome by addressing the approach in the VMP and/or the product-

specific validation plan. Explain that formal testing will satisfy all regulatory requirements, and that exploratory testing will then be used to further analyze the software in an unstructured manner.

    Reporting results is more difficult. You can't expect a tester to jot down

    every test attempted. This would be a waste of time and effort and would

    not add any value. So, it's reasonable to have the tester summarize the

    efforts undertaken. For example, using the user interface example, the

    tester wouldn't report on every test attempted on the text box; instead, the

    tester would report that exploratory testing was performed on the user

    interface, focusing on challenging the text box data entry mechanism. Such

    reporting would be more suited to a project reporting mechanism and

    information sharing initiative rather than being formal validation testing.

Note: It is perfectly acceptable for developers to test their work prior to formal validation testing.


All failures must be captured, so it's a good idea to fold tests that failed into the formal protocols. Additionally, encourage the

    tester to capture issues (things that didn't fail but didn't give

    expected results) and observations (things that may just not

    seem right). The change management process can control any

    changes required.

Misuse testing

Misuse testing is a hybrid of formal testing and exploratory testing. For

    example, if software is running on a device that has a software-controlled

    button, misuse testing would include actions such as holding the button

    down, or rapidly pressing the button. Unlike exploratory testing, all tests

    attempted should be documented and the results captured in a formal test

    report.

People with general knowledge of the technology but not of the software make good misuse testers. They aren't biased by any

    implementation details. If they do something wrong, it may be an

    indication of a potential problem with the software.


Validation Execution

Preparing for a test

Executing and recording results

Reporting

Managing the results


    Execution of validation protocols is pretty straightforward: follow the

    protocols. Of course, few things go as planned, so in addition to discussing

    the basics of execution, we'll also discuss how to handle anomalies.

Before jumping into protocol execution, conduct a Test Readiness Review, ideally before the start of each phase (before IQ, before OQ, and so on). This review assesses the

    organization's readiness to begin validation work. All

    requirements are reviewed to ensure both human and

    equipment resources are available and in the proper state (for

    example, testers are adequately trained, or equipment is

    available and within calibration).


Preparing for a test

The first step in preparing for a test is to establish the environment,

    which should be as close to production as possible. This means that the

    software is:

    Developed using a defined, controlled configuration management

    and build process;

    Installed according to the defined installation procedure;

    Installed on production or production-equivalent equipment.

    These elements would be part of the Test Readiness Review. If,

    for example, production-equivalent equipment is not

    available, the team could analyze what's available and

    determine if some or all of testing can proceed. Such risk

assessments must be documented; the Test Readiness Review minutes are a good place.

In parallel, you must identify and allocate test personnel.

    Test personnel must be sufficiently trained, educated, and experienced to

    properly execute the protocols. Personnel skill requirements should have

    been detailed in the validation plan (or associated training plan) so that


    personnel decisions are not arbitrary.

    Furthermore, you must make sure test personnel are not put in conflict-

    of-interest positions. Ideally, a separate QA team should be used. Clearly,

    members of the development team should not be selected, but the lines get

    fuzzy quickly after that. Just make sure that your selection is defensible. For

    example, if a developer is needed to perform a unit test (typical, since QA

    folks may not have the code-reading skills of a developer), then ensure the

    developer is not associated with the development of that unit.

    Note: It is absolutely forbidden for anyone to review AND approve their own work.

    Executing and recording results

    Record all test results using Good Documentation Practices (GDP)! A

    summary of what to capture is provided in Appendix D.

    Minimize annotations, but don't hesitate to make them if it helps clarify

    results or test efforts. For example, if a test is stopped at the end of one day

    and resumed the next day, an annotation should be made to show where

    and when (the day) testing stopped and where and when testing resumed.

Similarly, if a test is performed by multiple testers, a sign-off should be available for all testers. If this is not possible, then an annotation should be made indicating which steps were performed by which tester.

    Mistakes will invariably be made. Again, use GDP to correct the mistake

    and provide an explanation.


    If a screen shot is captured to provide evidence of compliance, the

    screen shot becomes a part of the test record. It's important, therefore, to

    properly annotate the screen shot. Annotations must include:

    A unique attachment reference (for example Attachment 1 of

    VPR-ENG-001);

    The tester's initials (or signature, if required by company

    procedures);

    The date the screenshot was taken;

    A reference to the test protocol and test step;

Correct pagination in the form "page x of y" (even if a single page).

    Annotations must ensure complete linkage to the originating test and

    are in line with GDP.
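Where screenshots are captured electronically, generating the annotation text programmatically helps keep it consistent. A minimal sketch (the function and field names are illustrative; VPR-ENG-001 echoes the example above):

    from datetime import date

    def annotation_block(attachment_no, protocol, step, tester, page, pages):
        # Assemble the required annotation fields for a captured screenshot.
        return "\n".join([
            f"Attachment {attachment_no} of {protocol}",
            f"Tester: {tester}  Date: {date.today().isoformat()}",
            f"Test reference: {protocol}, step {step}",
            f"page {page} of {pages}",
        ])

    print(annotation_block(1, "VPR-ENG-001", "7.3", "AB", 1, 1))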

    Protocol variances are to be handled as defined in the VMP. Appendix B

    provides a summary of standard techniques. Company policies and

    procedures must be followed, as always.

    Issues and observations should be noted by the tester. Support tools

    (for example, problem reporting systems) should be provided to facilitate

    such reporting, but observations do not need to be detailed in the test

    report since they don't represent a test failure.

    If you want to learn more about Good Documentation Practices, why not

    purchase a copy of our E-Book An Easy to Understand Guide to Good

    Documentation Practices? Go to www.askaboutvalidation.com


    Reporting

    At its core, the test report summarizes the results (from IQ, OQ, and PQ)

    of the test efforts. Testers and reviewers involved in testing, along with test

    dates, should be recorded. In general, a report follows the following outline:

    I. System Description

    II. Testing Summary (test dates, protocols used, testers involved)

    III. Results

    IV. Deviations, Variances, and Incidents

    V. Observations and Recommendations

    VI. Conclusions

    In many cases, there will be variances (for example, the test protocol

    steps or the expected results were incorrect) and/or deviations (expected

    results not met), which should be handled in accordance with the VMP.

    Generally, variances and deviations are summarized in the test report,

    showing that they are recognized and being dealt with properly.

    Failures, on the other hand, must be explicitly described and explained.

    For each failure, provide a link to the formal problem report. It's typical to summarize the results in the Results section of the report and then use an

    appendix to provide additional details (for example, a problem report

    tracking number).

    It's also a good practice to allow the test team to provide

    recommendations. For example, a tester could observe that in a particular


    situation the system runs extremely slowly. Perhaps the requirements were

    met, but the customer or end user may not be happy with the application.

    Allowing recommendations in the report can highlight areas that may need

    further investigation before launch. The report should draw a conclusion

    based on objective evidence that the product:

    Sufficiently meets the requirements

    Is safe (per verified risk mitigations)

    Can consistently fulfill its intended purpose (or not).

    Observations and recommendations should be followed up by the

    development team.

    If there are test failures, a risk assessment should be performed to

    determine whether the system can or cannot be used in a production

    environment. For example, if a requirement specifies that a response to a

    particular input is made within five seconds, but one response time comes

    back after five seconds, an assessment can be performed. If the team agrees

    that this is acceptable for production (and this is the only failure), the

    rationale can be documented in the test report (or release report), and the

    system can be accepted for production use. Of course, if the failure is in a

    safety-critical area, there's probably no reasonable rationale for releasing

    the system.
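
    As a sketch of how such a timed requirement might be checked during

    execution (the step ID and the stand-in action below are hypothetical; the

    five-second limit comes from the example above):

        import time

        RESPONSE_LIMIT_S = 5.0  # the hypothetical requirement's limit

        def timed_step(step_id, action):
            """Run one protocol step and record PASS/FAIL against the limit."""
            start = time.monotonic()
            action()  # in a real protocol this would drive the system under test
            elapsed = time.monotonic() - start
            result = "PASS" if elapsed <= RESPONSE_LIMIT_S else "FAIL"
            print(f"Step {step_id}: response in {elapsed:.2f}s -> {result}")
            return result

        # Stand-in action so the sketch runs on its own.
        timed_step("OQ-12.4", lambda: time.sleep(0.5))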


    If the validation effort is contracted out, however, the test report may

    not be available or appropriate to determine whether the system can be

    released with test failures, especially if the impact of some of the failures

    cannot be fully assessed by the test team. In such cases, it's acceptable for

    a report addendum or a separate release report to be produced with the

    company's rationale for releasing the system (presuming it's justifiable).

    Depending on the size of the system, multiple reports can be used as

    required. For example, a unit testing report may be issued after a batch of

    unit tests has completed; at the other end of the scale, there may be a

    requirement for each test to be reported on in its own right.


    Managing the results

    Testing is done, the software is released, and you're making money.

    What could go wrong? Audits and auditors: the reality that everyone in a

    regulated industry faces. You can mitigate potential problems by preparing a

    Validation Pack that contains:

    User requirements;

    Supporting documentation (internal and external user manuals,

    maintenance manuals, admin, and so on);

    Vendor data (functional specification, FAT, SAT, validation

    protocols);

    Design documentation;

    Executed protocols;

    An archive copy of the software (including libraries, data, and so

    on);

    Protocol execution results;

    The test report;

    A known bug list with impact assessment.

    Then, when the auditor asks whether the software has been validated,

    present the Validation Pack (or at least take them to the relevant library).

    You'll make his or her day.


    Maintaining the Validated State

    Assessing Change

    Re-testing

    Executing the re-test

    Reporting

    It's common knowledge that changes increase the risk of introducing

    errors. The current validation is challenged by:

    Any change to the system's environment;

    Any change to requirements or implementation;

    Daily use, as databases grow, disks fill, and additional users are

    added.

    That's why it's critical to assess validated systems continually and take

    action when the validation is challenged. As stated earlier, validation is a

    journey, not a destination. Once you achieve the desired validated state,

    you're not finished. In fact, it's possible the work gets harder.

    Assessing Change

    Change will happen, so you might as well prepare for it. Change comes

    from just about everywhere, including:

    Specification changes (users want new features, different modes);

    Software changes (driven by specification changes and bug fixes);

    Infrastructure changes (additions of new equipment on the

    network, changes to the network architecture, new peripherals

    installed, change from local servers to the cloud);

    System upgrades (auto- and manually-installed upgrades to the

    operating system, tools, and libraries);

    Interface changes (to the front-end or back-end with which the system

    communicates, browser changes);

    User changes (new users, change of user roles, modification of user

    permissions);

    An aging system (databases grow, disks fill up, more users slow

    down the system);

    Expansion (the system needs to be installed and used at another

    site).


    These are just a few examples. The answer is not to re-run all validation

    tests on a weekly basis. That would be a waste of time and money. So, how

    do you know what changes push you out of validation and require re-test?

    Risk Assessment. The easiest changes to analyze are specification and

    software changes. They are, generally, controlled pretty well in terms of

    knowing when changes are made, the scope of the change, and the likely

    impact.

    Assess each change in terms of risk to the existing validation results.

    Could the change affect the results of the test? If so, it's likely some re-

    testing is in order.

    For software changes, it's important to understand the scope of the

    change and assess the potential impacts so you can define a rational

    regression test effort, in addition to the tests for the specific changes.

    Timing releases

    So, you can see that you should plan updates rather than push releases

    out on a daily or weekly basis. Of course, if a customer demands a new

    release, or there are critical bugs that need to be corrected, you don't want

    to delay installing it. Otherwise, batching changes saves time, because a

    single re-test effort can cover the likely overlaps in testing.


    Risk analysis and the Trace Matrix

    Risk analysis is best facilitated through the Trace Matrix. Using the

    matrix enables you to:

    See what requirements were affected;

    Identify related requirements;

    Establish the full regression test suite.

    The applicable segments of the Trace Matrix should be used to

    document the rationale for the test and regression suite.
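
    As a minimal sketch of the idea (the requirement and test IDs are

    hypothetical), a trace matrix can be held as a simple mapping and queried to

    build the regression suite for a change:

        # Trace matrix: requirement -> tests that verify it (hypothetical IDs).
        trace_matrix = {
            "URS-001": ["OQ-01", "OQ-02"],
            "URS-002": ["OQ-02", "PQ-01"],
            "URS-003": ["PQ-02"],
        }

        changed = {"URS-002"}  # requirements affected by the change

        # Tests that directly verify the changed requirements...
        suite = {t for r in changed for t in trace_matrix[r]}

        # ...plus tests for related requirements that share a test with the change.
        related = {r for r, tests in trace_matrix.items() if set(tests) & suite}
        suite |= {t for r in related for t in trace_matrix[r]}

        print(sorted(suite))  # -> ['OQ-01', 'OQ-02', 'PQ-01']

    The same mapping, printed out, documents the rationale for the chosen

    test and regression suite.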

    Indirect changes

    How do you address indirect changes, that is, those changes that you

    may not have direct control over, or potentially not even know about?

    Understand that anything your IT folks do may well have an impact on

    the validated states of your systems. So, the first order of business is to get

    friendly with your IT staff. Establish a working relationship with them so

    you'll be able to monitor their plans. This is not to be obstructive; it's just

    good planning. The earlier in planning you can get involved, the less

    chaotic things will be after the changes are made. Dealing with a non-

    compliance audit because of an IT change will cost more than dealing with

    any potential problems up front.

    Again, assess all changes in terms of system impact. Most likely, the IQ

    will be impacted and those tests may need to be re-developed before re-

    executing.
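
    One lightweight way to catch such drift is to compare the environment

    recorded at IQ time against the current one. Here is a sketch; the baseline

    values are purely illustrative:

        import platform

        # Baseline captured when the IQ was executed (illustrative values).
        qualified = {
            "os": "Windows-10-10.0.19045",
            "runtime": "3.11.4",
        }

        # Current environment, gathered the same way the baseline was.
        current = {
            "os": platform.platform(),
            "runtime": platform.python_version(),
        }

        drift = {k: (qualified[k], current[k])
                 for k in qualified if qualified[k] != current[k]}

        if drift:
            print("Environment drift detected; assess impact on the IQ:", drift)
        else:
            print("Environment matches the qualified baseline.")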


    For cases where the install base must be expanded (additional sites,

    additional equipment, and so on), you'll need to assess the level of

    validation required for each additional installation. If this has already been

    addressed in the validation plan, the effort should be consistent with the

    plan. If, however, it makes sense to expand the test effort based on

    experience, you should update the validation plan and perform the extra

    testing as required. If it's not addressed in the validation plan, perform a risk

    assessment, update the validation plan (so you don't have to re-visit the

    exercise should the need arise again in the future), and perform the

    validation.

    General Observations

    General observations are another source of input to the risk analysis.

    Are customers or users complaining that the system is slower? Do some

    transactions time out? These factors may signify a degrading system. Don't

    ignore such complaints; monitor them closely. They could be leading

    indicators that the performance (PQ) is no longer adequate.


    Re-testing

    Risk assessment gives you a good idea of what to re-test. But is that

    sufficient? Before launching the re-test effort, take a step back and re-read

    the VMP. Make sure that you're staying compliant with what the master plan

    says. Then, update the validation plan for the system. This will:

    Lay out what tests you plan to execute for the re-test;

    Provide the rationale from the risk analysis for the scope of the

    retest.

    In most cases, not all tests in the validation suite will need to be re-run.

    For example, if the change is isolated to a user interface screen (a text box

    went from 10 characters long to 20 characters), you likely don't need to re-

    run IQ. Consider the back-end consequences, however, because it's possible

    that such a change fills up a database faster, so be sure to look at the big

    picture.

    As mentioned previously, a well-structured test suite facilitates

    isolating the testing to a limited scope. For example, if you have a user

    interface (UI) protocol and only the UI changed, it may be possible to limit


    testing to just the UI protocol. If, however, you scatter UI tests throughout

    the functional tests, it may not be possible to test the UI in isolation and,

    depending on the change, may require all tests to be executed.

    Regression testing

    You may run into a case where something changed that was considered

    isolated to a particular function, but testing revealed it also impacted

    another function. These regressions can be difficult to manage. Such

    relationships are generally better understood over time.

    The best advice is to look for any connections between any changes and

    existing functionality and be sure to include regression testing (testing

    related functions even though the known changes have no direct impact on them) in

    the validation plan. Better to expand test scope and find them in testing than

    let a customer or user find them.

    Test Invalidation

    Hopefully, you never run into a situation where something turns up that

    invalidates the original testing, but it's been known to happen.

    For example, if calibrated equipment used in the test was found to be

    out of calibration at the next calibration check, there's no fault or

    deception, but you must re-run the test. Or, worst-case scenario, a test

    failed originally but the tester didn't write it up as a failure (due to fear, or

    some other reason) and passed it. This is fraud, by the way. In this case, it's


    possible that this is a systemic issue and auditors may not have any faith in

    any of the results, so plenty of analysis and rationale will have to go into test

    scope reduction. In fact, this may even require a CAPA to fully reveal the root

    cause and fix the systemic issue. But better to find it, admit it, and fix it than

    have an auditor find it.

    Executing the re-test

    No matter how well your tests are structured, there will likely be some

    things that are simply out of scope. For example, if you structured your tests

    to isolate the UI (User Interface), and then had only a minor change to the UI, it

    probably doesn't make sense to re-test the entire UI. Instead of writing a

    new protocol, one approach is to note which steps will be executed in the

    updated validation plan and then strike through the unused steps and mark

    them as N/A (using GDP). The validation plan and the mark-ups absolutely have to agree,

    so check and double-check. Explanatory annotations on the executed test

    (referring to the validation plan) also help.

    Reporting

    Report the re-test effort in the same way as the original execution.

    Show that the re-validation plans were met through testing and

    report the results. Handle failures and deviations as before.


    Special Considerations

    Commercial

    Open Source Systems

    Excel Spreadsheets

    Retrospective Validation

    Commercial

    Commercial applications have a supplier and, thus, the purchase falls

    under supplier management. This means the supplier must be approved, so

    you must have criteria to approve a supplier.

    Since commercial software falls under regulatory scrutiny, part of the

    supplier approval criteria must include confirmation that the software is

    developed and maintained in a controlled manner.

    Depending on the level of risk or how critical the application is, an

    assessment of the vendor's capabilities, including and up to an audit of the

    vendor, may be warranted. Any such assessment should be available in the

    Validation Pack. Supplier Assessments are common in the validation world.

    One of the issues when validating purchased software is how to verify

    large, commercial packages. Many companies purchase packages for

    enterprise-level systems: Enterprise Resource Planning (ERP),

    Manufacturing Resource Planning (MRP), Electronic Document

    Management System (EDMS), and so on. By nature, these are "do all"

    applications. Most companies use only a fraction of the capabilities and,

    typically, tailor the use of the system to their specific needs.


    Some systems allow add-on applications to be built either by the

    provider or by the client. If you don't use certain built-in capabilities of an

    application, those capabilities need not be in the validation scope.

    When you purchase software, you can:

    Use it out-of-the-box;

    Tailor it for use in your facility;

    Customize it.

    Using out-of-the-box

    Since validation is done on the system's intended use, for out-of-the-

    box systems, the testing part of validation would be only on how it's used in

    your facility (per your specifications on how you expect it to work).

    Tailoring

    Tailoring takes the complexity to the next level. In addition to testing for

    your intended use, you should also add tests to verify that the tailored

    components function as required, consistently.

    Frequently, systems allow you to define levels of security, or assign

    users to pre-defined levels of security (such as administrator, maintenance,

    engineer, operator, and so on). Part of the testing, in this case, would include

    verifying that users are defined with the appropriate security access levels

    and that the configured policies correctly enforce any restrictions imposed on

    the various user groups.
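
    A sketch of such a check follows; the role names and the

    get_configured_roles stand-in are hypothetical, and in practice the

    configured roles would come from the system's own security export:

        # Approved security specification: role -> permitted functions.
        approved = {
            "operator": {"run_batch"},
            "engineer": {"run_batch", "edit_recipe"},
            "admin":    {"run_batch", "edit_recipe", "manage_users"},
        }

        def get_configured_roles():
            # Stand-in for reading the system's actual security configuration.
            return {"operator": {"run_batch", "edit_recipe"}}  # deliberate mismatch

        for role, perms in get_configured_roles().items():
            extra = perms - approved.get(role, set())
            missing = approved.get(role, set()) - perms
            status = "PASS" if not (extra or missing) else "FAIL"
            print(f"{role}: {status} (extra={sorted(extra)}, missing={sorted(missing)})")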


    Customizing

    Customized systems are the most complex in terms of test effort.

    Customizations need to be thoroughly tested to ensure the custom

    functions perform as specified.

    In addition, substantial regression testing must be performed to ensure

    that related areas of the base (out-of-the-box) functionality are not

    adversely impacted.

    Due diligence

    Regardless of how the software is incorporated into use, due diligence

    should be performed to get problem reports related to commercial

    software. These may be available online through the vendor site, or may

    need to be requested from the vendor. Once obtained, be sure to assess

    each issue against how the software will be used at your site to determine if

    concessions or workarounds (or additional risk mitigations) need to be

    made to ensure the software will consistently meet its intended use. This

    analysis and any decisions become part of the validation file.

    For systems that are customized by the vendor, there may be

    proprietary information involved in implementing the

    customizations. Ensure that the contracts protect your

    Intellectual Property (IP).


    Protecting yourself

    With commercial software, especially mission-critical software, you are

    at risk if the vendor is sold or goes out of business. One way to protect your

    company is to place the software source code in escrow. Should the

    vendor fold or decide to no longer support the software, at least the

    source can be salvaged. This has inherent risks. For example, now

    that you have the software, what do you do with it? Most complex

    applications require a certain level of expertise to maintain. This all needs to

    be considered when purchasing an application.

    Documentation

    A "How We Use the System Here" or SOP (Standard Operating

    Procedure) specification facilitates the validation effort regardless of how

    the system is incorporated into the environment. The validation plan

    specifies how commercial systems will be validated.

    SOPs are written to detail how the system should be operated;

    because everyone must follow the SOPs, the system is used in a

    consistent fashion at all times.

    Open Source Systems

    Conventional wisdom says to stay away from open source systems.

    Many say that you can't expect quality when using an open source system.

    From experience, however, a widely-used open source system is robust.

    So, you have to weigh the potential risks against the benefits. It's probably

    not a good idea to use an open source system for a mission-critical

    application.

    The easy road is to avoid open source systems altogether. If, however,

    the benefits outweigh the risks and you choose to use one, you can probably

    expect some very critical scrutiny. Thus, your validation efforts will need to be extremely robust. Since

    the system is open source, you should plan on capturing the source code and

    all of the tools to build the system into the configuration management

    system. Testing should be very detailed and should include tests such as

    misuse, exploratory, and dynamic analysis.
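
    Capturing those build inputs can be as simple as recording a

    cryptographic hash of each archive alongside its version; a sketch, with

    hypothetical file names:

        import hashlib
        from pathlib import Path

        def sha256(path):
            """Hash a file so the exact build input can be recorded in CM."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        # Hypothetical source and toolchain archives to place under CM.
        for item in ["app-src-1.4.2.tar.gz", "toolchain-gcc-12.2.tar.xz"]:
            if Path(item).exists():
                print(f"{item}: {sha256(item)}")
            else:
                print(f"{item}: not found (sketch only)")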

    Excel Spreadsheets

    Spreadsheets are a sticky issue. If you do any research, you'll see

    assertions such as, "You can't validate a spreadsheet." The truth is, you can't

    validate the spreadsheet software (e.g., Excel), but you can validate a

    spreadsheet. It's done quite frequently. But you need to take a rigid stance

    on usage. You must:

    Validate all formulas, macros, and data validation items;

    Lock down (protect) the spreadsheet to prevent changes (only data

    entry fields should be open for editing);

    Set up data entry fields to validate the data entered (numeric values

    should be bounded, and so on).

    There are arguments that a spreadsheet can be used without validation

    if the final results are printed out (and signed). This would constitute the

    "typewriter rule" for electronic records. However, when a spreadsheet uses

    formulas, the only way for this approach to be valid would be to manually

    verify each calculation. This would bypass the use of the spreadsheet, so

    that doesn't seem to be a viable approach. Thus we recommend that even if

    you print and manually sign it, you must validate any formulas and macros.
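
    As a sketch of what such checks can look like using the openpyxl

    library: the file name, cell addresses, and formula below are hypothetical,

    and the cached results are only present if the file was last saved by a

    spreadsheet application:

        from openpyxl import load_workbook

        # Load twice: once for formula strings, once for cached results.
        formulas = load_workbook("assay_calc.xlsx")
        values = load_workbook("assay_calc.xlsx", data_only=True)
        ws_f, ws_v = formulas.active, values.active

        # 1. The sheet must be locked against changes.
        assert ws_f.protection.sheet, "Sheet is not protected"

        # 2. The expected formula must be present, unaltered.
        assert ws_f["C2"].value == "=A2*B2", "Unexpected formula in C2"

        # 3. The stored result must match an independent re-calculation.
        expected = ws_v["A2"].value * ws_v["B2"].value
        assert abs(ws_v["C2"].value - expected) < 1e-9, "Result mismatch in C2"
        print("Spreadsheet spot checks passed.")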


    It is a common myth that, according to 21 CFR Part 11 (the US FDA

    regulation on Electronic Records/Electronic Signatures), spreadsheets

    cannot be validated for e-signatures unless add-on packages are

    used. Whilst it is accepted that Microsoft Excel doesn't facilitate

    electronic signatures (yet!), other spreadsheet packages may be able to do

    so.

    Retrospective Validation

    If a system is in production use but has not been validated, and

    validation is required, a retrospective validation exercise needs to be

    performed. This is no different from a normal validation exercise (except

    that the validation of the system wasn't incorporated into the initial project

    delivery). The concern is what to do if anomalies are revealed during

    validation.

    This is a risk management exercise and is beyond the scope of this

    document. It is sufficient to say that any anomaly would need to be analyzed

    to determine if subsequent actions are warranted. Depending on the

    severity of the anomaly, actions such as customer notification or recall could

    be necessary. So, it's always better to validate before production use to

    minimize the likelihood of such drastic actions.

    Summary

    Software validation is a challenge with many aspects to consider, from

    process to technical to administrative. This document should help with the

    process and administrative aspects. If you take a planned, responsible

    approach and your positions are defensible, you should be fine.

    Use validation efforts as a business asset. Doing validation for the sake

    of checking a box shows no commitment and will likely result in problems in

    the long run.

    Consultants can be an effective means to either kick-start validation

    efforts, bring in validation expertise as needed, or manage the entire

    validation effort. This book, hopefully, has provided you with sufficient

    understanding of the validation process so you can assess consultants for

    competency.

    In addition, this book is intended to be an affordable and easy to

    understand guide, and it is just that: a guide. Also check your company procedures and

    the relevant regulations to ensure that you are always both current

    and correct.

    Frequently Asked Questions

    Q: I have purchased Application XYZ but I only use capabilities 1, 3, and 7

    of the ten included. Do I have to validate the entire application?

    A: No, validate only what you use. Develop the requirements specification

    to define exactly what you use, as you use them. This forms the basis

    (and rationale) for what you actually validate. Should you begin using

    additional capabilities later, you will need to update the requirements

    specification and validate the new capabilities. You will also need to

    perform a risk analysis to determine what additional testing needs to be

    done (previously validated capabilities that interface with, or may be

    influenced by, the new capabilities will require regression or other testing).

    Q: When do I start?

    A: Now. Whether you have a system that's been in operation for years, or

    whether you've started development of a new system, if your quality

    system requires validated systems, they need to be validated.

    Q: How do I start?

    A: The Validation Master Plan is the foundation. It will also help you

    determine (through risk analysis) what the next steps need to be.


    Q: Do I have to validate Windows (or Linux, or ...)?

    A: The operating system on which the application runs could well

    introduce errors or cause the application to function in an unexpected

    manner. But how would you validate an operating system? Generally, it

    should be sufficient to validate the application on the target operating

    system (remember the rule about testing in the intended environment).

    If the application is to run on multiple operating systems, it's good

    practice to qualify the installation on each operating system. Most

    companies have server qualification procedures that ensure that each

    build has been done consistently and is hosted on a specific (qualified)

    hardware platform.

    Q: Do I have to validate my network?

    A: A qualified network is expected when that network is hosting a

    validated system. Generally, a network need not be validated to ensure

    validation of a software application. There could be cases where

    network validation is necessary, but that's outside the scope of

    software validation.

    Q: What about hosted software?

    A: Hosted software (software resident on another company's equipment,

    with execution controlled via the internet or some other networking

    protocol) is a relatively new idea. If the software meets criteria to

    require validation (affects quality, and so on), you are responsible for


    validation. Typically, you as the client would establish contractual

    requirements for the hosting company to meet basic standards for life

    cycle management, change control, backup and recovery, and so on.

    You might perform validation testing yourself or you might contract

    with the hosting company to do so. In this situation, it's extremely

    important that the contract be set up to require the hosting company to

    notify you of any changes (to the software, the environment, and so on).

    And should changes occur, an assessment would be required to

    determine if re-validation is warranted. Many companies realize the

    potential exposure for regulatory violations and tend to avoid using

    hosted applications.

    Q: What about distributed applications? Is validation required on every

    installation?

    A: A distributed application is a system installed on multiple computers

    and networks in multiple locations. If you can prove that each install on

    each system is equivalent, then you can test on one system and verify a

    subset of the others. In general, you must prove that each computer meets

    the minimum specification requirements and that the software on

    each is the same version. If that's not possible or is impractical, you may

    be able to show equivalence and minimize testing on other systems (for

    example, it may not be necessary to test the user interface on every