Towards Self-Testing in Autonomic Computing Systems

Tariq M. King, Djuradj Babich,

Jonatan Alava, and Peter J. Clarke

Software Testing Research Group

Florida International University

Ronald Stevens

Department of Computing and

Information Sciences

Florida A & M University

03/21/2007

ISADS ‘07

2

Introduction

Continual growth in size and complexity of computing systems has led to a need for support tasks to be shifted from people to technology.

Movement towards self-managing systems that automatically configure, heal, protect, and optimize themselves – Autonomic Computing.

There is, however, a lack of techniques to dynamically validate such systems.

How can we be sure that AC systems behave correctly after a change is made at runtime?

3

Introduction (cont’d)

This work presents a methodology that supports automatic runtime validation of change requests resulting from self-management in AC systems.

Our methodology evaluates test adequacy through the use of a validation policy file.

We extend the current architecture of AC systems by applying the concept of autonomic managers to testing activities.

We provide two strategies for dynamic validation based on system constraints.

4

Outline

Background

Challenges

Overview of Testing Approach

Architecture of Test Managers

Prototype

Related Work

Conclusion & Future Work

5

Background – AC Systems

Source: IBM Autonomic Computing Architecture Team. An architectural blueprint for autonomic computing. IBM, Hawthorne, NY, June 2006.

6

Background – Software Testing

Two broad categories:

Blackbox – specification-based, focuses on functionality, i.e., inputs → expected outputs.

Whitebox – implementation-based, focuses on whether or not the program has been thoroughly (adequately) exercised.

Regression Testing – determines whether or not modifications to software have introduced new errors into previously tested code.

Retest-all – retest the entire test suite

Selective – only retest a strict subset
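As a concrete illustration of the black-box category above, a specification-based JUnit 4 test checks inputs against expected outputs without inspecting the implementation (this example is illustrative and not part of the original slides):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MaxSpecTest {
    // Black-box test: derived purely from the specification of Math.max
    // (return the larger of the two arguments), not from its code.
    @Test
    public void returnsLargerOfTwoInputs() {
        assertEquals(7, Math.max(3, 7));
        assertEquals(-1, Math.max(-1, -5));
    }
}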

7

Background – Safe Adaptation

Zhang et al. (WADS 2004)

Source: J. Zhang, B. H. C. Cheng, Z. Yang, and P. K. McKinley. Enabling safe dynamic component-based software adaptation. In WADS, pages 194–211, 2004.

8

Testing Approach

Idea: Develop an implicit autonomic characteristic – Self-Testing.

Incorporate self-test activities into Autonomic Managers (AMs) via test interfaces.

Test interfaces will allow interactions between AMs and Test Managers (TMs); a sketch of such an interface follows below.

Two strategies:

Safe Adaptation with Validation

Replication with Validation

Select based on cost and feasibility.
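To make the test-interface idea concrete, a minimal sketch of an interface an AM might expose to a TM is shown here; all names are hypothetical, and the paper does not prescribe a signature:

// Hypothetical interface through which a Test Manager (TM) interacts
// with an Autonomic Manager (AM) to validate a change request at runtime.
public interface TestSupport {
    // Identifier of the change request awaiting validation.
    String pendingChangeRequestId();

    // A testable replica of the managed element (Replication with
    // Validation) or a handle held safe during Safe Adaptation.
    Object acquireTestTarget();

    // The TM reports its verdict; the AM then commits or rolls back.
    void reportValidationResult(String changeRequestId, boolean accepted);
}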

9

Testing Approach (cont’d)

A High-Level Test Model for AC Systems

10

Test Managers (TMs)

Extend the concept of autonomic managers to testing activities, i.e., self-testing.

Responsible for:

Performing regression testing

Dynamically generating test cases

Executing test cases and recording test results

Evaluating test results and coverage

Maintaining the test repository, i.e., test cases, test logs and validation policy.
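A minimal skeleton of how these responsibilities might map onto a Test Manager class; the class and method names are illustrative, not the prototype's actual API, and the thresholds mirror the validation policy on a later slide:

// Illustrative Test Manager skeleton; method bodies are stubs.
public class TestManager {
    // Select and re-run tests affected by a runtime change request.
    public void performRegressionTesting(String changeRequestId) { /* ... */ }

    // Derive new test cases for behavior introduced by the change.
    public void generateTestCases(String changeRequestId) { /* ... */ }

    // Run the selected tests and record results and coverage in the test log.
    public void executeAndRecordResults() { /* ... */ }

    // Compare pass rate and coverage against the validation policy.
    public boolean evaluateAgainstPolicy(double passRate, double coverage) {
        return passRate >= 1.0 && coverage >= 0.75;
    }
}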

11

TMs – Component Interactions

12

Prototype

Autonomic Container – Data structure with autonomic capabilities and implicit self-test.

Focused on self-configuration – when 80% full, reconfigure by increasing capacity.

Implemented Replication with Validation strategy.

Validation policy requires a 100% pass rate for test cases and at least 75% for both branch and statement coverage.

Initial test suite created using boundary, random, and equivalence partitioning.
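A rough sketch of the container's self-configuration trigger and the policy check described on this slide; the class and method names are hypothetical, and the actual design appears on the following slides:

// Simplified sketch of a self-configuring container that grows when 80% full.
public class AutonomicContainerSketch {
    private Object[] elements = new Object[10];
    private int size = 0;

    public void add(Object item) {
        if ((double) size / elements.length >= 0.80) {
            // Self-configuration: increase capacity. In the prototype, this
            // change request is validated by the Self-Test Manager first.
            elements = java.util.Arrays.copyOf(elements, elements.length * 2);
        }
        elements[size++] = item;
    }

    // Validation policy from this slide: 100% test pass rate and at least
    // 75% branch and statement coverage.
    static boolean policySatisfied(double passRate, double branchCov, double stmtCov) {
        return passRate >= 1.0 && branchCov >= 0.75 && stmtCov >= 0.75;
    }
}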

13

Prototype (cont’d)

Simulated faulty change requests using a mutation technique – GoodStack, BadStack (example test below).

Support Tools: JUnit, Cobertura.

Consists of three main packages:

ACApplication – External application that uses the container

AutonomicContainer – Self-managing and self-testing container

SystemController – Coordinates overall activity.
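To illustrate the mutation setup, a JUnit 4 test of the kind that passes against a correct (GoodStack-style) implementation and fails against a seeded-fault (BadStack-style) mutant might look as follows; the class under test here is a standard ArrayDeque, used only as a stand-in for the prototype's stack:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class StackRegressionTest {
    // Boundary test: popping an empty stack must signal an error.
    // A mutant that silently returns a default value would fail this test.
    @Test(expected = java.util.NoSuchElementException.class)
    public void popOnEmptyStackThrows() {
        new java.util.ArrayDeque<Integer>().pop();
    }

    // Equivalence-class test: elements come back in LIFO order.
    @Test
    public void elementsComeBackInLifoOrder() {
        java.util.ArrayDeque<Integer> stack = new java.util.ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        assertEquals(Integer.valueOf(2), stack.pop());
        assertEquals(Integer.valueOf(1), stack.pop());
    }
}

Cobertura, the coverage tool named above, would then be run over such tests to obtain the branch and statement coverage fed into the validation policy.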

14

Prototype – Top-Level Design

15

Prototype – Self-Test Manager

16

Related Work

Making components self-testable, Le Traon et al., TOOLS ’99. Includes test cases and oracles in the implementation.

Synthesizing assertions from observed behavior, Denaro et al., ACC ’05. Embeds assertions into the communication infrastructure; assertions are checked at runtime.

Formalizing behavior of adaptive programs, Zhang et al., WADS ’04. Models adaptive and non-adaptive programs.

17

Conclusion and Future Work

Proposed a framework that dynamically validates change requests in AC systems.

Approach extends the current structure of AC systems to include self-testing.

Supports two validation strategies.

Developed a prototype to show their feasibility.

Future work calls for:

Extending the capabilities of the prototype

Implementing safe adaptation with validation

Evaluating efficiency of the two approaches

18

Thank You

Questions?

This work has been supported in part by the National Science Foundation under grant IIS-0552555