Moving to Continuous Delivery Without Breaking Your Code

Rob Vanstone, 22 October 2015

Copyright 2014. Confidential – Distribution prohibited without permission

Agenda

▪ The Need For Speed

▪ Testing is Changing

▪ A Central Hub for Application Quality For Your Pipeline

▪ Beyond Test Automation: Active Test Optimization


About Me

▪ Sales Engineer, Consultant and generally a nosy person

▪ Traversed through all phases of the software development lifecycle

▪ Most time spent between Development and Operations (usually both)

▪ Really enjoying the energy in testing right now
− Test specialists
− Developers who test

− Testers who code


About XebiaLabs

We build tools to solve problems around DevOps and Continuous Delivery at scale


The Need For Speed

▪ Every business is an IT business
− Known as the “Software-Defined Enterprise”: even traditionally brick-and-mortar businesses are becoming software-based

▪ Customers demand that you deliver new features faster whilst maintaining high levels of quality

▪ If you don’t, your competitor probably will


The Need For Speed

▪ What is so compelling about CD?

▪ Business initiative with cool technical implementation

▪ “CD eats DevOps for breakfast as the business eats IT”


The Two Faces of CD

▪ A lot of focus right now is on pipeline execution

▪ …but there’s no point delivering at light speed if everything starts breaking

▪ Testing (= quality/risk) needs to be a first-class citizen of your CD initiative!


The Two Faces of CD

▪ CD = Execution + Analysis

▪ = Speed + Quality

▪ = Pipeline orchestration + ..?


[Figure: a value chain from CONCEPT to CASH (SPECIFY, DESIGN, BUILD, TEST, INTEGRATE, USER ACCEPTANCE, REGRESSION, RELEASE), with test effort concentrated late in the chain]

Testing is Changing


[Figure: the same value chain, with test effort shifting left into DESIGN, BUILD and TEST]

▪ Acceptance-Driven Testing: “I add value by sharpening the acceptance criteria of requested features”

▪ Automate ALL: “Test automation serves as the safety net for my new functionality: I focus on running the appropriate tests continuously during the iterations”

▪ Development = Test, Test = Development: “Testing is transforming to an automation mindset and skill instead of a separate activity”

Testing is Changing


Testing is Changing: Challenges

▪ Many test tools for each of the test levels, but no single place to answer “Good enough to go live?”

▪ Requirements coverage is not available
− “Did we test enough?”

▪ Minimize the mean time to repair
− Support for failure analysis

Tools: JUnit, FitNesse, JMeter, YSlow, VanityCheck, WireShark, SOAP-UI, Jasmine, Karma, Speedtrace, Selenium, WebScarab, TTA, DynaTrace, HP Diagnostics, ALM stack, AppDynamics, Code Tester for Oracle, Arachnid, Fortify, Sonar, …


Testing is Changing: Challenges

▪ Thousands of tests make test sets hard to manage:
− “Where is my subset?”

− “What tests add most value, what tests are superfluous?”

− “When to run what tests?”

▪ Running all tests all the time takes too long, feedback is too late

▪ Quality control of the tests themselves and maintenance of testware

▪ Tooling overstretch


Testing is Changing: Best Practices

▪ Focus on functional coverage, not technical coverage

▪ Say 40 user stories, 400 tests
− Do I have relatively more tests for the more important user stories?

− How do I link tests to user stories/features/fixes?

▪ Metrics
− Number of tests
− Number of tests that have not passed in <time>
− Flaky tests

− Duration
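As a sketch of how these metrics might be computed from raw test-run history (the `TestRun` record and its field names are hypothetical, not taken from any particular tool):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of one test execution, as a result
# aggregation tool might store it.
@dataclass
class TestRun:
    name: str
    passed: bool
    duration_s: float
    when: datetime

def suite_metrics(runs, window=timedelta(days=30)):
    """Compute the metrics above from a history of test runs."""
    now = max(r.when for r in runs)
    by_name = {}
    for r in runs:
        by_name.setdefault(r.name, []).append(r)

    def recent(r):
        return now - r.when <= window

    # Tests with no passing run inside the window.
    not_passed = [n for n, rs in by_name.items()
                  if not any(r.passed for r in rs if recent(r))]
    # "Flaky" here means: both passed and failed inside the window.
    flaky = [n for n, rs in by_name.items()
             if {r.passed for r in rs if recent(r)} == {True, False}]
    return {"tests": len(by_name),
            "not_passed_in_window": not_passed,
            "flaky": flaky,
            "total_duration_s": sum(r.duration_s for r in runs)}
```
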


Testing is Changing: Best Practices

▪ “Slice and dice” your test code
− Responsible team
− Topic
− Functional area
− Flaky
− Known issue
− etc.

▪ Radical parallelization
− Fail faster!
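A minimal sketch of both ideas, assuming each test carries a set of string tags; the tag vocabulary and test names below are made up for illustration:

```python
def select(tests, include=(), exclude=()):
    """Pick tests carrying every `include` tag and no `exclude` tag."""
    return [name for name, tags in tests.items()
            if set(include) <= tags and not (set(exclude) & tags)]

def shard(names, n):
    """Round-robin a selection across n parallel executors: fail faster."""
    return [names[i::n] for i in range(n)]

# Hypothetical test suite tagged along the dimensions above.
tests = {
    "test_checkout": {"team:payments", "topic:checkout"},
    "test_login":    {"team:identity", "flaky"},
    "test_refund":   {"team:payments", "known-issue"},
    "test_search":   {"team:catalog"},
}

# Example slice: the payments team's tests, minus known issues.
subset = select(tests, include=["team:payments"], exclude=["known-issue"])
```
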


Making Sense of Test Results

▪ Real go/no-go decisions are non-trivial
− No failing tests
− Less than 5% of tests failing
− No regressions (tests that currently fail but passed previously)
− A list of tests-that-should-not-fail

▪ Need historical context

▪ One integrated view

▪ Data to guide improvement
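Those decision rules could be encoded roughly as follows; the `must_pass` list and the 5% threshold are example inputs, and the result shapes are invented for the sketch:

```python
def go_no_go(current, previous, must_pass=(), max_fail_ratio=0.05):
    """current/previous map test name -> passed? The previous run
    supplies the historical context needed to spot regressions."""
    failing = {t for t, ok in current.items() if not ok}
    # A regression: fails now but passed in the previous run.
    regressions = {t for t in failing if previous.get(t, False)}
    blockers = failing & set(must_pass)   # tests-that-should-not-fail
    ratio = len(failing) / len(current) if current else 0.0
    go = not regressions and not blockers and ratio <= max_fail_ratio
    return go, {"failing": failing, "regressions": regressions,
                "blockers": blockers, "fail_ratio": ratio}
```

Note that a persistent known failure does not block the release here, while the same test newly breaking (a regression) does.
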


Example Job Distribution

[Figure: two pipelines, each running Build → Deploy → Int. Tests → three parallel Test jobs → Perf. Tests]

Simple pipelines – scattered test results

21 Copyright2014.Confidential– Distributionprohibitedwithoutpermission

Making Sense of Test Results

Executing tests from Jenkins is great, but…

▪ Different testing jobs use different plugins or scripts, each with different visualization styles

▪ No consolidated historic view available across jobs

▪ Pass/Unstable/Fail is too coarse
− How to express “Passed, but with known failures”?


Making Sense of Test Results

▪ Ultimate analysis question (“are we good to go live?”) is difficult to answer

▪ No obvious solution for now, unless all your tests are running through one service


Test Analysis: Homebrew


Test Analysis: Custom Reporting


A Central Hub for Application Quality

What is needed:

1. A single, integrated overview of all the test (= quality, risk) information related to your current release

2. …irrespective of where or by whom the information was produced

3. The ability to analyze and “slice and dice” the test results for different audiences and use cases

4. The ability to access historical context and other test attributes to make real-world “go/no-go” decisions
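As an illustration of points 1 and 2, a tiny sketch that folds results from two differently-shaped producers into one integrated view; the source names and record fields are invented, not a real tool's API:

```python
def normalize(source, raw):
    """Map one producer's result shape onto a common record."""
    if source == "junit":
        return {"name": raw["classname"] + "." + raw["name"],
                "passed": raw["failures"] == 0, "source": source}
    if source == "selenium":
        return {"name": raw["scenario"],
                "passed": raw["status"] == "passed", "source": source}
    raise ValueError(f"unknown source: {source}")

def integrated_view(feeds):
    """feeds: list of (source, [raw results]) from all pipeline jobs,
    irrespective of which job or tool produced them."""
    records = [normalize(src, r) for src, results in feeds for r in results]
    return {"total": len(records),
            "passed": sum(r["passed"] for r in records),
            "by_source": sorted({r["source"] for r in records})}
```
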


Beyond Test Automation

Can we go further? How about

5. The ability to use the aggregated test results, historical contexts and other attributes to invoke tests more intelligently?


Beyond Test Automation

It’s a bit of an open question:

▪ Google: it’s too expensive and time-consuming to run all the tests all the time, so automate the selection of a subset of tests to run

▪ Dave Farley: if you can’t run all the tests all the time, you need to optimize your tests or you have the wrong tests in the first place


Beyond Test Automation

Middle ground:

▪ Label your tests along all relevant dimensions to ensure that you can easily select a relevant subset of your tests if needed

▪ Consider automatically annotating tests related to features (e.g. added/modified in the same commit), or introducing that as a practice

▪ Use data from your test aggregation tool to ignore flaky/”known failure” tests (and then fix those flaky tests, of course ;-))


Summary

▪ Testing = Automation
− Testers are developers

▪ Structure and annotate tests

▪ Link to functions/features/use cases

▪ Radical parallelization
− Throwaway environments


Summary

▪ CD = Speed + Quality = Execution + Analysis

▪ Making sense of scattered test results is still a challenge

▪ Need to figure out how to address real-world go/no-go decisions



Analyzing Test Results

Tagging Tests

Evaluating Go/No-go Criteria

Next steps

▪ Next-Generation Testing: The Key to Continuous Delivery

https://xebialabs.com/resources/whitepapers/next-generation-testing-the-key-to-continuous-delivery/

▪ An Introduction to XL TestView

https://www.youtube.com/watch?v=_17xKtB3iWU

▪ Download XL TestView

https://xebialabs.com/products/xl-testview/community

Thank you! Come see us at our stand.
