
Developer-Friendly Web Performance Testing in Continuous Integration

Michael Klepikov, Google

Velocity Conference, London, 2013

Overview

● Web performance testing is hard

● Sidelines of web app lifecycle

● Move to mainstream

● Integrate with main UI continuous build

● Make fixes cheaper

Perf testing on sidelines

● Many awesome perf test tools

○ WebPageTest.org

● Domain of few experts

● High maintenance

● Hard to add to regular continuous build

Why hard to integrate?

● Focus on browser only

○ No control of server start/stop

○ CB must run at a specific changelist

● Own test scheduler (e.g. WebPageTest)

○ Impedance mismatch with main CB scheduler

● Complex end to end system

○ Regression could be anywhere, not just browser

● Hard to find the culprit change

○ 100’s-1000’s of developer changes + cherrypicks

● Production release logistics

Expensive to fix production

Dedicated Perf Tests Rot

UI evolves

Tests break

Who will fix?

Selenium 2.0 - WebDriver

● Pervasive adoption in web UI tests

● All major browsers, desktop and mobile

○ W3C standard

● No direct support for perf testing

○ Bad idea to measure around Selenium/WD API calls

Real User Monitoring

● Self-instrumentation + reporting in JS

● Arguably more important than perf testing

● Track metrics that matter

● Interactive pages (page-load meaningless)

● Easier to attribute than page-load time

Perf tests that ride on UI tests

● UI tests trigger RUM

● Intercept metrics

○ and DevTools traces

● UI tests stay green

○ keep perf tests green

Collect DevTools traces

● Selenium Logging API: LogType.PERFORMANCE

● Save in test results

● Compare before vs. after traces

○ Great aid for debugging regressions

● Test infrastructure does it transparently
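As a rough sketch of what the saved trace looks like: with Chrome, each entry returned via the Selenium logging API wraps a JSON-encoded DevTools protocol message. The helper below (a hypothetical name) pulls the `Network.*` events out of a saved log; the sample entries are fabricated for illustration, and the exact entry shape is an assumption about ChromeDriver's performance log format.

```python
import json

def network_events(perf_log_entries):
    """Extract DevTools Network.* events from saved performance log entries.

    Each entry's 'message' field is assumed to be a JSON string wrapping a
    DevTools protocol message: {"message": {"method": ..., "params": ...}}.
    """
    events = []
    for entry in perf_log_entries:
        msg = json.loads(entry["message"])["message"]
        if msg["method"].startswith("Network."):
            events.append(msg)
    return events

# Stand-in for a saved trace; a real run would collect these entries
# from the browser via LogType.PERFORMANCE and store them with the
# test results.
saved = [
    {"level": "INFO", "timestamp": 1,
     "message": json.dumps({"message": {
         "method": "Network.responseReceived",
         "params": {"response": {"url": "https://example.com/app.js",
                                 "status": 200}}}})},
    {"level": "INFO", "timestamp": 2,
     "message": json.dumps({"message": {
         "method": "Page.loadEventFired", "params": {"timestamp": 2.0}}})},
]

for ev in network_events(saved):
    print(ev["method"], ev["params"]["response"]["status"])
# prints: Network.responseReceived 200
```
Diffing two such event lists from a before trace and an after trace is what makes the saved traces such a good aid for debugging regressions.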

Intercept RUM metrics

● Parse from the saved DevTools trace

○ …/report?a=10,b=220

● Store to a database

○ Time Series of Time Series: changelists, iterations

● Graph, autodetect regressions

○ TSViewDB
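The interception step can be sketched in a few lines: take the RUM reporting request's URL found in the saved trace, split its query into named metrics, and key the values by changelist and iteration, giving the "time series of time series" shape. The comma-separated query follows the slide's `…/report?a=10,b=220` example; the host name, metric values, and storage layout are assumptions for illustration.

```python
from urllib.parse import urlparse
from collections import defaultdict

def parse_beacon(url):
    """Split a RUM beacon URL like .../report?a=10,b=220 into {metric: value}."""
    query = urlparse(url).query
    return {name: float(value)
            for name, value in (pair.split("=") for pair in query.split(","))}

# Time series of time series: metric -> changelist -> value per iteration.
store = defaultdict(lambda: defaultdict(list))

def record(changelist, beacon_url):
    for metric, value in parse_beacon(beacon_url).items():
        store[metric][changelist].append(value)

# Two iterations of the same UI test at one changelist (made-up numbers).
record(12345, "https://app.example/report?a=10,b=220")
record(12345, "https://app.example/report?a=12,b=198")
print(store["b"][12345])  # [220.0, 198.0]
```
A real store (e.g. TSViewDB) would persist this; the point is only that each metric ends up as per-iteration samples indexed by changelist.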

Iterations

● Run many iterations of same UI test

○ At the same changelist

● Statistically viable

○ Keep in mind – the distribution is not normal!

● Test infrastructure does it transparently
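Because the latency distribution is not normal (it is typically right-skewed, with a long tail of slow runs), summarizing the iterations with the mean is misleading; medians and percentiles are the usual choice. A small illustration with made-up numbers:

```python
import statistics

# Latencies (ms) from many iterations of one UI test at one changelist.
# Skewed right, as load-time-like metrics usually are: one slow outlier
# pulls the mean up, while the median stays near the typical run.
latencies = [102, 98, 105, 99, 101, 97, 103, 100, 350, 104]

def percentile(values, p):
    """Nearest-rank percentile; enough for a regression dashboard."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

print("mean  ", statistics.mean(latencies))    # distorted by the 350 ms run
print("median", statistics.median(latencies))
print("p90   ", percentile(latencies, 90))
```
Here the mean (125.9 ms) suggests a quarter of the budget went somewhere, while the median (101.5 ms) shows the typical iteration is fine; tracking median and a high percentile separately keeps both stories visible.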

RUM metrics from UI tests

Graphs!

Detect regressions

Drill down
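TSViewDB's actual detection logic isn't spelled out here, but autodetecting regressions can be sketched as a simple rule: compare the median over iterations at consecutive changelists and flag a shift beyond a threshold. The numbers and the 5% threshold are purely illustrative.

```python
import statistics

def regressed(before, after, threshold_pct=5.0):
    """Flag a regression when the median over iterations at the new
    changelist exceeds the old median by more than threshold_pct.

    A deliberately simple rule; a production detector would also watch
    higher percentiles and require the shift to persist across builds
    to avoid alerting on noise.
    """
    old, new = statistics.median(before), statistics.median(after)
    return (new - old) / old * 100 > threshold_pct

baseline = [101, 99, 102, 100, 98, 101]      # iterations at changelist N
candidate = [110, 108, 112, 109, 111, 107]   # iterations at changelist N+1

print(regressed(baseline, candidate))  # ~9% median shift -> True
```
Once a changelist trips the rule, the graphs and saved traces give the drill-down.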

WebPageTest: results UI only

● Don’t schedule tests on WPT

● Send perf test results to WPT server

○ Awesome UI, lots of useful analysis for free

● Link from main test results UI to WPT page

● TSViewDB integration, with drill-down

Performance Test Lifecycle

● Add Instrumentation + Reporting to your (web) app

● Continuous Performance Tests

● Regression Alerts → Fix

● Push to Production

● Real User Monitoring

Perf tests run longer: two CBs

● Main CB produces green builds

● Perf CB runs on those green builds

● Bisect the culprit between perf builds
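With green main-CB builds archived, "bisect the culprit" is an ordinary binary search over the changelists between the last good perf build and the first bad one. The `is_regressed` predicate below stands in for "build at that changelist, run the perf iterations, compare" and is purely hypothetical.

```python
def bisect_culprit(changelists, is_regressed):
    """Binary-search ordered changelists for the first regressed one.

    Assumes changelists[0] is known good (last green perf build) and
    changelists[-1] is known bad (first red perf build), and that the
    regression is monotone in between.
    """
    lo, hi = 0, len(changelists) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_regressed(changelists[mid]):
            hi = mid
        else:
            lo = mid
    return changelists[hi]

# Stand-in predicate: in reality this would build the app at that
# changelist, run the perf iterations, and compare against baseline.
cls = list(range(1000, 1100))
print(bisect_culprit(cls, lambda cl: cl >= 1043))  # -> 1043
```
Each probe is a full perf run, so the log-scale number of probes is what makes bisecting hundreds of intervening changes practical.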

Caveats

● Do not predict absolute production latency!

○ Ideally server under test runs in isolation in CB

○ Limited variety of hardware, networks

● Only detect regressions

○ E.g. daily regression summary email

Conclusions

● Use UI functional tests as a base

● Intercept RUM, run many iterations

● Run continuously!

● Autodetect regressions, CL author to fix!

● Debug production less!

These slides: goo.gl/HdUCqL

Intercept DevTools: youtu.be/0_kAPWSZNY4

Send results to WPT: gist.github.com/klepikov

TSViewDB: github.com/google/tsviewdb

Q&A
