




Safari Test Automation: Navigating Through the Jungle

By Karan Kumar and James Chuong

At Instart Logic, our focus is on delivering high-performance, engaging web experiences to users across devices with our Software-Defined Application Delivery (SDAD) approach, with a particular emphasis on optimizing for mobile devices. To this end, it is important for us to be able to test and quantify the performance improvements in application delivery that our service brings to the various web platforms.

With iOS being a dominant platform for e-commerce, it is critical for us to optimize for this platform. In this post we’d like to focus on the particular challenges of Apple’s Safari browser on desktop and iOS, and our approach to automating tests of Safari and Mobile Safari, which makes it possible to find both functional and performance bottlenecks in websites on this key platform.


Safari Challenges

Unlike other browsers, Safari offers little built-in support for automating the browser itself. This makes it important for us to identify the issues listed in my colleague Rajaram Gaunker’s recent blog post “Holiday Wish List for Browser Makers – Requirements for an Open Web” in an automated and scalable way.

In terms of performance, Apple recently added the Navigation Timing API in Safari 8 for OS X and iOS 8, but unfortunately it was removed in iOS 8.1.1. This makes it difficult to reliably measure performance on Safari. In addition, these issues tend to be non-trivial and can affect a website’s Quality of Experience, both in terms of functionality and performance.
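
Because the API comes and goes between Safari releases, a test has to probe for it at runtime rather than assume it exists. Here’s a minimal sketch of that guard through Selenium; the URL is a placeholder and the driver setup is simplified:

```python
# A minimal sketch: read a Navigation Timing metric where the API exists,
# falling back to None on Safari builds that dropped it.
from selenium import webdriver

driver = webdriver.Safari()  # requires Selenium's SafariDriver extension
driver.get("https://www.example.com")
load_time_ms = driver.execute_script(
    "if (window.performance && window.performance.timing) {"
    "  var t = window.performance.timing;"
    "  return t.loadEventEnd - t.navigationStart;"
    "} return null;"
)
print(load_time_ms)
driver.quit()
```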

To this end, we have developed a way to automate testing of Safari and Mobile Safari, enabling us to find both functional and performance bottlenecks in websites on the Safari platform. Here is more on our approach and testing methodology.

Our Approach: Architecture

Our automated testing setup is currently a simple client-server model. The client starts a test and sends it to the server to run. In Selenium Grid terms, the client would be the user, the server would be the hub, and currently the hub acts as its own node.
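
As a rough illustration, the client side boils down to pointing a remote WebDriver session at the hub; the hostname here is a placeholder, not our actual setup:

```python
# A minimal sketch of the client: open a remote Safari session on the hub,
# load a page, and read back its title as a trivial functional check.
from selenium import webdriver

driver = webdriver.Remote(
    command_executor="http://hub.example.com:4444/wd/hub",
    desired_capabilities={"browserName": "safari"},
)
driver.get("https://www.example.com")
print(driver.title)
driver.quit()
```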


We selected Selenium as the core driver for our test automation, since it is the leading standard on several platforms, with many language bindings. Selenium provides SafariDriver, a JavaScript browser extension that allows it to control the Safari browser.

Selenium also defines the JSON Wire Protocol, which specifies how WebDriver implementations should communicate with browsers. Thus, all web drivers expose the same API, and the same code can easily drive any browser.
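
For example, a check function can be reused across drivers unchanged; only the driver construction differs. This sketch assumes the SafariDriver extension and a local Firefox are both installed:

```python
# Because every driver speaks the same JSON Wire Protocol, one test
# function drives any browser; only the driver constructor changes.
from selenium import webdriver

def check_homepage(driver, url):
    """A trivial functional check that works against any WebDriver."""
    driver.get(url)
    return driver.title

for make_driver in (webdriver.Safari, webdriver.Firefox):
    driver = make_driver()
    print(check_homepage(driver, "https://www.example.com"))
    driver.quit()
```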

One such implementation is Appium, an open source project that uses Selenium bindings to drive both Android and iOS applications, on real and simulated devices. In fact, when Selenium deprecated its own iPhone driver, it recommended Appium or ios-driver as a replacement. On the Appium side, there is an additional application in use: ios-webkit-debug-proxy by Google.

We use Appium, a Node.js server that receives commands from Selenium. The node needs to be able to start and manage the Appium server to ensure that it is running and ready to drive the mobile device. Apple exposes automation through JavaScript run by Instruments, a tool within Xcode. Appium has Ruby and Java bindings that translate the Selenium API into WebKit’s Remote Debugging Protocol, and thus allows Selenium to automate iOS. Furthermore, ios-webkit-debug-proxy is used to translate WebKit’s Remote Debugging Protocol into Apple’s iOS WebKit Debugging Protocol.
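
Put together, opening Mobile Safari looks just like opening any remote browser. Here’s a sketch against Appium’s default endpoint; the platform version, device name, and udid are placeholders:

```python
# A sketch of driving Mobile Safari through Appium, which listens for
# Selenium sessions on its default endpoint (localhost:4723/wd/hub).
from selenium import webdriver

capabilities = {
    "platformName": "iOS",
    "platformVersion": "8.1",
    "deviceName": "iPhone 6",
    "browserName": "Safari",
    # "udid": "<device-udid>",  # needed when targeting a real device
}
driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", capabilities)
driver.get("https://www.example.com")
driver.quit()
```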

In our case, we start a test remotely with our test configuration. The server receives this test configuration, applies the settings, and runs the test. Using Selenium, it starts up the browser and routes web traffic through it. For mobile, the service starts up Appium and ios-webkit-debug-proxy, hooks onto Appium, and starts Safari. Then, using Selenium to drive Safari, the server navigates to the given URLs.
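
A rough sketch of that supervision logic is below; the binary names, ports, and udid are assumptions about a typical Appium 1.x setup, not our exact configuration:

```python
# A rough sketch of how the server could supervise the helper processes
# before handing control to Selenium.
import subprocess
import time

# Bridge Apple's iOS WebKit Debugging Protocol to a local port for Appium.
iwdp = subprocess.Popen(["ios_webkit_debug_proxy", "-c", "<device-udid>:27753"])

# Start Appium, which will accept Selenium sessions on port 4723.
appium = subprocess.Popen(["appium", "--port", "4723"])
time.sleep(5)  # crude; a real node would poll Appium's /wd/hub/status endpoint

try:
    # ... create the webdriver.Remote session and navigate the given URLs ...
    pass
finally:
    appium.terminate()
    iwdp.terminate()
```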

Measuring Performance

Selenium itself can only perform basic functional tests by driving the browser, without any way to measure performance. So in addition to Selenium, we use BrowserMob Proxy, which creates a local proxy that can route browser traffic. The proxy allows us to shape the network so that we can test under different network conditions. Furthermore, the proxy allows us to capture the network traffic into HTTP Archive (HAR) files so that we can see the time it takes to load elements on the page. It can be controlled via its RESTful APIs.
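
Here’s a sketch using BrowserMob Proxy’s Python wrapper; the binary path and the throughput numbers are illustrative, not our production values:

```python
# A sketch of starting BrowserMob Proxy, shaping the network, and
# recording a HAR for one page load.
from browsermobproxy import Server

server = Server("./browsermob-proxy/bin/browsermob-proxy")
server.start()
proxy = server.create_proxy()

# Shape the network, e.g. to roughly approximate a 3G connection.
proxy.limits({"downstream_kbps": 780, "upstream_kbps": 330, "latency": 100})

# Begin recording all traffic for this page load into a HAR.
proxy.new_har("example-page")
```

The browser, or in the mobile case the device’s network settings, is then pointed at `proxy.proxy` (a host:port string) as its HTTP proxy so that all page traffic flows through the recorder.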

Now that we are driving the mobile device, we just need to route the traffic from the device through the machine. Then, we can measure performance by routing that traffic through BrowserMob. The traffic is received by the service and recorded by BrowserMob Proxy. The recorded data is written out as an HTTP Archive (HAR) and can be sent to reporting software for details and aggregation.
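
Continuing the sketch above, the captured HAR can be pulled from the proxy and written out for the reporting pipeline; the filename is arbitrary:

```python
# After the run, serialize the captured HAR to disk for aggregation.
import json

with open("example-page.har", "w") as fh:
    json.dump(proxy.har, fh, indent=2)

proxy.close()
server.stop()
```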


Here’s a visual depiction of the automated testing setup:

[Figure: end-to-end Safari test automation architecture]

Conclusion

We hope you’ve found our takeaways on how to launch both functional and performance tests on desktop Safari and Safari on iOS helpful. This approach has given us at Instart Logic a way to conduct a large battery of tests, enabling us to deliver the best possible Quality of Experience on Safari with our SDAD platform.

If you have thoughts on this and other possible approaches, we encourage you to share your comments with the community here.