
OpenDaylight Performance Stress Test Report v1.2: Lithium vs Beryllium

Intracom Telecom SDN/NFV Lab

www.intracom-telecom.com | intracom-telecom-sdn.github.com | [email protected]


Intracom Telecom | SDN/NFV Lab © 2016


Executive Summary

In this report we investigate several performance and scalability aspects of the OpenDaylight Beryllium controller and compare them against the Lithium release. Beryllium outperforms Lithium in idle switch scalability tests, both with Multinet (realistic OF1.3) and MT–Cbench (artificial OF1.0) emulators. A single Beryllium instance can boot "Linear" and "Disconnected" idle Multinet topologies of up to 6400 switches, as opposed to 4800 with Lithium. In general, Beryllium manages to successfully boot topologies that Lithium fails to, or it can boot them faster. In MT–Cbench tests the superiority of Beryllium is even more evident, being able to successfully discover topologies that expose themselves to the controller at a faster rate as compared to Lithium. With regard to controller stability, both releases exhibit similarly stable behavior. In addition, Multinet-based tests reveal traffic optimizations in Beryllium, witnessed by the reduced volume related to MULTIPART messages. Finally, in NorthBound performance tests Beryllium seems to underperform in accepting flow installation requests via RESTCONF, but as far as end-to-end flow installation is concerned it is the clear winner. In many extreme cases of large topologies (e.g. 5000 switches) or large flow counts (1M), Beryllium manages to successfully install flows while Lithium fails.

Contents

1 Introduction
2 NSTAT Toolkit
3 Experimental setup
4 Switch scalability stress tests
   4.1 Idle Multinet switches
       4.1.1 Test configuration
       4.1.2 Results
   4.2 Idle MT–Cbench switches
       4.2.1 Test configuration
       4.2.2 Results
   4.3 Active Multinet switches
       4.3.1 Test configuration
       4.3.2 Results
   4.4 Active MT–Cbench switches
       4.4.1 Test configuration, "RPC" mode
       4.4.2 Test configuration, "DataStore" mode
       4.4.3 Results
   4.5 Conclusions
       4.5.1 Idle scalability tests
       4.5.2 Active scalability
5 Stability tests
   5.1 Idle Multinet switches
       5.1.1 Test configuration
       5.1.2 Results
   5.2 Active MT–Cbench switches
       5.2.1 Test configuration, "DataStore" mode
       5.2.2 Results
       5.2.3 Test configuration, "RPC" mode
       5.2.4 Results
   5.3 Conclusions
       5.3.1 Idle stability tests
       5.3.2 Active stability tests
6 Flow scalability tests
   6.1 Test configuration
   6.2 Results
       6.2.1 Add controller time/rate
       6.2.2 End–to–end flow installation time/rate
   6.3 Conclusions
7 Contributors
References

1. Introduction

In this report we investigate several performance and scalability aspects of the OpenDaylight Beryllium controller and compare them against the Lithium release. The investigation focuses on both the SouthBound (SB) and NorthBound (NB) interfaces of the controller and targets the following objectives:

• controller throughput,
• switch scalability,
• controller stability (sustained throughput),
• flow scalability and flow provisioning time.

For our evaluation we have used NSTAT [1], an open source environment written in Python for easily writing SDN controller stress tests and executing them in a fully automated and end-to-end manner. NSTAT's architecture is exhibited in Fig. 1 and its key components are the

• SDN controller,
• SouthBound OpenFlow traffic generator,
• NorthBound flow generator (RESTCONF traffic).

The SDN controller used in this series of experiments is OpenDaylight. We perform and demonstrate results from the Lithium SR3 and Beryllium RC2 releases.


Fig. 1: NSTAT architecture. The diagram shows the NSTAT node (lifecycle management, orchestration, test dimensioning, sampling, monitoring, reporting) coordinating the controller, the SB generator (OF traffic) and the NB generator (config traffic).

For SouthBound (SB) traffic generation we have used both MT–Cbench [2] and Multinet [3]. For NorthBound (NB) traffic generation NSTAT uses custom scripts [4], originally developed by the OpenDaylight community [5], for creating and writing flows to the controller configuration data store.

MT–Cbench is a direct extension of the Cbench [6] emulator which uses threading to generate OpenFlow traffic from multiple streams in parallel. The motivation behind this extension was to be able to boot up and operate network topologies much larger than those possible with the original Cbench, by gradually adding switches in groups. We note here that, as in the original Cbench, the MT–Cbench switches implement only a minimal subset of the OF1.0 protocol, and therefore the results presented for that generator are expected to vary in real-world deployments.

This gap is largely filled using Multinet [7] as an additional SB traffic generator, as it uses OVS virtual switches that accurately emulate the OF1.3 protocol. The goal of Multinet is to provide a fast, controlled and resource-efficient way to boot large-scale SDN topologies. This is achieved by booting in parallel multiple isolated Mininet topologies, each launched on a separate virtual machine, and all connected to the same controller1. Finally, by providing control over the way that switches are connected to the SDN controller, one can easily test various switch group size/delay combinations and discover configurations that yield optimal boot-up times for a large-scale topology.

1 As an example, Multinet can create more than 3000 Openvswitch OF1.3 switches over moderate virtual resources (10 vCPUs, <40GB memory) in less than 10 minutes.
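To make the group size/delay mechanics concrete, the following minimal Python sketch (illustrative only, not Multinet's actual implementation; connect_fn is a hypothetical callback that attaches a single switch to the controller) shows the boot-up loop such a policy implies:

```python
import time

def connect_in_groups(switches, connect_fn, group_size, group_delay_ms):
    """Connect switches to the controller in groups of group_size,
    pausing group_delay_ms milliseconds between successive groups."""
    for start in range(0, len(switches), group_size):
        for switch in switches[start:start + group_size]:
            connect_fn(switch)  # hypothetical: point one switch to the controller
        time.sleep(group_delay_ms / 1000.0)
```

Larger groups and shorter delays boot the topology faster, but stress the controller's connection handling harder; this trade-off is exactly what the idle scalability tests below explore.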

In our stress tests we have experimented with emulated switches operating in two modes, idle and active: switches in idle mode do not initiate any traffic to the controller, but rather respond to messages sent by it. Switches in active mode consistently initiate traffic to the controller, in the form of PACKET_IN messages. In most stress tests, MT–Cbench and Multinet switches operate both in active and idle modes.

2. NSTAT Toolkit

The general architecture of NSTAT is depicted in Fig. 1. The NSTAT node lies at the heart of the environment and controls all others. The test nodes (controller node, SB node(s), NB node) are mapped on one or more physical or virtual interconnected machines. Unless otherwise stated, in all our experiments every node is mapped on a separate virtual machine.

NSTAT is responsible for automating and sequencing every step of the testing procedure, including building and deploying components on their corresponding nodes, initiating and scaling SB and NB traffic, monitoring controller operation in terms of correctness and performance, and producing performance reports along with system and application health state and profiling statistics. Each of these steps is highly configurable using a rich and simple JSON-based configuration system.

Each stress test scenario features a rich set of parameters that determine the interaction of the SB and NB components with the controller, and subsequently trigger varying controller behavior. Such parameters are, for example, the number of OpenFlow switches, the way they connect to the controller, the number of NB application instances, the rate of NB/SB traffic etc., and in a sense these define the "dimensions" of a stress test scenario.

One of NSTAT's key features is that it allows the user to specify multiple values for one or more such dimensions at the same time, and the tool itself takes care to repeat the test over all their combinations in a single session. This kind of exhaustive exploration of the multi-dimensional experimental space offers a comprehensive overview of the controller's behavior over a wide range of conditions, making it possible to easily discover scaling trends, performance bounds and pathological cases.
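As an illustration of this exhaustive sweep (a sketch only; the actual NSTAT configuration keys and test runner differ), the combinations of a multi-valued dimension set can be enumerated with a simple Cartesian product:

```python
from itertools import product

# Hypothetical stress-test dimensions; the values mirror those used in this report.
dimensions = {
    "topology_size": [1600, 3200, 4800, 6400, 8000, 9600],
    "group_size": [1, 5],
    "group_delay_ms": [5000],
}

keys = list(dimensions)
for combo in product(*(dimensions[k] for k in keys)):
    params = dict(zip(keys, combo))
    # In NSTAT the tool itself would launch one full test per combination;
    # here we just enumerate the experimental space (12 runs in this example).
    print(params)
```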

3. Experimental setup

Details of the experimental setup are presented in Table 1.

As mentioned in section 1, in our stress tests we have experimented with emulated switches operating in idle and active mode, using both MT–Cbench and Multinet for emulating the network topology.


Fig. 2: Representation of a switch scalability test with idle Multinet (a) and MT–Cbench (b) switches. (a) Multinet switches sit idle during main operation; ECHO and MULTIPART messages make up the vast majority of the OF1.3 traffic exchanged between the controller and the switches. (b) MT–Cbench switches sit idle during main operation, exchanging only a few OF1.0 messages during initialization.

Table 1: Stress tests experimental setup.

Host operating system        Centos 7, kernel 3.10.0
Hypervisor                   QEMU v.2.0.0
Guest operating system       Ubuntu 14.04
Server platform              Dell R720
Processor model              Intel Xeon CPU E5-2640 v2 @ 2.00GHz
Total system CPUs            32
CPUs configuration           2 sockets × 8 cores/socket × 2 HW-threads/core @ 2.00GHz
Main memory                  256 GB, 1.6 GHz RDIMM
SDN controllers under test   OpenDaylight Beryllium (RC2), OpenDaylight Lithium (SR3)
OpenDaylight configuration   OF plugin implementation: "Helium design"
Controller JVM options       -Xmx8G, -Xms8G, -XX:+UseG1GC, -XX:MaxPermSize=8G
Multinet OVS version         2.0.2


In MT–Cbench tests the controller is configured to start with the "drop-test" feature installed. The emulated switches are arranged in a disconnected topology, meaning they do not have any interconnection between them. As we have already mentioned, this feature, along with the limited protocol support, makes MT–Cbench a special-purpose OpenFlow generator rather than a full-fledged, realistic OpenFlow switch emulator.

Two different configurations of the controller are evaluated with MT–Cbench active switches: in "RPC" mode, the controller is configured to directly reply to PACKET_IN messages sent by the switches with a predefined message at the OpenFlow plugin level. In "DataStore" mode, the controller additionally performs updates in its data store. In all cases, MT–Cbench is configured to operate in "Latency" mode, meaning that each switch sends a PACKET_IN message only after it has received a reply for the previous one.
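The "Latency" mode semantics can be summarized with a small sketch (illustrative Python, not MT–Cbench's actual C implementation; send_packet_in and wait_for_reply are hypothetical helpers):

```python
def run_latency_mode_switch(send_packet_in, wait_for_reply, num_messages):
    """'Latency' mode: each switch keeps exactly one request in flight,
    sending the next PACKET_IN only after the previous reply arrived."""
    replies = 0
    for _ in range(num_messages):
        send_packet_in()   # emit one artificial OF1.0 PACKET_IN
        wait_for_reply()   # block until the controller's reply (e.g. FLOW_MOD)
        replies += 1
    return replies
```

With one outstanding request per switch, the measured throughput directly reflects the controller's response latency rather than its queueing capacity.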

In all stress tests the "Helium-design" implementation of the OpenFlow plugin was used. In future versions of this report we will evaluate the new implementation, codenamed "Lithium-design".

4. Switch scalability stress tests

Switch scalability tests aim at exploring the maximum number of switches the controller can sustain, when switches are being gradually added either in idle or active mode. Apart from the maximum number of switches the controller can successfully see, a few additional metrics are reported:

• in idle mode tests, NSTAT also reports the topology boot-up time for different policies of connecting switches to the controller. From these results we can deduce how to optimally boot up a certain-sized topology and connect it to the controller, so that the latter can successfully discover it in the minimum time.

• in active mode tests, NSTAT also reports the controller throughput, i.e. the rate at which the controller replies to the switch-initiated messages. Therefore, in this case, we also get an overview of how controller throughput scales as the topology size scales.


Fig. 3: Switch scalability stress test results for idle Multinet switches. Network topology size scales from 1600 to 9600 switches. Topology type: "Linear". Boot-up time is forced to -1 when switch discovery fails. (a) Group size: 1; (b) group size: 5.

Fig. 4: Switch scalability stress test results for idle Multinet switches. Network topology size scales from 1600 to 9600 switches. Topology type: "Disconnected". Boot-up time is forced to -1 when switch discovery fails. (a) Group size: 1; (b) group size: 5.

4.1 Idle Multinet switches

This is a switch scalability test with idle switches emulated using Multinet, Fig. 2(a). The objective of this test is twofold:

• to find the largest number of idle switches the controller can accept and maintain.
• to find the combination of boot-up-related configuration options that leads to the fastest successful network boot-up.

Specifically, these options are the group size at which switches are being connected, and the delay between each group. We consider a boot-up successful when all network switches have become visible in the operational data store of the controller. If not all switches have been reflected in the data store within a certain deadline after the last update, we declare the boot-up failed. NSTAT reports the number of switches finally discovered by the controller, and the discovery time.
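This success criterion can be sketched as a polling loop against the controller's operational data store (a sketch, assuming the RESTCONF operational topology endpoint and default credentials of OpenDaylight Lithium/Beryllium; the actual NSTAT checks differ in detail):

```python
import time
import requests

def wait_for_bootup(ctrl_ip, expected_switches, deadline_s, auth=("admin", "admin")):
    """Poll the operational data store; declare success when all switches are
    visible, or failure when no new switch appears for deadline_s seconds."""
    url = ("http://%s:8181/restconf/operational/"
           "network-topology:network-topology" % ctrl_ip)
    discovered, last_update, start = 0, time.time(), time.time()
    while time.time() - last_update < deadline_s:
        topo = requests.get(url, auth=auth).json()
        nodes = topo["network-topology"]["topology"][0].get("node", [])
        if len(nodes) > discovered:
            discovered, last_update = len(nodes), time.time()
        if discovered >= expected_switches:
            return True, discovered, time.time() - start  # success + boot-up time
        time.sleep(1)
    return False, discovered, -1  # failed boot-ups are reported as -1
```

The -1 returned on failure mirrors the convention used in Figs. 3 and 4, where the boot-up time of a failed discovery is forced to -1.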

During main test execution Multinet switches respond to ECHO and MULTIPART messages sent by the controller at regular intervals. These message types dominate the total traffic volume during execution. We evaluate two different topology types, "Linear" and "Disconnected".

In order to push the controller performance to its limits, the controller node is executed on bare metal. Multinet is executed within a set of virtual machines; see the master/worker functionality of Multinet [8].


Fig. 5: Switch scalability stress test results with idle MT–Cbench switches. Topology size scales from 50 to 5000 switches. (a) Group delay: 0.5s; (b) group delay: 1s.

Fig. 6: Switch scalability stress test results with idle MT–Cbench switches. Topology size scales from 50 to 5000 switches. (a) Group delay: 8s; (b) group delay: 16s.

4.1.1 Test configuration

• topology types: "Linear", "Disconnected"
• topology size per worker node: 100, 200, 300, 400, 500, 600
• number of worker nodes: 16
• group size: 1, 5
• group delay: 5000ms
• persistence: enabled

In this case, the total topology size is equal to 1600, 3200, 4800, 6400, 8000, 9600 (topology size per worker node × 16 worker nodes).

4.1.2 Results

Results from this series of tests are presented in Figs. 3(a), 3(b) for a "Linear" topology and Figs. 4(a), 4(b) for a "Disconnected" topology, respectively.

4.2 Idle MT–Cbench switches

This is a switch scalability test with idle switches emulated using MT–Cbench, Fig. 2(b). As in the Multinet case, the goal is to explore the maximum number of idle switches the controller can sustain, and how a certain-sized topology should be optimally connected to it.

In contrast to idle Multinet switches, which exchange ECHO and MULTIPART messages with the controller, MT–Cbench switches typically sit idle during main operation, without sending or receiving any kind of messages. Due to this fact, the ability of the controller to accept and maintain MT–Cbench switches is expected to be much larger than with realistic OpenFlow switches.


Fig. 7: Representation of switch scalability stress test with active (a) Multinet and (b) MT–Cbench switches. (a) The switches send OF1.3 PACKET_IN messages to the controller, which replies with OF1.3 FLOW_MODs; these make up the vast majority of packets. (b) The switches send artificial OF1.0 PACKET_IN messages to the controller, which replies with equally artificial OF1.0 FLOW_MOD messages.

Fig. 8: Switch scalability stress test results with active Multinet switches. Controller throughput [Mbytes/s] vs number of network switches N. Topology size scales from 192 to 4800 switches.

In order to push the controller performance to its limits, all test nodes (controller, MT–Cbench) were executed on bare metal. To isolate the nodes from each other, the CPU shares feature of NSTAT was used [9].

4.2.1 Test configuration

• controller: "RPC" mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 500, 1000, 2000, 4000, 8000, 16000 ms
• persistence: enabled

In this case the switch topology size is equal to: 50, 100, 200, 400, 800, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000 (50 switches per thread).

4.2.2 Results

Results of this test are presented in Figs. 5(a), 5(b), 6(a) and 6(b).

4.3 Active Multinet switches

This is a switch scalability test with switches in active mode emulated using Multinet, Fig. 7(a).

Its target is to explore the maximum number of switches the controller can sustain while they consistently initiate traffic towards it (active), and how the controller's servicing throughput scales as more switches are added.

The switches start sending messages after the entire topology has been connected to the controller. They trigger OF1.3 PACKET_IN messages for a certain interval and at a configurable rate, and the controller replies with OF1.3 FLOW_MODs. These message types dominate the traffic exchanged between the switches and the controller. Our target metric is the throughput of the controller's outgoing OpenFlow traffic. NSTAT uses the oftraf [10] tool to measure it.
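Measuring this metric amounts to periodically sampling the controller's outgoing OpenFlow byte counter and differentiating. The sketch below assumes a RESTful counter endpoint in the spirit of oftraf; the '/get_of_counts' path and the JSON field names are assumptions for illustration, not oftraf's documented API:

```python
import time
import requests

def sample_outgoing_mbytes_per_s(oftraf_url, interval_s, num_samples):
    """Convert successive byte-counter readings into Mbytes/s samples."""
    samples = []
    prev = requests.get(oftraf_url + "/get_of_counts").json()["of_out_bytes"]
    for _ in range(num_samples):
        time.sleep(interval_s)
        cur = requests.get(oftraf_url + "/get_of_counts").json()["of_out_bytes"]
        samples.append((cur - prev) / interval_s / 1e6)  # bytes/s -> Mbytes/s
        prev = cur
    return samples
```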

In order to push the controller performance to its limits, the controller is executed on bare metal and Multinet on a set of interconnected virtual machines.

4.3.1 Test configuration

• controller: with l2switch plugin installed, configured to respond with mac-to-mac FLOW_MODs to PACKET_IN messages with ARP payload [11]
• topology size per worker node: 12, 25, 50, 100, 200, 300
• number of worker nodes: 16
• group size: 1
• group delay: 3000ms
• topology type: "Linear"
• hosts per switch: 2
• traffic generation interval: 60000ms
• PACKET_IN transmission delay: 500ms
• persistence: disabled

Switch topology size scales as follows: 192, 400, 800, 1600, 3200, 4800 (topology size per worker node × 16 worker nodes).


Fig. 9: Switch scalability stress test results with active MT–Cbench switches. Controller throughput [responses/s] vs number of network switches N, with OpenDaylight running in (a) "RPC" and (b) "DataStore" mode.


4.3.2 Results

Results of this test are presented in Fig. 8.

4.4 Active MT–Cbench switches

This test is equivalent to the active Multinet switch scalability test described in section 4.3.

The switches start sending messages after the entire topology has been connected to the controller. MT–Cbench switches send artificial OF1.0 PACKET_IN messages to the controller, which replies with equally artificial OF1.0 FLOW_MOD messages; these are the dominant message types during execution. In order to push the controller performance to its limits, all test nodes (controller, MT–Cbench) were executed on bare metal. To isolate the nodes from each other, the CPU shares feature [9] of NSTAT was used.

4.4.1 Test configuration, ”RPC” mode

• controller: "RPC" mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 40, 60, 80, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 15s
• traffic generation interval: 20s
• persistence: enabled

In this case, the total topology size is equal to 50, 100, 200, 400, 800, 1000, 2000, 3000, 4000, 5000.

4.4.2 Test configuration, ”DataStore” mode

• controller: "DataStore" mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 1, 2, 4, 8, 16, 20, 40, 60, 80, 100
• topology size per MT–Cbench thread: 50 switches
• group delay: 15s
• traffic generation interval: 20s
• persistence: enabled

In this case, the total topology size is equal to 50, 100, 200, 400, 800, 1000, 2000, 3000, 4000, 5000.

4.4.3 Results

Results for the test configurations defined in sections 4.4.1 and 4.4.2 are presented in Figs. 9(a) and 9(b), respectively.

4.5 Conclusions

4.5.1 Idle scalability tests

In idle Multinet switch scalability tests we can see that the Beryllium controller has significantly improved performance, as it manages to handle larger switch topologies than Lithium. Beryllium can successfully boot "Linear" topologies of up to 6400 switches in groups of 1, and 4800 switches in groups of 5, while Lithium can boot such topologies for up to 4800 switches regardless of the group size, Fig. 3. For "Disconnected" topologies Beryllium manages to boot topologies of 6400 switches for both combinations of group size, Fig. 4.


Fig. 10: Representation of switch stability stress test with idle Multinet (a) and active MT–Cbench (b) switches. The controller accepts a standard rate of incoming traffic and its response throughput is sampled periodically.


Considering the best boot-up times for a certain-sized topology and for both topology types, we conclude that Beryllium can either successfully boot a topology that Lithium cannot, or it can boot it faster.

We note here that the boot-up times demonstrated in our tests are not necessarily the best possible that can be achieved for a certain topology size/type; testing with larger group sizes and/or smaller group delays could reveal further enhancements.

In idle MT–Cbench switch scalability tests the superiority of Beryllium is even clearer. As we can see, Beryllium is much more successful in discovering switches that expose themselves to the controller at a faster rate (lower group delay, Figs. 5(a) and 5(b)). This performance difference disappears as the switches are added less aggressively (Figs. 6(a) and 6(b)). The Beryllium release can successfully discover almost the maximum number of switches even with a group delay of 0.5 seconds. In contrast, Lithium can achieve the same numbers only with a delay of at least 8 seconds. Beryllium can generally boot large topologies much faster than Lithium.

4.5.2 Active scalability

In Multinet-based tests Lithium exhibits significantly higher throughput levels than Beryllium for every switch count. Nevertheless, this does not necessarily imply inferior performance, since, as we will show in section 5.3.1, traffic optimizations might have led to reduced volume overall. In any case, this scenario needs further analysis to better understand the difference, and will be a matter of future research.

In MT–Cbench based tests the performance gap between the two releases tends to close. In "RPC" mode the throughput is almost equivalent for Lithium and Beryllium. In "DataStore" mode there is also equivalence, with a slight superiority of Lithium. In this case the throughput values are much lower, but this is expected, as the controller data store is being intensively hit with the persistence module enabled; this adds to the critical path of the packet processing pipeline.

5. Stability tests

Stability tests explore how controller throughput behaves in a large time window with a fixed topology connected to it, Figs. 10(a), 10(b). The goal is to detect performance fluctuations over time.

The controller accepts a standard rate of incoming traffic and its response throughput is sampled periodically. NSTAT reports these samples as a time series.
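A simple post-processing pass over such a time series (our own illustrative sketch, not NSTAT's built-in analysis) can flag samples that deviate notably from the typical throughput:

```python
from statistics import median

def flag_fluctuations(samples, tolerance=0.10):
    """Return indices of throughput samples [responses/s] deviating more than
    'tolerance' (as a fraction) from the median of the whole series."""
    typical = median(samples)
    return [i for i, s in enumerate(samples)
            if abs(s - typical) > tolerance * typical]

# Example: flag_fluctuations([95000, 96000, 82000, 95500]) -> [2]
```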

5.1 Idle Multinet switches

The purpose of this test is to investigate the stability of the controller in serving standard traffic requirements of a large-scale Multinet topology of idle switches.

During main test execution Multinet switches respond to ECHO and MULTIPART messages sent by the controller at regular intervals. These message types dominate the total traffic volume during execution.

NSTAT uses oftraf [10] to measure the outgoing traffic of the controller. The metrics presented for this case are the OpenFlow packets and bytes collected by NSTAT every second, Fig. 13.


Fig. 11: Controller stability stress test with active MT–Cbench switches. Throughput and system total used memory comparison between OpenDaylight Lithium (SR3) and Beryllium (RC2). OpenDaylight running in "DataStore" mode.


In order to push the controller performance to its limits, the controller is executed on bare metal and Multinet on a set of interconnected virtual machines.

5.1.1 Test configuration

• topology size per worker node: 200
• number of worker nodes: 16
• group size: 1
• group delay: 2000ms
• topology type: "Linear"
• hosts per switch: 1
• period between samples: 10s
• number of samples: 4320
• persistence: enabled
• total running time: 12h

In this case the switch topology size is equal to 3200 (200 switches per worker node × 16 worker nodes).

5.1.2 Results

The results of this test are presented in Fig. 13.

5.2 Active MT–Cbench switches

In this series of experiments NSTAT uses a fixed topology of active MT–Cbench switches to generate traffic for a large time window. The switches send artificial OF1.0 PACKET_IN messages at a fixed rate to the controller, which replies with equally artificial OF1.0 FLOW_MOD messages; these message types dominate the traffic exchanged between the switches and the controller. We evaluate the controller both in "RPC" and "DataStore" modes.

In order to push the controller performance to its limits, all test nodes (controller, MT–Cbench) were executed on bare metal. To isolate the nodes from each other, the CPU shares feature of NSTAT was used [9].


Fig. 12: Controller stability stress test with active MT–Cbench switches. Throughput and system total used memory comparison between OpenDaylight Lithium (SR3) and Beryllium (RC2). OpenDaylight running in "RPC" mode.

Fig. 13: Controller 12-hour stability stress test with idle Multinet switches. (a) Outgoing packets/s and (b) outgoing kbytes/s vs repeat number, for OpenDaylight Lithium (SR3) and Beryllium (RC2).


Fig. 14: Representation of flow scalability stress test. An increasing number of NB clients (NB app j, j=1,2,...) install flows on the switches of an underlying OpenFlow switch topology via config traffic (REST).


5.2.1 Test configuration, ”DataStore” mode

• controller: "DataStore" mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 10
• topology size per MT–Cbench thread: 50 switches
• group delay: 8s
• number of samples: 4320
• period between samples: 10s
• persistence: enabled
• total running time: 12h

In this case, the total topology size is equal to 500 switches.

5.2.2 Results

The results of this test are presented in Fig. 11.

5.2.3 Test configuration, ”RPC” mode

• controller: "RPC" mode
• generator: MT–Cbench, latency mode
• number of MT–Cbench threads: 10
• topology size per MT–Cbench thread: 50 switches
• group delay: 8s
• number of samples: 4320
• period between samples: 10s
• persistence: enabled
• total running time: 12h

In this case, the total topology size is equal to 500 switches.

5.2.4 Results

The results of this test are presented in Fig. 12.

5.3 Conclusions

5.3.1 Idle stability tests

From Fig. 13 it is apparent that both releases exhibit stable behavior in a time window of 12 hours.

A first observation is that Beryllium has lower idle traffic overhead compared to Lithium. To investigate the exact reasons for this discrepancy we analyzed the traffic during a sample 1-min stability test with a Linear-switch Multinet topology. Both controllers were configured to request statistics every 5000 ms. The traffic breakdown for both cases is depicted in Table 2. It is clear that the reduced traffic in Beryllium is attributed to the reduction of MULTIPART messages, both incoming and outgoing, possibly as a result of optimizations implemented in Beryllium in this area.

Table 2: OpenFlow traffic breakdown for a sample 1-min stability test with a 3200-switch Multinet topology.

                            Lithium (SR3)         Beryllium (RC2)
                            Packets    Bytes      Packets    Bytes
Incoming traffic
  OFPT_ECHO_REQUEST           6558      45906       6832      47824
  OFPT_PACKET_IN              4723     612772       4957     643468
  OFPT_MULTIPART_REPLY      139528    8029791       4787    3402334
Outgoing traffic
  OFPT_PACKET_OUT             5037     643848       5307     678138
  OFPT_ECHO_REPLY             9135      63945       8901      62307
  OFPT_MULTIPART_REQUEST     80040    4265880       4797     132531


5.3.2 Active stability tests

As we can see in Figs. 11, 12, both controllers exhibit generally stable performance, at almost the same throughput levels.

Lithium performs slightly better than Beryllium in "RPC" mode. In all four cases, a transient performance degradation is observed for a relatively large period. After analyzing system metrics, we concluded that this behavior could be related to an increase in the total used memory of our system (see Figs. 11, 12), as a result of a possible memory leak.

6. Flow scalability tests

With flow scalability stress tests we try to investigate both capacity and timing aspects of flow installation via the controller's NB (RESTCONF) interface, Fig. 14.

This test uses the NorthBound flow generator [4] to create flows in a scalable and configurable manner (number of flows, delay between flows, number of flow worker threads). The flows are written to the controller configuration data store via its NB interface, and then forwarded to an underlying Multinet topology as flow modifications, where they are distributed to the switches in a balanced fashion.
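For reference, a single flow write through the NB interface has the general shape below (a sketch of an OpenDaylight RESTCONF config-datastore write; the flow body follows the OpenFlow plugin's inventory model, and exact fields may vary across releases):

```python
import json
import requests

def add_flow(ctrl_ip, node_id, table_id, flow_id, auth=("admin", "admin")):
    """PUT one flow into the controller's configuration data store."""
    url = ("http://%s:8181/restconf/config/opendaylight-inventory:nodes/"
           "node/%s/table/%d/flow/%s" % (ctrl_ip, node_id, table_id, flow_id))
    body = {"flow": [{
        "id": flow_id,
        "table_id": table_id,
        "priority": 100,
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
            "ipv4-destination": "10.0.0.1/32",
        },
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{"order": 0, "drop-action": {}}]},
        }]},
    }]}
    resp = requests.put(url, data=json.dumps(body), auth=auth,
                        headers={"Content-Type": "application/json"})
    return resp.status_code  # 200 (updated) or 201 (created) on success

# Example: add_flow("10.0.0.100", "openflow:1", 0, "1")
```

The flow generator issues many such requests concurrently from multiple worker threads, which is what the "add controller rate" metric below captures.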


Fig. 15: Flow scalability stress test results. (a) Add controller time and (b) add controller rate vs number of switches, for N=10^4 flow operations. Add controller time is forced to -1 when the test fails.

Fig. 16: Flow scalability stress test results. (a) Add controller time and (b) add controller rate vs number of switches, for N=10^5 flow operations. Add controller time is forced to -1 when the test fails.


The test verifies the success or failure of flow operations via the controller's operational data store. An experiment is considered successful when all flows have been installed on the switches and have been reflected in the operational data store of the controller. If not all of them have become visible in the data store within a certain deadline after the last update, the experiment is considered failed.

Intuitively, this test emulates a scenario where multiple NB applications, each controlling a subset of the topology2, simultaneously send flows to their switches via the controller's NB interface at a configurable rate.

The metrics measured in this test are:

• Add controller time (tadd): the time needed for all requests to be sent and their responses to be received (as in [5]).

• Add controller rate (radd): radd = N / tadd, where N is the aggregate number of flows to be installed by the worker threads (see the sketch after this list).

• End–to–end installation time (te2e): the time from the first flow installation request until all flows have been installed and become visible in the operational data store.
• End–to–end installation rate (re2e): re2e = N / te2e.

2 Subsets are non-overlapping and equally sized.
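Both rates follow directly from the measured times. As a worked example (using the 30-switch, N=10^4 Lithium data point of Fig. 15):

```python
def add_controller_rate(num_flows, t_add_s):
    """r_add = N / t_add: rate at which the NB interface accepts flow requests."""
    return num_flows / t_add_s

def end_to_end_rate(num_flows, t_e2e_s):
    """r_e2e = N / t_e2e: rate measured until all flows appear in the
    operational data store."""
    return num_flows / t_e2e_s

# 10000 flows accepted in 9.54 s -> ~1048 flows/s, matching Fig. 15(b).
print(add_controller_rate(10000, 9.54))
```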


Fig. 17: Flow scalability stress test results. (a) Add controller time and (b) add controller rate vs number of switches, for N=10^6 flow operations. Add controller time is forced to -1 when the test fails.

Fig. 18: Flow scalability stress test results. End-to-end installation rate vs number of switches, for (a) N=10^3 and (b) N=10^4 flow operations. End-to-end installation rate is forced to -1 when the test fails.


In this test, Multinet switches operate in idle mode, without initiating any traffic apart from the MULTIPART and ECHO messages with which they reply to the controller's requests at regular intervals.

6.1 Test configuration

For both Lithium and Beryllium we used the following setup:

• topology size per worker node: 1, 2, 4, 35, 70, 330
• number of worker nodes: 15
• group size: 1
• group delay: 3000ms
• topology type: "Linear"
• hosts per switch: 1
• total flows to be added: 1K, 10K, 100K, 1M
• flow creation delay: 0ms
• flow worker threads: 5
• persistence: disabled

In this case the switch topology size is equal to: 15, 30, 60, 525, 1050, 4950 (topology size per worker node × 15 worker nodes).


Fig. 19: Flow scalability stress test results. End-to-end installation rate vs number of switches, for (a) N=10^5 and (b) N=10^6 flow operations. End-to-end installation rate is forced to -1 when the test fails.

6.2 Results

6.2.1 Add controller time/rate

The results of this test are presented in Figs. 15, 16, 17.

6.2.2 End–to–end flow installation time/rate

The results of this experiment are presented in Figs. 18, 19.

6.3 Conclusions

Regarding NB performance, as expressed in terms of add controller time/rate, Lithium generally performs better than Beryllium, but this trend tends to diminish and is finally reversed for large topologies. Notably, Beryllium outperforms Lithium by a large factor in the 1M flows / 5K switches case.

However, regarding end-to-end performance, Beryllium is the clear winner, outperforming Lithium for all topology sizes and flow counts. In many extreme cases where Lithium fails, Beryllium manages to successfully install flows; this happens, for example, in the case of 1M flows, or in some 5K-switch topologies. This points to improvements in the Beryllium release relating to flow installation and discovery.

7. Contributors

• Nikos Anastopoulos
• Panagiotis Georgiopoulos
• Konstantinos Papadopoulos

References

[1] "NSTAT: Network Stress Test Automation Toolkit." https://github.com/intracom-telecom-sdn/nstat
[2] "MT–Cbench: A multithreaded version of the Cbench OpenFlow traffic generator." https://github.com/intracom-telecom-sdn/mtcbench
[3] "Multinet: Large-scale SDN emulator based on Mininet." https://github.com/intracom-telecom-sdn/multinet
[4] "NSTAT: NorthBound Flow Generator." https://github.com/intracom-telecom-sdn/nstat/wiki/NorthBound-flow-generator
[5] "OpenDaylight Performance: A practical, empirical guide. End-to-End Scenarios for Common Usage in Large Carrier, Enterprise, and Research Networks." https://www.opendaylight.org/sites/www.opendaylight.org/files/odl_wp_perftechreport_031516a.pdf
[6] "Cbench: OpenFlow traffic generator." https://github.com/andi-bigswitch/oflops/tree/master/cbench
[7] "Mininet: An instant virtual network on your laptop." http://mininet.org/
[8] "Multinet Features." https://github.com/intracom-telecom-sdn/multinet#features
[9] "Capping controller and emulator CPU resources in collocated tests." https://github.com/intracom-telecom-sdn/nstat/wiki/Capping-controller-and-emulator-CPU-resources-in-collocated-tests
[10] "OFTraf: pcap-based, RESTful OpenFlow traffic monitor." https://github.com/intracom-telecom-sdn/oftraf
[11] "Generate PACKET_IN events with ARP payload." https://github.com/intracom-telecom-sdn/multinet#generate-packet_in-events-with-arp-payload
