SDN & NFV: Hope or Hype?
Martin Dräxler, Holger Karl, Matthias Keller, Sevil Mehraghdam, Arne Schwabe, Philip Wette (University of Paderborn)
Overview
■ Software-defined networking
 § Technical context
 § Issues
 § Research examples
■ Network function virtualization
 § Technical context
 § Research examples
SDN technological context: Switches, routers
■ A typical switch or router combines two structures:
 § a high-performance data plane with simple, per-packet functionality (lookup, switch, buffer), and
 § complex decision logic in software (management: CLI, SNMP; routing protocols: OSPF, IS-IS, BGP)
[Figure: a router split into a hardware datapath and a software control part. From [1]]
Insight: A control interface exists
■ In typical router/switch designs, there is a control interface to the actual packet-forwarding functionality
■ It is fairly simple; it talks in terms of
 § matching header fields and
 § simple actions to perform
Simple network? Hardly:
■ Millions of lines of source code; 7279 RFCs; a high barrier to entry
■ 500M gates, 10 GB of RAM: bloated, power-hungry boxes
■ Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …
[Figure: a router split into a hardware datapath and a software control part. From [1]]
SDN core idea: Centralize control logic!
■ Use the existing control interface to the forwarding fabric
■ But pull the decision logic out of the switch/router
 § Today: many of those, distributed, with no single view of the network status
■ … and replace it by a centralized instance
Issues
■ About what to talk to the brain?
■ When to talk to the brain?
■ How to structure the brain?
■ How many brains, where?
About what to talk to the brain: Flows?!
■ Types of action
 § Allow/deny flow, route & re-route flow, isolate flow, make flow private, remove flow
■ What is a flow?
 § An application flow, all HTTP, all shuffle traffic of one node, Jim's traffic, all packets to Canada, …
From [1]
Packet-switching substrate in SDN
■ A packet is a collection of bits used to plumb flows (of different granularities) between end points: Ethernet DA, SA, etc.; IP DA, SA, etc.; TCP DP, SP, etc.; payload
From [2]
Popular SDN today: OpenFlow
[Figure: a switch consisting of a hardware data path and an OpenFlow control path, connected to an OpenFlow controller via the OpenFlow protocol over SSL/TCP. From [2]]
OpenFlow Protocol v1.0
■ Rule: match on header fields (switch port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP proto, TCP sport, TCP dport), plus a mask selecting which fields to match (see the sketch below)
■ Actions:
 1. Forward packet to port(s)
 2. Encapsulate and forward to controller
 3. Drop packet
 4. Send to normal processing pipeline
■ Stats: packet + byte counters
From [2]
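Purely as an illustration (this is not the OpenFlow wire format; all names below are our own), a minimal Python sketch of such a flow-table entry, with wildcard-able match fields, an action list, and per-rule packet/byte counters:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Match:
    """OpenFlow-1.0-style match; None means the field is wildcarded."""
    in_port: Optional[int] = None
    eth_src: Optional[str] = None
    eth_dst: Optional[str] = None
    eth_type: Optional[int] = None
    vlan_id: Optional[int] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    tcp_sport: Optional[int] = None
    tcp_dport: Optional[int] = None

    def matches(self, pkt: dict) -> bool:
        return all(v is None or pkt.get(k) == v
                   for k, v in vars(self).items())

@dataclass
class FlowEntry:
    priority: int
    match: Match
    actions: list      # e.g. [("fwd", 2)], [("to_controller",)], [("drop",)]
    packets: int = 0   # stats: packet counter
    bytes: int = 0     # stats: byte counter

def lookup(table: list, pkt: dict) -> Optional[FlowEntry]:
    """Return the highest-priority matching entry, updating its counters."""
    for entry in sorted(table, key=lambda e: -e.priority):
        if entry.match.matches(pkt):
            entry.packets += 1
            entry.bytes += pkt.get("len", 0)
            return entry
    return None  # table miss: OpenFlow 1.0 would send the packet to the controller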
A detour: Rings a bell?
■ An SDN switch
 § performs an exact match on some restricted header fields,
 § performs some simple actions: rewrite, forward, drop,
 § with decisions made at a central point
■ We have seen that before: MPLS, GMPLS, … A label switch
 § performs an exact match on a label,
 § performs some simple actions: rewrite, forward, drop, push/pop label,
 § with decisions made at a central point
■ Main differences
 § No need for label edge switches
 § Wildcarding
A detour: Three standard architectures
■ Compare three "standard architectures" (IP, MPLS, SDN) from the perspective of the three main interfaces that exist in any network architecture: host-to-network, packet-to-switch, operator-to-network

      | Host-to-Network            | Packet-to-Switch | Operator-to-Network
 IP   | Header fields in IP packet | Same             | None
 MPLS | Header fields in IP packet | Label            | Somewhat (e.g., PCE)
 SDN  | Depends                    | Depends          | Main focus

Idea from [4], 4WARD papers
Issues
■ About what to talk to the brain?
 § SDN, OpenFlow: flows
 § Other schemes (e.g., PCE in MPLS): virtual circuits, …
■ When to talk to the brain?
■ How to structure the brain?
■ How many brains, where?
When to talk to the brain?
■ Whenever a flow arrives at a switch that no rule matches; what else could the switch do?
■ Should this be the rule, or the exception?
 § The rule: this happens a lot. The brain then needs to be extremely fast, with very short latencies, to react properly
 § The exception: it happens rarely. How to achieve that? Reasonable defaults! Preconfiguring the network, with reasonable rule timeouts, is imperative
Issues
■ About what to talk to the brain?
 § SDN, OpenFlow: flows
 § Other schemes (e.g., PCE in MPLS): virtual circuits, …
■ When to talk to the brain?
 § Proactively! Reactive only as fallback
■ How to structure the brain?
■ How many brains, where?
How to structure the brain? Or: what is a controller?
■ Goal: compute flow mods
 § from many concurrent requests,
 § with lots of repetitive tasks,
 § but also with lots of creative aspects
[Figure: the switch's data path reports "Unknown flow!" to the OpenFlow controller, which replies with a FLOWMOD: match + action]
Controller structure to compute flow mods
■ Split controllers into
 § a reusable part: deals with concurrency, parsing, security, handling the OpenFlow protocol engine, …
 § a dedicated part: takes the actual decisions, e.g., a fancy multi-path routing scheme, load balancing, …
■ Distinguish between (see the sketch below)
 § controller framework: just the reusable part
 § control application: the dedicated part
 § controller: the two together
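A minimal sketch of this split in plain Python (illustrative only; real frameworks such as Ryu, POX, or OpenDaylight look quite different): the framework owns event dispatch and rule installation, the control application contributes only the decision logic:

class ControllerFramework:
    """Reusable part: connection handling, event dispatch, rule install."""
    def __init__(self):
        self.apps = []          # control applications (the dedicated part)

    def register(self, app):
        self.apps.append(app)

    def on_packet_in(self, switch, pkt):
        # The framework parses the event, then asks each app for a decision.
        for app in self.apps:
            flow_mod = app.decide(switch, pkt)
            if flow_mod is not None:
                self.install(switch, flow_mod)
                return

    def install(self, switch, flow_mod):
        print(f"install on {switch}: {flow_mod}")  # stand-in for the OpenFlow message

class LearningSwitchApp:
    """Dedicated part: the actual decision logic (here: MAC learning)."""
    def __init__(self):
        self.mac_to_port = {}

    def decide(self, switch, pkt):
        self.mac_to_port[pkt["eth_src"]] = pkt["in_port"]
        out = self.mac_to_port.get(pkt["eth_dst"])
        if out is None:
            return None            # unknown destination: framework floods/punts
        return {"match": {"eth_dst": pkt["eth_dst"]}, "action": ("fwd", out)}

ctrl = ControllerFramework()
ctrl.register(LearningSwitchApp())
ctrl.on_packet_in("s1", {"in_port": 1, "eth_src": "00:aa", "eth_dst": "00:bb"})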
Controller frameworks
■ Many controller frameworks exist
 § Core difference: the APIs used to connect the control application
 § Implementation: many options, e.g., separate processes or threads, or libraries bound together
■ Examples: Beacon, NOX/POX, Ryu, OpenDaylight, ONOS, …
 § Some are quite simple and straightforward, some very complex
 § Some come with a complete programming philosophy
[Figure: a monolithic controller (one app on a switch API, e.g., OpenFlow) contrasted with the Pyretic controller platform, where several apps (load balancer, route, monitor, firewall) run on a runtime behind a programmer API. From [3]]
Example network
[Figure: an SDN switch with labeled ports: port 1 towards the Internet, ports 2 and 3 towards servers A and B. From [3]]
A simple OpenFlow router
■ Flow table (priority: pattern -> action):
 2: dstip=A -> fwd(2)
 2: dstip=B -> fwd(3)
 1: * -> fwd(1)
[Figure: the switch forwards dstip=A to port 2, dstip=B to port 3, and everything else (dstip!=A, dstip!=B) to port 1. From [3]]
Router turns load balancer
■ Suppose we turn the router into a load balancer for servers A and B
 § Which header fields could you use? Which rules are needed? Which priorities?
■ Then: can we combine the router program and the load balancer program?
■ Load balancer (rewriting the destination IP of a public address P):
 srcip=0*, dstip=P -> mod(dstip=A)
 srcip=1*, dstip=P -> mod(dstip=B)
And THAT is the challenge of SDN!
From [3]
Addressing this challenge: Pyretic (one approach)
■ Controller application for access control: block host 10.0.0.3

 def access_control():
     return ~(match(srcip='10.0.0.3') |
              match(dstip='10.0.0.3'))

■ Access control, then flood:

 access_control() >> flood()

■ And many more, with simple operators, embedded in Python
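To make the composition operators concrete without installing Pyretic, here is a self-contained toy re-implementation of the operators used above (match, ~, |, >>). It mimics the style of [3] but is not Pyretic's actual code; in particular, real Pyretic policies map packets to sets of packets, while this sketch reduces everything to a boolean pass/drop decision:

class Policy:
    def eval(self, pkt):          # True if the packet passes this policy
        raise NotImplementedError
    def __invert__(self):         # ~p: negation
        return Not(self)
    def __or__(self, other):      # p | q: union of predicates
        return Or(self, other)
    def __rshift__(self, other):  # p >> q: sequential composition
        return Seq(self, other)

class match(Policy):
    def __init__(self, **fields): self.fields = fields
    def eval(self, pkt):
        return all(pkt.get(k) == v for k, v in self.fields.items())

class Not(Policy):
    def __init__(self, p): self.p = p
    def eval(self, pkt): return not self.p.eval(pkt)

class Or(Policy):
    def __init__(self, p, q): self.p, self.q = p, q
    def eval(self, pkt): return self.p.eval(pkt) or self.q.eval(pkt)

class Seq(Policy):
    def __init__(self, p, q): self.p, self.q = p, q
    def eval(self, pkt):
        # Only packets passing p are handed on to q.
        return self.p.eval(pkt) and self.q.eval(pkt)

class flood(Policy):
    def eval(self, pkt):
        print("flood", pkt)
        return True

def access_control():
    return ~(match(srcip='10.0.0.3') | match(dstip='10.0.0.3'))

policy = access_control() >> flood()
policy.eval({'srcip': '10.0.0.1', 'dstip': '10.0.0.2'})  # flooded
policy.eval({'srcip': '10.0.0.3', 'dstip': '10.0.0.2'})  # blocked, no flood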
Issues
■ About what to talk to the brain?
 § SDN, OpenFlow: flows
 § Other schemes (e.g., PCE in MPLS): virtual circuits, …
■ When to talk to the brain?
 § Proactively! Reactive only as fallback
■ How to structure the brain?
 § Separate controller framework and control applications
 § Provide means to compose control applications
 § Incorporate into other infrastructure (e.g., Neutron for OpenStack)
■ How many brains, where?
A single controller?
■ Dependability jeopardized
 § Multiple controllers: which does what?
 § Separate roles: master and backup
 § Separate regions: where are the borders?
 § Hierarchies? With repartitioning?
■ Performance jeopardized
 § Local controllers working for regional controllers
 § Low latency needed!
 § A facility location problem
■ Classic distributed systems & optimization problems!
Excerpt from the controller placement analysis in [5]:

[Figure 1: Optimal placements for 1 and 5 controllers in the Internet2 OS3E deployment; the average-latency-optimized and the worst-case-latency-optimized placements pick different locations.]

Worst-case latency. An alternative metric is worst-case latency, defined as the maximum node-to-controller propagation delay:

$$L_{wc}(S') = \max_{v \in V} \min_{s \in S'} d(v, s) \qquad (2)$$

where again we seek the minimizing placement $S' \subseteq S$. The related optimization problem is minimum k-center [21].

Nodes within a latency bound. Rather than minimizing the average or worst case, we might place controllers to maximize the number of nodes within a latency bound; the general version of this problem on arbitrary overlapping sets is called maximum cover [14]. An instance of this problem includes a number $k$ and a collection of sets $S = \{S_1, S_2, \ldots, S_m\}$, where $S_i \subseteq \{v_1, v_2, \ldots, v_n\}$. The objective is to find a subset $S' \subseteq S$ of sets such that $\left|\bigcup_{S_i \in S'} S_i\right|$ is maximized and $|S'| = k$. Each set $S_i$ comprises all nodes within a latency bound from a single node.

In the following sections, we compute only average and worst-case latency, because these metrics consider the distance to every node, unlike nodes within a latency bound. Each optimal placement shown in this paper comes from directly measuring the metrics on all possible combinations of controllers. This method ensures accurate results, but at the cost of weeks of CPU time; the complexity is exponential in k, since brute force must enumerate every combination of controllers. To scale the analysis to larger networks or higher k, the facility location problem literature provides options that trade off solution time and quality, from simple greedy strategies (pick the next vertex that best minimizes latency, or pick the vertex farthest away from the current selections) to ones that transform an instance of k-center into other NP-complete problems like independent set, or even ones that use branch-and-bound solvers with Integer Linear Programming. We leave their application to future work.

5. Analysis of Internet2 OS3E. Having defined our metrics, we now ask a series of questions to understand the benefits of multiple controllers for the Internet2 OS3E topology [4]. To provide some intuition for placement considerations, Figure 1 shows optimal placements for k = 1 and k = 5; the higher density of nodes in the northeast relative to the west leads to a different optimal set of locations for each metric. For example, to minimize average latency for k = 1, the controller should go in Chicago, which balances the high density of east coast cities with the lower density of cities in the west. To minimize worst-case latency for k = 1, the controller should go in Kansas City instead, which is closest to the geographic center of the US.

[Figure 2: Latency CDFs for all possible controller combinations for k = 1..5: average latency (left), worst-case latency (right).]
[Figure 3: Ratio of random choice to optimal.]

5.1 How does placement affect latency? In this topology, placement quality varies widely. A few placements are pathologically bad, most are mediocre, and only a small percent approach optimal. Figure 2 shows this data as cumulative distributions, covering all possible placements for k = 1 to k = 5, with optimal placements at the bottom. All graphs in this paper show one-way network distances, with average-optimized values on the left and worst-case-optimized values on the right. If we simply choose a placement at random for a small value of k, the average latency is between 1.4x and 1.7x larger than that of the optimal placement, as seen in Figure 3. This ratio is larger for worst-case latencies; it starts at 1.4x and increases up to 2.5x at k = 12. Spending the cycles to optimize a placement is worthwhile.

5.2 How many controllers should we use? It depends. Reducing the average latency to half that at k = 1 requires three controllers, while the same reduction for worst-case latency requires four controllers. Assuming we optimize for one metric, potentially at the expense of the other, where is the point of diminishing returns? Figure 4 shows the benefit-to-cost ratios for a range of controllers, defined as $(lat_1/lat_k)/k$. A ratio of 1.0 implies a proportional reduction; that is, for k controllers, the latency is 1/k of the original one-controller latency.

[Figure 4: Cost-benefit ratios; a value of 1.0 indicates proportional reduction, where k controllers reduce latency to 1/k of the original one-controller latency. Higher is better.]
From [5]
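The brute-force measurement described in the excerpt can be sketched in a few lines, assuming a networkx graph with propagation delays as edge weights (our own toy code, not the paper's):

import itertools
import networkx as nx

def placement_metrics(G, k, weight="delay"):
    """Brute-force optimal average- and worst-case-latency placements.

    Enumerates all k-subsets of nodes as controller locations, so this is
    exponential in k, exactly as described in [5]; fine for toy topologies.
    """
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    nodes = list(G.nodes)
    best_avg, best_wc = (float("inf"), None), (float("inf"), None)
    for S in itertools.combinations(nodes, k):
        # Each node talks to its nearest controller in S.
        d = [min(dist[v][s] for s in S) for v in nodes]
        avg, wc = sum(d) / len(d), max(d)
        if avg < best_avg[0]:
            best_avg = (avg, S)
        if wc < best_wc[0]:
            best_wc = (wc, S)
    return best_avg, best_wc

# Tiny example: a path graph with unit delays.
G = nx.path_graph(6)
nx.set_edge_attributes(G, 1, "delay")
print(placement_metrics(G, k=2))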
Issues
■ About what to talk to the brain?
 § SDN, OpenFlow: flows
 § Other schemes (e.g., PCE in MPLS): virtual circuits, …
■ When to talk to the brain?
 § Proactively! Reactive only as fallback
■ How to structure the brain?
 § Separate controller framework and control applications
 § Provide means to compose control applications
 § Incorporate into other infrastructure (e.g., Neutron for OpenStack)
■ How many brains, where?
 § A complex decision problem; highly depends on the scenario
Research Example: MaxiNet: Distributed Emulation of Software-Defined Networks https://www.cs.upb.de/?id=maxinet
How to emulate a data center?
■ Data centers have
 § a high number of switches and servers,
 § high-speed links (10 Gbps),
 § high link utilization
■ Evaluate SDN ideas: use Mininet
 § An emulator; runs many machines/switches as processes in Linux network namespaces
■ Key: time dilation (see the sketch below)
 § Emulate one second of a 10G link by 10 seconds of a 1G link
[Figure: data-center topology with racks, ToR switches, pods, and a core layer]
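Back-of-the-envelope arithmetic for time dilation (our own illustration, not MaxiNet's API): scale every rate down by the time dilation factor (TDF), run longer, then rescale measured times:

def dilate(real_rate_bps: float, tdf: float) -> float:
    """Emulated link rate when slowing time down by factor `tdf`."""
    return real_rate_bps / tdf

def emulated_duration(real_seconds: float, tdf: float) -> float:
    """Wall-clock time needed to emulate `real_seconds` of real time."""
    return real_seconds * tdf

# Emulating a 10 Gbps link on 1 Gbps hardware needs TDF >= 10:
assert dilate(10e9, tdf=10) == 1e9
# 60 s of data-center traffic at TDF 200 (the test setup below)
# takes 12,000 s, i.e. 200 minutes, of wall-clock time:
print(emulated_duration(60, tdf=200) / 60, "minutes")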
MaxiNet: Distributing Mininet to Multiple Machines
■ MaxiNet is a framework for distributing Mininet emulations onto multiple workers
 § A virtual network is emulated on a cluster of workers
[Figure: the virtual data-center topology (racks, ToRs, pods, core) mapped onto a cluster of workers]
MaxiNet at a glance
■ MaxiNet partitions the virtual topology into N parts
 § Switches must not be split in half
 § Goal: minimize the edge cut (sketched below)
■ From each partition, a new topology is built and emulated by Mininet on a dedicated worker
[Figure: the data-center topology partitioned across the workers]
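MaxiNet delegates this to a dedicated graph partitioner; purely to illustrate the edge-cut objective, a two-way partition can be sketched with networkx's Kernighan-Lin bisection:

import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy topology: two "racks" joined by a single core link.
G = nx.Graph()
G.add_edges_from([("tor1", f"h1{i}") for i in range(4)])
G.add_edges_from([("tor2", f"h2{i}") for i in range(4)])
G.add_edge("tor1", "tor2")  # the link we expect in the cut

part_a, part_b = kernighan_lin_bisection(G)
cut = [(u, v) for u, v in G.edges if (u in part_a) != (v in part_a)]
print("edge cut:", cut)  # ideally just [('tor1', 'tor2')]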
Test setup
■ Generated 60 seconds of TCP traffic for the data center
 § Clos-like topology: 20 servers per rack, 160 racks
 § Every 8 ToR switches form a pod, with 2 pod switches each
 § Core layer consists of 7 switches
 § In total, we emulated 207 switches and 3200 servers
 § Used a time dilation factor of 200
■ MaxiNet cluster consisted of 12 machines
 § Intel Xeon E5506 CPUs (2x quad-core, 2.16 GHz), 12 GB RAM
 § 1 Gbit/s NICs, wired to a Cisco Catalyst 2960G-24TC-L switch
■ Implemented ECMP on top of the Beacon controller
 § Controller was placed out-of-band, directly connected to the Cisco Catalyst 2960G-24TC-L
[Figure: data-center topology (racks, ToRs, pods, core)]
Result: Load at the OpenFlow controller
■ On average: 4% CPU utilization and 5 Mbit/s of control traffic (at time dilation factor 200)
■ => Using our ECMP implementation in a real data center, the controller would have to be at least 8x faster than our lab machine (time dilation compresses load by 200x, and 200 x 4% = 8 machine equivalents)
[Figure: controller CPU utilization [%] and control-channel data rates (RX/TX) [Mbit/s] over the 12,000 s emulation]
Research example: DCT2Gen: A Versatile TCP Traffic Generator for Data Centers
http://www.cs.uni-paderborn.de/fachgebiete/fachgebiet-rechnernetze/people/philip-wette-msc/dct2gen.html
Where to get input traffic for emulation?
■ Wanted: Layer 4 TCP traffic; not easily available for large data centers
■ Available: some observations of Layer 2 traffic
■ Conceivable workflow (a toy version of the Generate step follows below):
 (1) observed L2 traces -> [Analyze] -> (2) observed L2 traffic distributions -> [Abstract] -> (3) inferred L4 traffic distributions -> [Generate] -> (4) generated L4 traffic schedule -> [Emulate] -> (5) generated L2 traces -> [Analyze] -> (6) generated L2 traffic distributions
 § Check: do the generated distributions (6) match the observed ones (2)?
 § The Analyze/Abstract/Generate stages are the part covered by DCT2Gen
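A toy version of the Generate step, assuming the L4 distributions have already been inferred; every distribution and parameter here is made up for illustration, and DCT2Gen's real model is considerably richer:

import random

def generate_schedule(n_flows, hosts, flow_size_cdf, seed=42):
    """Sample a simple L4 traffic schedule from inferred distributions.

    flow_size_cdf: list of (size_bytes, cum_prob), an inferred flow-size CDF.
    Returns (start_time, src, dst, size) tuples; Poisson arrivals assumed.
    """
    rng = random.Random(seed)
    t, schedule = 0.0, []
    for _ in range(n_flows):
        t += rng.expovariate(100.0)      # assumed arrival rate: 100 flows/s
        src, dst = rng.sample(hosts, 2)  # uniform host pairs (simplification)
        u = rng.random()
        size = next(s for s, p in flow_size_cdf if u <= p)
        schedule.append((t, src, dst, size))
    return schedule

cdf = [(1_000, 0.5), (100_000, 0.9), (10_000_000, 1.0)]  # made-up CDF
for flow in generate_schedule(5, [f"h{i}" for i in range(8)], cdf):
    print(flow)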
Challenges
■ Payload and ACK traffic, traffic matrices, rack awareness, …
■ Example: deconvolving payload/ACK traffic (toy forward model below)
 § The payload-size distribution (Layer 4) determines the ACK-size distribution (Layer 4); together they imply the observed flow-size distribution (Layer 2), which must be deconvolved to recover the payload sizes
[Figure: flow-size CDFs from 10^0 to 10^8 bytes, comparing the original, the convolved, the payload, and the ACK distributions]
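To see why this is a deconvolution problem, here is a toy forward model (our own illustration with assumed constants, not DCT2Gen's algorithm): each payload flow triggers a reverse ACK flow of roughly proportional size, and the L2 trace observes only the mixture of the two:

from collections import Counter

MSS, ACK_BYTES = 1460, 66  # assumed segment and ACK-packet sizes

def ack_size(payload_bytes: int) -> int:
    """Toy model: one ACK per two segments (delayed ACKs)."""
    segments = -(-payload_bytes // MSS)  # ceiling division
    return max(1, segments // 2) * ACK_BYTES

def observed_l2_mixture(payload_sizes):
    """The L2 trace sees payload flows and their ACK flows mixed together."""
    sizes = list(payload_sizes) + [ack_size(s) for s in payload_sizes]
    return Counter(sizes)

payloads = [500, 20_000, 1_000_000]
print(observed_l2_mixture(payloads))
# Recovering `payloads` from this mixture alone is the deconvolution
# problem DCT2Gen has to solve.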
DCT2Gen: Practical use cases
■ Download our TCP traffic traces
■ Obtain your own L2 traffic and use DCT2Gen to produce your own TCP traces
■ Change the L2 input distributions to obtain synthetic TCP traces
■ Feed the TCP traces to your evaluation tool
 § DCT2Gen combines particularly well with MaxiNet, but also with Mininet, network simulators, …
Let's start a collection! An "SNDLib" for data centers!
Research example: MAC Addresses as Efficient Routing Labels in Data Centers
Problem statement
■ Forwarding in data centers: flat tables keyed on destination MAC addresses
 § Very large (one entry per host)
 § Cannot be aggregated
[Figure: a packet for destination MAC 0a:3f:92 traversing several switches; each switch holds a flat table mapping every host MAC (00:3f:92, 2a:77:b4, a6:d7:f9, 2c:1c:66, af:e2:8f, …) to an output port]
Using labels and wildcards
■ Replace the destination MAC by a routing label (e.g., ffa00001); switches then match labels with wildcards, for example:
 ffa????1 -> port 1
 ffa????2 -> port 2
 ffa????3 -> port 3
 ffa????4 -> port 4
■ A few wildcard rules per switch replace the per-host entries (see the matching sketch below)
[Figure: labeled packets (labels such as ffa00001, ffa00101, ffa02303, ffa05001) being forwarded by small wildcard tables]
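A sketch of the wildcard matching (the label layout, an 'ffa' prefix plus path digits plus a port digit, is our reading of the figure and purely illustrative):

def matches(label: str, pattern: str) -> bool:
    """Wildcard match: '?' in the pattern matches any hex digit."""
    return len(label) == len(pattern) and all(
        p == "?" or p == c for c, p in zip(label, pattern))

# One wildcard rule per output port instead of one rule per host:
table = [("ffa????1", 1), ("ffa????2", 2), ("ffa????3", 3), ("ffa????4", 4)]

def forward(label: str) -> int:
    for pattern, port in table:
        if matches(label, pattern):
            return port
    raise LookupError(f"no rule for label {label}")

print(forward("ffa00001"))  # -> port 1
print(forward("ffa02303"))  # -> port 3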
Adding labels
■ How do the labels get onto the packets?
 § Variant 1: the ingress switch keeps a table mapping destination MACs to labels (e.g., 0a:3f:92 -> ffa00001, df:47:3b -> eea0002, …) and rewrites packets accordingly
 § Variant 2: let hosts use label MACs directly. Hosts resolve IPs via ARP (e.g., 10.0.0.2 -> 0a:3f:92); these ARP mappings can be "messed up" by an SDN controller so that, e.g., 10.0.0.2 resolves to the label MAC ff:a0:01 instead. Switches then forward on wildcarded MAC prefixes (ff:??:?? -> port 1, ee:??:?? -> port 3, cc:??:?? -> port 2)
[Figure: ingress label table, original and controller-rewritten ARP tables, and wildcard forwarding tables]
Challenges
■ Objectives
 § Find labels that minimize table sizes
 § No operating-system changes
■ Finding such labels is NP-complete; a good heuristic is available
Intermediate summary: SDN
■ Software-defined networking is an archetypical example of a hype
 § Lots of attention for a concept that had existed (under different names) for a long time already
 § But now with better marketing
■ Huge industry interest (and politics)
 § Standardization fora: Open Networking Foundation, IRTF SDN-RG
 § Industry consortium for a controller platform: OpenDaylight (Cisco vs. BigSwitch, …)
 § Coexistence of controller platforms is an open question
■ SDN in isolation
 § Advantages, but not a complete band-aid
 § Claim: it gets really useful when combined with application knowledge
Overview
■ Software-defined networking
 § Technical context
 § Issues
 § Research examples
■ Network function virtualization
 § Technical context
 § Research examples
Network function virtualization in ISP networks
■ ISP networks operate many functions on the actual flows
 § (And not just on signalling data, as in SDN)
 § Examples: firewalls, deep packet inspection, load balancers, signal processing (e.g., CoMP in mobile access networks)
■ Conventional approach: one function, one box
 § Expensive, slow rollout, …
■ Virtualize! => Virtualized network functions
 § Commodity boxes operate on packet flows
 § Rollout: install/activate software on a box, adapt routing (=> relationship to SDN)
NFV: Current developments
■ Heavily pushed by ISPs
■ Still lots of research as well as development going on
 § Big EU projects: T-NOVA, UNIFY, …
■ Key questions: architecture, interfaces, orchestration approaches
■ Initial standardization efforts: ETSI MANO working group
ETSI MANO: Network function virtualization components
[Figure: the NFV Infrastructure (NFVI) provides hardware resources (compute, storage, network) and, through a virtualization layer, virtual compute/storage/network resources as logical abstractions. VNF (Virtualized Network Function) software instances run on these virtual resources. An end-to-end network service between end points is realized as a VNF forwarding graph, aka service chain: VNFs connected by logical links.]
ETSI MANO: Management and orchestration architecture
[Figure: the NFV-MANO block comprises the NFV Orchestrator (NFVO), the VNF Manager (VNFM), and the Virtualised Infrastructure Manager (VIM), alongside the NS catalog, VNF catalog, NFV instances, and NFVI resources repositories. It connects to OSS/BSS, EMS, the VNFs, and the NFVI via reference points (Os-Nfvo, Or-Vnfm, Or-Vi, Vnfm-Vi, VeEn-Vnfm, VeNf-Vnfm, Nf-Vi, Vn-Nf). Source: ETSI NFV MANO WI document (ongoing work)]
Research example: Local heuristics for individual flow processor placement
Where to process which flow?
■ Similar problem to SDN controller placement, but now the flows have to pass through the processing nodes
 § In SDN, the controller is usually not on the data path
 § Formally: a facility location problem combined with a multi-commodity flow problem
■ Variants:
 § Just process a flow (possibly combining multiple flows): mobile backhaul networks
 § Process a flow and return an answer: distributed cloud computing (DCC)
 § Server or network function: does not really matter
 § Here: a local heuristic
NFV/DCC: Local placement heuristic
■ Idea: assign customers to nearby facilities
 § Open a facility when a user "arrives"
 § Assign users until the facility is full, then keep looking
 § Essentially: an expanding ring search (sketched below)
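A minimal sketch of such an expanding ring search on a networkx graph (our own rendering of the idea, ignoring facility opening costs; not the paper's algorithm): starting at the user's node, explore nodes ring by ring until a facility with spare capacity is found:

import networkx as nx

def expanding_ring_assign(G, user, capacity, load):
    """Assign `user` to the nearest facility with spare capacity.

    capacity: dict node -> slots (0 for nodes hosting no facility).
    load:     dict node -> currently assigned users (mutated on success).
    """
    # nx.bfs_layers yields nodes ring by ring around the user.
    for ring in nx.bfs_layers(G, user):
        for node in ring:
            if load.get(node, 0) < capacity.get(node, 0):
                load[node] = load.get(node, 0) + 1
                return node
    return None  # no facility has room anywhere

G = nx.grid_2d_graph(4, 4)          # toy network
capacity = {(0, 0): 2, (3, 3): 2}   # two facility nodes
load = {}
for user in [(1, 1), (1, 0), (2, 2)]:
    print(user, "->", expanding_ring_assign(G, user, capacity, load))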
Research example: Specifying and Placing Chains of Virtual Network Functions
From individual network functions to chains
■ Usually, more than one network function is needed to process a flow
 § Network functions form chains
 § ETSI MANO: VNF forwarding graph
■ How to specify a function chain?
 § Simple: fully, step by step; limits the flexibility of placement
 § Flexible: leave concurrency in the specification; order does not always matter! (see the DAG sketch below)
 § Placement can then trade off processing against data rate, delay, …
[Figure: two realizations of the same chain: f1 -> f2 -> load balancer -> three f3 instances, versus the load balancer first with a separate f1 -> f2 -> f3 pipeline per branch]
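One way to leave concurrency in a specification is to write the chain as a partial order (a DAG): every topological order is then a valid concrete chain. A sketch with networkx and hypothetical function names:

import networkx as nx

# Chain template as a DAG: f1 and f2 are unordered relative to each other,
# both must precede the load balancer, which precedes f3.
spec = nx.DiGraph()
spec.add_edges_from([("f1", "lb"), ("f2", "lb"), ("lb", "f3")])

# Every topological order is a valid concrete chain; the placement
# algorithm may pick whichever suits the substrate best.
for order in nx.all_topological_sorts(spec):
    print(" -> ".join(order))
# f1 -> f2 -> lb -> f3
# f2 -> f1 -> lb -> f3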
Extension: Template embedding
■ Current work: templates for NFV chains
 § Do not fix the chain structure
 § Rather, specify the relative performance and capabilities of each stage
 § E.g.: "one database can serve 10 web servers"
■ Then: embed the template, adapting it to the current load situation
 § Using the relationships encoded in the template (see the sketch below)
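Such scaling relationships make instance counts a simple function of the offered load; a toy computation (all ratios invented for illustration):

import math

# Template capacities: one web server handles 100 clients, and
# "one database serves 10 web servers", i.e. 1000 clients.
template = {"web": 100, "db": 10 * 100}

def embed(template, n_clients):
    """Derive per-stage instance counts for the current load."""
    return {stage: math.ceil(n_clients / cap)
            for stage, cap in template.items()}

print(embed(template, 2500))  # {'web': 25, 'db': 3}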
Intermediate summary: NFV
■ Heavy original push from industry, rather than from academia
 § Addresses a concrete, real-world problem
 § But perhaps with less sparkle than SDN
■ First standardization under way, yet still considerable open issues
 § Scalability, non-trivial network functions, …
SFB 901: On-the-fly computing
■ Not just software-defined networks, but also software-defined infrastructure
■ Not just software-defined infrastructure, but also "software-defined software"
 § Actually: software is created at usage time, using request information
 § Software components are configured automatically
http://sfb901.uni-paderborn.de/
Conclusions
■ Our infrastructures will become more flexible and more adaptive
 § Not only networking: servers in any case, and storage is getting there
■ There is hope that this will simplify infrastructures
■ There is unquestionably incredible hype
■ Do we have the knowledge to exploit that?
■ Do we have the foresight to exploit that?
References
1. Various talks by Nick McKeown on SDN, http://yuba.stanford.edu/~nickm/talks.html
2. B. Heller et al., "Tutorial: SDN for Engineers," Open Networking Summit, Santa Clara, April 2012. http://www.opennetsummit.org/tutorials.html
3. J. Reich, "Modular SDN Programming with Pyretic," Princeton. http://www.frenetic-lang.org/pyretic
4. M. Casado, T. Koponen, S. Shenker, and A. Tootoonchian, "Fabric: A Retrospective on Evolving SDN," in Proc. First Workshop on Hot Topics in Software Defined Networks (HotSDN '12), 2012, pp. 85–89.
5. B. Heller, R. Sherwood, and N. McKeown, "The Controller Placement Problem," in Proc. HotSDN '12 / ACM SIGCOMM Computer Communication Review, vol. 42, no. 4, p. 473, 2012.
6. See http://www.nec-labs.com/~lume/sdn-reading-list.html for a very good list of papers!
Backup slides
Balance then route (in sequence)
■ Balance: srcip=0*, dstip=P -> mod(dstip=A); srcip=1*, dstip=P -> mod(dstip=B)
■ Route: dstip=A -> fwd(2); dstip=B -> fwd(3); * -> fwd(1)
■ Combined rules? (only one rule matches per packet)
 § Installing only the balancer's rules balances without forwarding!
 § Installing only the router's rules forwards without balancing!
 § Naively concatenating both rule lists fails the same way: each packet is either rewritten or forwarded, never both
Route and monitor (in parallel)
■ Route: dstip=10.0.0.2 -> fwd(2); dstip=10.0.0.3 -> fwd(3); * -> fwd(1)
■ Monitor: srcip=5.6.7.8 -> count
■ Rules installed on the switch?
 § The monitor rule alone counts but doesn't forward!
 § Naively concatenating monitor and route rules forwards but doesn't count: each packet again matches only one rule
[Figure: one switch; port 1 upstream, port 2 to 10.0.0.2, port 3 to 10.0.0.3]
Requires a cross product [ICFP'11, POPL'12]
■ The correctly combined rules installed on the switch form the cross product of both rule sets:
 srcip=5.6.7.8, dstip=10.0.0.2 -> fwd(2), count
 srcip=5.6.7.8, dstip=10.0.0.3 -> fwd(3), count
 srcip=5.6.7.8 -> fwd(1), count
 dstip=10.0.0.2 -> fwd(2)
 dstip=10.0.0.3 -> fwd(3)
 * -> fwd(1)
[Figure: one switch; port 1 upstream, port 2 to 10.0.0.2, port 3 to 10.0.0.3]
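A toy sketch of that parallel-composition cross product (illustrative only; the real algorithms are in the ICFP'11/POPL'12 work cited above):

def cross_product(policy_a, policy_b):
    """Combine two rule lists for parallel composition.

    Rules are (match_dict, [actions]); earlier rules have higher priority.
    A combined rule fires both action lists whenever the two matches are
    simultaneously satisfiable (no contradicting field values).
    """
    combined = []
    for m1, a1 in policy_a:
        for m2, a2 in policy_b:
            if all(m2.get(k, v) == v for k, v in m1.items()):  # compatible?
                combined.append(({**m2, **m1}, a1 + a2))
    # Keep the originals as lower-priority fallbacks.
    return combined + policy_a + policy_b

route = [({"dstip": "10.0.0.2"}, ["fwd(2)"]),
         ({"dstip": "10.0.0.3"}, ["fwd(3)"]),
         ({}, ["fwd(1)"])]
monitor = [({"srcip": "5.6.7.8"}, ["count"])]
for match, actions in cross_product(route, monitor):
    print(match, "->", actions)   # reproduces the table above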