Introduction to the Open Network Operating System (ONOS), an open-source distributed SDN OS from ON.Lab. www.onlab.us
ONOS: Open Network Operating System
An Experimental Open-Source Distributed SDN OS
Umesh Krishnaswamy, Pankaj Berde, Jonathan Hart, Masayoshi Kobayashi, Pavlin Radoslavov, Tim Lindberg, Rachel Sverdlov, Suibin Zhang, William Snow, Guru Parulkar
Logically Centralized NOS – Key Questions
[Figure: the SDN architecture – control applications (Routing, TE, Mobility, Programmable Base Station) running on a Network OS that exposes a Global Network View over the packet-forwarding elements]
• OpenFlow scale-out?
• Fault tolerance?
• How to realize the global network view?
Prior Work
ONIX: a distributed control platform for large-scale networks
• Focus on reliability, scalability, and generality
• State distribution primitives, global network view, ONIX API
Other Work
• Helios (NEC), Midonet (Midokura), HyperFlow, Maestro, Kandoo
• NOX, POX, Beacon, Floodlight, Trema controllers
The community needs an open source distributed SDN OS
ONOS High Level Architecture
[Figure: three ONOS instances (Instance 1, 2, 3), each running an OF controller (Floodlight), sharing a Network Graph (eventually consistent: Titan Graph DB on a Cassandra in-memory DHT) and a Distributed Registry (strongly consistent: ZooKeeper); hosts attach to the controlled switches]
ONOS Scale-Out
[Figure: the Distributed Network OS (Instances 1–3) maintaining the Network Graph, a global network view, over the data plane]
• An instance is responsible for maintaining a part of the network graph
• Control capacity can grow with network size or application need
ONOS Control Plane Failover
[Figure: Instances 1–3 of the Distributed Network OS, the Distributed Registry, and a data plane of switches A–F with attached hosts]
• Initially every instance sees the registry state: Master of switch A = ONOS 1; candidates = ONOS 2, ONOS 3
• When ONOS 1 fails, the registry marks switch A as having no master (Master = NONE; candidates = ONOS 2, ONOS 3)
• A new master is elected from the candidates: Master of switch A = ONOS 2; candidates = ONOS 3
ONOS Network Graph Abstraction
[Figure: a small Network Graph – vertices for switches A, B, C (ids 1–3) and labeled elements (ids 101–106) – layered on the Titan Graph DB, which is stored in the Cassandra in-memory DHT]
Network Graph
[Figure: the graph schema – each switch vertex is connected to its port vertices by "on" edges, ports are joined to other ports by "link" edges, and device/host vertices attach to ports]
• Network state is naturally represented as a graph
• The graph has basic network objects like switches, ports, devices, and links
• Applications write to this graph and program the data plane
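A minimal sketch of populating such a graph through the TinkerPop Blueprints API, which Titan implements; an in-memory TinkerGraph stands in for Titan here, and the property names and edge directions are illustrative assumptions, not the actual ONOS schema:

```java
import com.tinkerpop.blueprints.Graph;
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.tg.TinkerGraph;

public class NetworkGraphSketch {
    public static void main(String[] args) {
        Graph g = new TinkerGraph();  // in-memory stand-in for Titan

        // A switch vertex and one of its port vertices.
        Vertex sw = g.addVertex(null);
        sw.setProperty("type", "switch");
        sw.setProperty("dpid", "00:00:00:00:00:00:00:01");

        Vertex port = g.addVertex(null);
        port.setProperty("type", "port");
        port.setProperty("number", 1);

        // "on" edge: the port is on the switch (direction is an assumption).
        g.addEdge(null, sw, port, "on");

        // A device (host) attached to the port via a "host" edge.
        Vertex device = g.addVertex(null);
        device.setProperty("type", "device");
        device.setProperty("mac", "00:00:00:00:00:0a");
        g.addEdge(null, port, device, "host");

        g.shutdown();
    }
}
```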
Example: Path Computation App on the Network Graph
[Figure: a flow path vertex connected by "flow" edges to flow entry vertices; each flow entry references a switch and its inport/outport along the source-to-destination path through the graph]
• The application computes a path by traversing the links from source to destination
• The application writes each flow entry for the path
Thus the path computation app does not need to worry about topology maintenance.
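A hedged sketch of that traversal, simplified so that "link" edges join switch vertices directly (in the graph above they join port vertices): a plain BFS finds the path, and one flow-entry vertex is written per switch on it.

```java
import java.util.*;
import com.tinkerpop.blueprints.Direction;
import com.tinkerpop.blueprints.Graph;
import com.tinkerpop.blueprints.Vertex;

public class PathComputationSketch {
    // BFS from src to dst over outgoing "link" edges (a simplification).
    static List<Vertex> shortestPath(Vertex src, Vertex dst) {
        Map<Vertex, Vertex> parent = new HashMap<>();
        Deque<Vertex> queue = new ArrayDeque<>();
        parent.put(src, src);
        queue.add(src);
        while (!queue.isEmpty()) {
            Vertex v = queue.poll();
            if (v.equals(dst)) break;
            for (Vertex n : v.getVertices(Direction.OUT, "link")) {
                if (!parent.containsKey(n)) {
                    parent.put(n, v);
                    queue.add(n);
                }
            }
        }
        if (!parent.containsKey(dst)) return null;  // no path found
        LinkedList<Vertex> path = new LinkedList<>();
        for (Vertex v = dst; !v.equals(src); v = parent.get(v)) {
            path.addFirst(v);
        }
        path.addFirst(src);
        return path;
    }

    // One flow-entry vertex per switch on the path, linked to the flow path.
    static void writeFlowEntries(Graph g, Vertex flowPath, List<Vertex> path) {
        for (Vertex sw : path) {
            Vertex entry = g.addVertex(null);
            entry.setProperty("type", "flow_entry");
            g.addEdge(null, flowPath, entry, "flow");  // flow path -> entry
            g.addEdge(null, entry, sw, "switch");      // entry -> its switch
        }
    }
}
```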
Example: A Simpler Abstraction on the Network Graph – Logical Crossbar
[Figure: a virtual logical crossbar whose edge ports map onto the physical network objects – switches, ports, links, and hosts – of the real graph (virtual network objects above, real network objects below)]
• An app or service on top of ONOS
• Maintains the mapping from the simpler virtual objects to the complex physical ones, as sketched below
Thus it makes applications even simpler and enables new abstractions.
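A minimal sketch of the kind of mapping such a service could keep; the class and field names are hypothetical, not part of ONOS:

```java
import java.util.HashMap;
import java.util.Map;

public class LogicalCrossbarSketch {
    // A real attachment point in the physical graph.
    record PhysicalPort(String dpid, int port) {}

    // Each virtual edge port of the "one big switch" maps to a (dpid, port).
    private final Map<Integer, PhysicalPort> edgePortMap = new HashMap<>();

    void mapEdgePort(int virtualPort, String dpid, int physicalPort) {
        edgePortMap.put(virtualPort, new PhysicalPort(dpid, physicalPort));
    }

    // Resolve a virtual crossbar port to the physical attachment point that
    // a path computation service (as above) can connect.
    PhysicalPort resolve(int virtualPort) {
        return edgePortMap.get(virtualPort);
    }
}
```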
Network Graph in Titan and Cassandra
[Figure: a flow path vertex (10 properties) linked by "flow" edges to flow entry vertices (11 properties), each linked to a switch vertex (3 properties)]
• A vertex is represented as a Cassandra row; each property (e.g. dpid, state) is a column
• An edge is also represented as a Cassandra column: the column name carries the label id plus direction and the primary key, and the column value carries the edge id, the adjacent vertex id, signature properties, and other properties
• Row indices enable fast vertex-centric queries
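Because a vertex's edges live as columns in its own Cassandra row, a vertex-centric query reads a single row slice instead of scanning the graph. An illustrative Blueprints 2.x-style query (the "link" label is from the schema above):

```java
import com.tinkerpop.blueprints.Direction;
import com.tinkerpop.blueprints.Vertex;

public class VertexCentricQuerySketch {
    // Fetch only the neighbors reachable over outgoing "link" edges of one
    // switch vertex; Titan serves this from the vertex's own row slice.
    static Iterable<Vertex> neighborsOverLinks(Vertex sw) {
        return sw.query()
                 .direction(Direction.OUT)
                 .labels("link")
                 .vertices();
    }
}
```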
Network Graph and Switches
[Figure: a Switch Manager on each of the three instances holds the OpenFlow (OF) connections to its switches and writes them into the Network Graph ("Network Graph: Switches")]
Network Graph and Link Discovery
[Figure: a Link Discovery (LD) module next to each Switch Manager (SM) sends and receives LLDP packets through the switches and writes the discovered links into the Network Graph ("Network Graph: Links")]
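A hedged sketch of the LLDP mechanism the figure implies: a probe sent out one port that reappears as a PACKET-IN on another switch reveals a link. All types here are illustrative, not actual ONOS classes.

```java
import java.util.ArrayList;
import java.util.List;

public class LinkDiscoverySketch {
    // A unidirectional link between two (dpid, port) attachment points.
    record Link(String srcDpid, int srcPort, String dstDpid, int dstPort) {}

    private final List<Link> discovered = new ArrayList<>();

    // Called when an LLDP probe we sent from (srcDpid, srcPort) arrives
    // as a PACKET-IN on (dstDpid, dstPort).
    public void onLldpPacketIn(String srcDpid, int srcPort,
                               String dstDpid, int dstPort) {
        discovered.add(new Link(srcDpid, srcPort, dstDpid, dstPort));
        // In ONOS this would become a "link" edge between port vertices
        // in the Network Graph rather than an entry in a local list.
    }
}
```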
Devices and Network Graph
[Figure: a Device Manager (DM) on each instance learns hosts from PACKET-IN events and writes the device objects into the Network Graph ("Network Graph: Devices")]
Path Computation with Network Graph
[Figure: a Path Computation (PC) module on each instance computes flow paths (Flows 1–8) over the Network Graph and writes the per-switch flow entries for each flow ("Network Graph: Flow Paths")]
Network Graph and Flow Manager
[Figure: a Flow Manager on each instance reads the flow entries for Flows 1–8 from the Network Graph ("Network Graph: Flows") and sends the corresponding flowmods to the switches it controls]
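A hedged sketch of that last step under assumed interfaces (none of these types are ONOS or Floodlight APIs): the Flow Manager pushes the graph's flow entries down to the switches this instance masters.

```java
import java.util.List;
import java.util.Set;

public class FlowManagerSketch {
    interface NetworkGraph {
        List<FlowEntry> flowEntriesForSwitch(String dpid);
    }
    interface OpenFlowChannel {
        void sendFlowMod(String dpid, FlowEntry entry);  // FLOW_MOD message
    }
    record FlowEntry(String dpid, int inPort, int outPort, String matchMac) {}

    private final NetworkGraph graph;
    private final OpenFlowChannel channel;
    private final Set<String> masteredSwitches;  // from the registry

    FlowManagerSketch(NetworkGraph g, OpenFlowChannel c, Set<String> owned) {
        this.graph = g;
        this.channel = c;
        this.masteredSwitches = owned;
    }

    // Sync the data plane of every switch this instance is master for.
    void syncFlows() {
        for (String dpid : masteredSwitches) {
            for (FlowEntry e : graph.flowEntriesForSwitch(dpid)) {
                channel.sendFlowMod(dpid, e);
            }
        }
    }
}
```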
Consistency Definition
• Strong consistency: upon an update to the network state by one instance, all subsequent reads by any instance return the last updated value.
• Strong consistency adds complexity and latency to distributed data management.
• Eventual consistency is a slight relaxation, allowing readers to be behind for a short period of time.
Strong Consistency using Registry
[Figure: timeline of master election for switch A across the Distributed Network OS (Instances 1–3), the Network Graph, and the Registry]
• Initially the registry and all instances see: Switch A Master = NONE
• A master is elected for switch A; the registry records Switch A Master = ONOS 1
• Once the election propagates, Instances 1, 2, and 3 all read Switch A Master = ONOS 1
• The delay before every instance sees the new master is the cost of locking
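The registry is built on ZooKeeper, so per-switch mastership maps naturally onto a leader election recipe. A minimal sketch using Apache Curator's LeaderLatch; the registry path and instance id are assumptions, not the actual ONOS layout:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SwitchMastershipSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // One latch per switch: the instance holding the latch is master;
        // the others remain candidates and take over if the master dies.
        String dpid = "00:00:00:00:00:00:00:0a";  // illustrative switch id
        LeaderLatch latch = new LeaderLatch(
                client, "/registry/switches/" + dpid, "onos-instance-1");
        latch.start();

        latch.await();  // blocks until this instance becomes master
        System.out.println("Master for switch " + dpid);
        // ... control the switch; if this instance fails, ZooKeeper releases
        // the latch and a candidate's await() returns, making it the master.
    }
}
```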
Why Strong Consistency is Needed for Master Election
• Weaker consistency might mean a master election on instance 1 is not yet visible on the other instances.
• That can lead to multiple masters for a switch.
• Multiple masters would break our semantics of control isolation.
• Strong locking semantics are needed for master election.
Eventual Consistency in Network Graph
[Figure: timeline of switch A's state in the DHT across the Distributed Network OS (Instances 1–3)]
• Initially all instances see Switch A State = INACTIVE
• Switch A connects to ONOS, and one instance writes State = ACTIVE to the DHT
• For a short period, instance 1 reads Switch A = ACTIVE while instances 2 and 3 still read INACTIVE
• Eventually all instances see Switch A State = ACTIVE
Cost of Eventual Consistency
• The short delay means switch A's state is not yet ACTIVE on some ONOS instances in the previous example.
• Applications on one instance will compute flows through switch A while the other instances will not yet use switch A for path computation.
• Eventual consistency becomes more visible during control plane network congestion.
Why is Eventual Consistency Good Enough for Network State?
• Physical network state changes asynchronously.
• Strong consistency across the data and control planes is too hard.
• Control apps know how to deal with eventual consistency: in the current distributed control plane, each router makes its own decisions based on old information from other parts of the network, and it works fine.
• Strong consistency is more likely to lead to an inaccurate view of network state, since network congestion is real.
What is Next for ONOS
ONOS Core
• Performance tuning
• Events, callbacks, and publish/subscribe API
• Expand the graph abstraction for more types of network state
ONOS Apps
• Intelligent control plane: SDN-IP
• Demonstrate two new use cases by the end of the year
Community
• Limited availability release in Q3
• Build and assist a developer community outside ON.LAB
• Support deployments in R&E networks
www.onlab.us