OpenStack Networking Topics


Core OpenStack networking concepts with Quantum. Advanced OpenStack networking design options. OpenStack tenant models and their relation to networking models.


Enterprise DevOps at Scale

OpenStack Networking Topics
6/3/2013
Jeffery Padgett (Gap Inc Infrastructure)
@jbpadgett
Cool Geeks DevOps Meetup

OpenStack Install
There is lots of information on the web from the past few years about the OpenStack project. There are in fact many install guides. There are also many folks and organizations that have made the OpenStack install easy by rolling up web-served shell scripts, cookbooks, manifests, and even distros for getting your own OpenStack.

Assumptions
Most of the instructions, scripts, and distros for OpenStack installs on the web have to make some assumptions about you:
Assumption #1: You are a typical impatient DEV that wants it up NOW. Figure out details later.
Assumption #2: You are an OpenStack n00b and can't comprehend what the heck you are getting into yet.
Assumption #3: You are playing around in a lab or local dev environment using VMs on Vagrant or similar.
Assumption #4: You have smart engineers and network folks to help you out when you intend to use OpenStack on real hardware with real users. In other words, you know enough to dig DEEP on complex topics.

What is Missing?
With all these nice people and organizations on the web making OpenStack easy to install, there is something critical missing:

Proper planning for networking
Adequate instructions for configuring networking in OpenStack

What are the key components in OpenStack Networking?
First things first. Let's break down all the areas where networking will affect your install:
Hypervisor Networking
Network L2/L3 Physical Switch Configuration
OpenStack Network Architecture Options
Desired L3 Routing & IP Addressing Models
OpenStack Tenant Models & Guest Instance Networking

OpenStack Networking Ecosystem Design Dependencies
Let's just be upfront here on OpenStack networking:
This is an ecosystem stack.
One design choice does affect the others.
Lack of planning can and will bite you.

Hypervisor Networking
There are several types of servers in OpenStack. Typically they are all deployed as hypervisors (though not required for all). They all should be configured with as robust a network design as possible.

Controller Nodes
Compute Nodes
Network Nodes

Hypervisor Networking: OpenStack Networks and Physical Hardware
Here is the reference architecture taken straight from the OpenStack documentation.

Hypervisor Networking: Controller Nodes
Controller nodes are the brains of an OpenStack deployment. They communicate with all OpenStack nodes. Networking issues that are important here are:
OpenStack API Network (public/service-facing)
This is a service VLAN/IP range, meaning it should be reachable via the internet or, in the case of private/corp network installs, use an IP range that is L3 reachable on the LAN/WAN.
OpenStack Mgmt Network (backend-facing)
This is a backend-facing VLAN/IP range, meaning it only needs to be reachable by all compute, network, and controller nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.

Hypervisor Networking: Network Nodes
Network nodes are special nodes within OpenStack. They allow the private networks that tenant guest nodes use to either be directly reachable or translated via NAT. In other words, they are the networking brains behind OpenStack, creating bridges among all the compute nodes and the guests they host.
Network nodes have changed significantly with the introduction of the Quantum project. Quantum replaced the legacy Nova Network networking model in OpenStack. Quantum has become more robust and reliable with the Grizzly release, but it still has an HA limitation (no feature parity with Nova-Network multi-host mode).
Basically, nothing happens in OpenStack without a properly functioning and configured network node.
Networking issues that are important here are:
OpenStack External Network (public/service-facing)
This is a service VLAN/IP range, meaning it should be reachable via the internet or, in the case of private/corp network installs, use an IP range that is L3 reachable on the LAN/WAN.
OpenStack Mgmt Network (backend-facing)
This is a backend-facing VLAN/IP range, meaning it only needs to be reachable by all compute, network, and controller nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.
OpenStack Data Network (backend-facing)
This is a backend-facing VLAN/IP range, meaning it only needs to be reachable by all compute and network nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.
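
To make the network node's plumbing concrete, here is a minimal sketch of the Open vSwitch bridge setup a Grizzly-era network node typically needs (br-int and br-ex are the conventional bridge names; eth2 is an assumed NIC facing the external network):

ovs-vsctl add-br br-int          # integration bridge used by the plugin agent
ovs-vsctl add-br br-ex           # external bridge used by the L3 agent for NAT / floating IPs
ovs-vsctl add-port br-ex eth2    # attach the external-facing NIC to br-ex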

Hypervisor Networking: Compute Nodes
Compute nodes are where all the guest virtual machines or instances live in an OpenStack deployment.
The compute nodes must be able to speak on the network to the network nodes.
In the legacy Nova Network days with multi-host mode, you would find the compute and network node services running on every single compute node for HA.
This will likely be the same path for future releases of Quantum, since it provided a robust HA model for deployment.
Networking issues that are important here are:
OpenStack Mgmt Network (backend-facing)
This is a backend-facing VLAN/IP range, meaning it only needs to be reachable by all compute, network, and controller nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.
OpenStack Data Network (backend-facing)
This is a backend-facing VLAN/IP range, meaning it only needs to be reachable by all compute and network nodes within a given availability zone/data center. This means it can be a simple L2 VLAN.

Sample Hypervisor Networking Design Template
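
The template itself is deployment-specific, but as a purely hypothetical illustration (VLAN 100 and 101 are chosen only to line up with the ifcfg examples later in this deck; everything else is invented):

OpenStack API Network        VLAN 100   10.10.100.0/24   routable on the LAN/WAN (controller nodes)
OpenStack Mgmt Network       VLAN 101   10.10.101.0/24   L2 only: controller, network, and compute nodes
OpenStack Data Network       VLAN 102   10.10.102.0/24   L2 only: network and compute nodes
OpenStack External Network   VLAN 103   192.0.2.0/24     public / floating IP range (network nodes)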

OpenStack Networking Architecture Models
Before you can select an appropriate OpenStack networking architecture model, you need to understand the key components. Each of these components provides services that you might expect, and some that you might mistakenly (at least for today) try to build yourself.
Core OpenStack Quantum networking components:
Network: an isolated L2 segment, analogous to a VLAN in physical networking.
Subnet: a block of v4 or v6 IP addresses and associated configuration state.
Port: a connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. It also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
Plugin Agent: local vSwitch configuration.
DHCP Agent: DHCP to tenants.
L3 Agent: Layer 3 + NAT for guest instances.

OpenStack Networking Architecture Models
Plugin Agent: local vSwitch configuration.
Depending on your desired configuration, you can choose to use Open vSwitch for handling your virtual distributed switching traffic, or you can opt to use other vSwitch providers via a plugin architecture.
Some prefer to keep all control plane operations of switching managed on a particular platform and/or by a particular group. This is completely up to you. For purposes of this discussion, we will assume the Open vSwitch approach.
There are a number of vDS plugin options supported by OpenStack today.
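
As a hedged sketch of how these core objects map onto the quantum CLI (the network name and CIDR here are hypothetical):

quantum net-create demo-net                      # Network: an isolated L2 segment
quantum subnet-create demo-net 192.168.50.0/24   # Subnet: an IP block bound to that network
quantum port-create demo-net                     # Port: an attachment point for a single vNIC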

OpenStack Networking Architecture Models
OpenStack (with Quantum) offers 5 primary network architecture design models:
Single Flat Network
Multiple Flat Network
Mixed Flat & Private Network
Provider Router with Private Networks
Per-Tenant Router with Private Networks

OpenStack Networking Architecture Models
Single Flat Network
VM IP addresses exposed to the LAN.
Flat DHCP Manager gives out public IP addresses via dnsmasq on the network node.
Uses an external L3 router.
No floating IPs (NAT) here, since all IPs dished out are public.

OpenStack Networking Architecture Models
Multiple Flat Network
Many L2 VLANs implemented with one or multiple tenants.
VM IP addresses exposed to the LAN.
Flat DHCP Manager gives out public IP addresses via dnsmasq on the network node.
Uses an external L3 router.
No floating IPs (NAT) here, since all IPs dished out are public.

OpenStack Networking Architecture Models
Mixed Flat & Private Network
Private VLANs and public VLANs.
Flat DHCP Manager gives out IP addresses via dnsmasq on the network node.
A VM can perform NAT & routing from private nets to public nets.
Tenants can create local networks just for them.

OpenStack Networking Architecture Models
Provider Router + Private Networks
Floating public IPs.
Tenants can create local networks just for them.
A virtual or physical provider router does NAT for private IPs to public ones using SNAT.
Flat DHCP Manager gives out IP addresses via dnsmasq on the network node.

OpenStack Networking Architecture Models
Per-Tenant Router + Private Networks
Floating public IPs.
Tenants can create local networks just for them.
Each tenant gets a virtual or physical router doing NAT for private IPs to public ones using SNAT.
Flat DHCP Manager gives out IP addresses via dnsmasq on the network node.
The provider still provides a physical router for all public IPs.

Single & Multiple Flat Networking Architecture

Mixed Flat & Private Network

Provider Router + Private Networks

Per-Tenant Routers + Private Networks
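
To make the two router-based models concrete, here is a hedged quantum CLI sketch (the network names, CIDRs, and the placeholder subnet ID are all hypothetical):

quantum net-create ext-net --router:external=True                  # public network owned by the provider
quantum subnet-create ext-net 203.0.113.0/24 --disable-dhcp
quantum router-create provider-router
quantum router-gateway-set provider-router ext-net                 # SNAT out via the external network
quantum net-create tenant-net                                      # a tenant-private network
quantum subnet-create tenant-net 10.0.0.0/24
quantum router-interface-add provider-router <tenant-subnet-id>    # use the ID returned by subnet-create
quantum floatingip-create ext-net                                  # allocate a floating (public) IP

In the per-tenant variant, each tenant runs router-create for its own router instead of sharing provider-router.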

OpenStack Networking Architecture Model Example
Multiple Flat Network Architecture Model EXAMPLE
A simple reference OpenStack network architecture for a typical private company with their own servers and network equipment.
You control the network connectivity.
Your private network is really your private VLANs that are routable within your enterprise.
Mapping VLANs and subnets to tenants becomes conceptually easier to grok.

Sample Tenant-Guest Instances Networking Design Template
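
A filled-in template is deployment-specific; a purely hypothetical mapping might look like:

Tenant            VLAN    Subnet
infra-tenant      200     10.20.0.0/22
team-a-tenant     201     10.20.4.0/24
team-b-tenant     202     10.20.5.0/24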

OpenStack Networking & Tenant Consumption Models
Key Design Concept:
Your tenant, or "consumption model" as I call it, always maps back to a VLAN and associated IP subnets.
So, plan to reserve some ranges of VLANs and IP subnets according to how you think you may want to deploy machine instances.

OpenStack Networking & Tenant Consumption Models
Consumption/Tenant Model Examples:
Monolithic Enterprise Infra Tenant Model
One or multiple networks with associated VLANs/subnets.
This tenant builds things on behalf of everyone, when end-user self-service provisioning is not as important as machines just getting built for customers.
Think of this as using an OpenStack tenant like traditional VMware vCenter.
Group/Org Tenant with Many Users Model
All users for the tenant share a network(s) and its associated VLAN/subnet(s).
Provides a good way to give a bunch of developers in a team the ability to build machines with a shared tenant. They are just users within the tenant.
There is no security segmentation among instances built in this model, meaning a developer can access any other developer's box. This is likely not to matter for a team working together.
Per Group/Per Developer Tenant Model
Give out a full tenant account to every group or developer. With each team/developer getting their own tenant account, their machines are segmented off via tenant security.
If used internally, this can be seen as wasteful for subnet allocations unless they are small.
Each tenant gets their own network and associated VLAN/subnet(s).

OpenStack Networking & Tenant Consumption Models
Consumption/Tenant Models and IP Address Planning (IPAM)
There is a design choice upfront when creating networks: how to map the subnets to a given tenant account. You need to be careful when creating networks to choose the --shared argument if you intend to have multiple tenants on the same IP subnet. If you don't choose this argument, Quantum will assume a single tenant will use that network's subnet(s).

SHARED TENANT ACCOUNT MODEL
Can use dedicated subnets
Can use shared subnets
Usually a larger allocation of IP addresses or subnets

DEDICATED TENANT ACCOUNT MODEL
Can use dedicated subnets
Can use shared subnets
Can be a large or small allocation of IP addresses or subnets

OpenStack Networking & Tenant Consumption Models Example
Network allocations using different tenant consumption models:
Tenant Yoda (shared tenant, multiple users, multiple flat network model)
quantum net-create dagobah-net1 --shared --provider:network_type flat --provider:physical_network dagobah-datacenter1 --router:external=True
quantum subnet-create dagobah-net1 10.10.10.0/24
quantum subnet-create dagobah-net1 10.10.20.0/24

quantum net-create dagobah-net2 --shared --provider:network_type flat --provider:physical_network dagobah-datacenter1 --router:external=True
quantum subnet-create dagobah-net2 10.10.30.0/24
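
For contrast, a dedicated (non-shared) tenant network would omit --shared and, when created by an admin, can be scoped to the owning tenant; the tenant ID, network name, and subnet below are hypothetical:

quantum net-create vader-net1 --tenant-id <vader-tenant-id> --provider:network_type flat --provider:physical_network dagobah-datacenter1
quantum subnet-create vader-net1 10.10.40.0/24 --tenant-id <vader-tenant-id>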

OpenStack Hypervisor Networking
Hypervisor Networking Workflow:
Configure the network switches with dot1q VLAN trunking (a switch-side sketch follows below).
Configure dot1q bond interfaces on each hypervisor for the four required OpenStack VLANs + any other functional VLANs you may want (iSCSI, NAS, etc.) as subinterfaces.
Configure bridge interfaces for each of the VLANs as subinterfaces.

OpenStack Hypervisor Networking: VLANS & BOND INTERFACES
* VLANs are dot1q trunked from the switches to the server NIC ports.
* For every VLAN where the server needs an IP interface, create a bond.x software NIC.
* For any VLAN that just has guest VMs inside it, but where the hypervisor does not need an IP interface, a bond.x software NIC is not needed.
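
For the switch side of step 1, a hedged sketch in Cisco NX-OS-style syntax (the port-channel number, member interface, and VLAN IDs are assumptions; the mode=4 bonding shown later in this deck is 802.3ad/LACP, hence channel-group ... mode active):

interface port-channel10
  switchport mode trunk
  switchport trunk allowed vlan 100-101
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 100-101
  channel-group 10 mode active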

NOTES:
For a channel bonding interface to be valid, the bonding kernel module must be loaded. To ensure that the module is loaded when the channel bonding interface is brought up, create a new file as root named bonding.conf in the /etc/modprobe.d/ directory. Note that you can name this file anything you like as long as it ends with a .conf extension.
Parameters for the bonding kernel module must be specified as a space-separated list in the BONDING_OPTS="bonding parameters" directive in the ifcfg-bondN interface file.
Do not specify options for the bonding device in /etc/modprobe.d/bonding.conf, or in the deprecated /etc/modprobe.conf file.

Add this to the bonding.conf:
alias bond0 bonding

OpenStack Hypervisor Networking: LINUX BRIDGES & BONDING

Bonding makes 2 physical NICs act as one from the Linux perspective.
Cisco Virtual Port Channels (vPC) make both physical NICs active at the same time on 2 different network switches.
Linux bridges let KVM virtual machines with virtual NICs share the same logical bonded NIC with the host OS.
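
As a hedged illustration of that last point, a KVM guest's libvirt domain XML simply references the per-VLAN bridge by name (br100 assumes the VLAN 100 bridge defined later in this deck):

<interface type='bridge'>
  <source bridge='br100'/>   <!-- guest vNIC plugs into the Linux bridge on bond0.100 -->
  <model type='virtio'/>
</interface>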

Do not create Linux bridges between two different VLANs.
Create one Linux bridge per VLAN (avoid mixing different VLANs in a single bridge), e.g. ifcfg-br100 and ifcfg-br101.
This is different from Linux bond interfaces, which facilitate dot1q VLAN trunking for multiple VLANs.
Linux bridges allow the Linux OS to share a bond0.xxx VLAN interface with KVM virtual machines for guest VM networking (e.g. ifcfg-br100 with SLAVE="bond0.100").

OpenStack Hypervisor Networking
Hypervisor ifcfg files for Enterprise Linux distros: /etc/sysconfig/network-scripts/

Physical NICs
ifcfg-em1
ifcfg-em2

Bond Interfaces
ifcfg-bond0
ifcfg-bond0.100
ifcfg-bond0.101

Bridge Interfaces
ifcfg-br100
ifcfg-br101

Physical Interfaces Examples
ifcfg-em1
DEVICE="em1"
MASTER="bond0"
SLAVE="yes"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="none"
IPV6INIT="no"
HWADDR="00:00:00:00:00:00"

ifcfg-em2
DEVICE="em2"
MASTER="bond0"
SLAVE="yes"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
BOOTPROTO="none"
IPV6INIT="no"
HWADDR="00:00:00:00:00:00"

Bond Interfaces Examples
ifcfg-bond0
DEVICE="bond0"
BOOTPROTO="none"
ONBOOT="yes"
TYPE="Ethernet"
BONDING_OPTS="mode=4 miimon=100"
IPV6INIT="no"
MTU="9000"

ifcfg-bond0.100
DEVICE="bond0.100"
ONBOOT="yes"
VLAN="yes"
TYPE="Ethernet"
BOOTPROTO="static"
BRIDGE="br100"

ifcfg-bond0.101
DEVICE="bond0.101"
ONBOOT="yes"
VLAN="yes"
TYPE="Ethernet"
BOOTPROTO="static"
BRIDGE="br101"

Bridge Interfaces Examples
ifcfg-br100
DEVICE="br100"
ONBOOT="yes"
VLAN="yes"
TYPE="Bridge"
SLAVE="bond0.100"
HOSTNAME="yoda1.dagobah.com"
IPADDR="10.10.100.99"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
GATEWAY="10.10.100.1"
IPV6INIT="no"
MTU="1500"

ifcfg-br101
DEVICE="br101"
ONBOOT="yes"
VLAN="yes"
TYPE="Bridge"
SLAVE="bond0.101"
IPADDR="10.10.101.99"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
GATEWAY="10.10.101.1"
IPV6INIT="no"
MTU="1500"
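
Once the ifcfg files are in place, a hedged set of sanity checks (interface names match the examples above):

service network restart          # re-read the ifcfg files (Enterprise Linux style)
cat /proc/net/bonding/bond0      # confirm the 802.3ad bond is up with both slaves active
brctl show                       # confirm bond0.100/bond0.101 are attached to br100/br101
ip addr show br100               # confirm the bridge interface picked up its IP address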

Thanks!
@jbpadgett
http://Padgeblog.com