Juniper Networks OpenStack Neutron Plug-in
Modified: 2017-09-11
Copyright © 2017, Juniper Networks, Inc.
Juniper Networks, Inc.
1133 Innovation Way
Sunnyvale, California 94089
USA
408-745-2000
www.juniper.net
Copyright © 2017 Juniper Networks, Inc. All rights reserved.
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. and/or its affiliates in the United States and other countries. All other trademarks may be property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
Juniper Networks OpenStack Neutron Plug-in
Copyright © 2017 Juniper Networks, Inc. All rights reserved.
The information in this document is current as of the date on the title page.
YEAR 2000 NOTICE
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
END USER LICENSE AGREEMENT
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at http://www.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of that EULA.
Table of Contents
About the Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Documentation and Release Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Supported Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Documentation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Documentation Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Requesting Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Self-Help Online Tools and Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Opening a Case with JTAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
What's New in Juniper OpenStack Neutron Plug-in . . . . . . . . . . . . . . . . . . . . . . . . 15
Overview of the OpenStack Neutron Plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Types of OpenStack Neutron Plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Chapter 2 Installing and Configuring the OpenStack Neutron Plug-in . . . . . . . . . . . . . 19
Installing the OpenStack Neutron Plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Installation Scripts for Juniper Plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Setting Up L2 and L3 Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Setting Up the Physical Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Configuring the ML2 Plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Configuring the ML2 VLAN Plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Configuring ML2 VXLAN Plug-in with EVPN . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Configuring ML2 Driver with Link Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . 33
Configuring ML2 Driver with Multi-Chassis Link Aggregation . . . . . . . . . . . . . 34
Configuring the L3 Plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Configuring the L3 VXLAN Plug-in with EVPN . . . . . . . . . . . . . . . . . . . . . . . . . 36
Configuring the Firewall-as-a-Service (FWaaS) Plug-in . . . . . . . . . . . . . . . . . . . . 38
Configuring a Dedicated Perimeter Firewall . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Configuring High Availability and Virtual Router Redundancy Protocol . . . . . 42
Configuring the VPN-as-a-Service (VPNaaS) Plug-in . . . . . . . . . . . . . . . . . . . . . . 43
Configuring OpenStack Extension for Physical Topology . . . . . . . . . . . . . . . . . . . . 44
Configuring the Static Route Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Configuring EVPN Multi-homing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Configuring EVPN BMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Configuring Network Address Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Chapter 3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Referenced Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
List of Figures
Chapter 2 Installing and Configuring the OpenStack Neutron Plug-in . . . . . . . . . . . . . 19
Figure 1: L2 Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Figure 2: L3 Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Figure 3: IP Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Figure 4: LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Figure 5: MC-LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Figure 6: BGP Peering between all the TORs and Router . . . . . . . . . . . . . . . . . . . . 37
Figure 7: Firewall Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Figure 8: Firewall Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Figure 9: View Physical topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Figure 10: Add Physical Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Figure 11: Add Topology from LLDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Figure 12: Edit Physical Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Figure 13: Delete a Physical Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Figure 14: Delete Multiple Physical Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Figure 15: View Static routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Figure 16: Add a Static route . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Figure 17: Delete Static Routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Figure 18: EVPN Multi-homing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Figure 19: EVPN BMS Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Figure 20: EVPN BMS Plugin Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Figure 21: EVPN BMS Plugin Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Figure 22: OpenStack Accessing the Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Figure 23: Accessing OpenStack from the Internet . . . . . . . . . . . . . . . . . . . . . . . . 56
List of Tables
About the Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Table 1: Notice Icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Table 2: Text and Syntax Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Chapter 2 Installing and Configuring the OpenStack Neutron Plug-in . . . . . . . . . . . . . 19
Table 3: Juniper plug-in packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Table 4: CLI Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Chapter 3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Table 5: References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Table 6: Terminologies and Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
About the Documentation
• Documentation and Release Notes on page ix
• Supported Platforms on page ix
• Documentation Conventions on page ix
• Documentation Feedback on page xi
• Requesting Technical Support on page xii
Documentation and Release Notes
To obtain the most current version of all Juniper Networks® technical documentation,
see the product documentation page on the Juniper Networks website at
http://www.juniper.net/techpubs/.
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at http://www.juniper.net/books.
Supported Platforms
For the features described in this document, the following platforms are supported:
• QFabric System
• QFX Series
• EX Series
• SRX Series
• vSRX
Documentation Conventions
Table 1 on page x defines notice icons used in this guide.
Table 1: Notice Icons
Informational note
  Indicates important features or instructions.
Caution
  Indicates a situation that might result in loss of data or hardware damage.
Warning
  Alerts you to the risk of personal injury or death.
Laser warning
  Alerts you to the risk of personal injury from a laser.
Tip
  Indicates helpful information.
Best practice
  Alerts you to a recommended use or implementation.
Table 2 on page x defines the text and syntax conventions used in this guide.
Table 2: Text and Syntax Conventions
Bold text like this
  Represents text that you type.
  Example: To enter configuration mode, type the configure command:
    user@host> configure

Fixed-width text like this
  Represents output that appears on the terminal screen.
  Example:
    user@host> show chassis alarms
    No alarms currently active

Italic text like this
  Introduces or emphasizes important new terms; identifies guide names;
  identifies RFC and Internet draft titles.
  Examples:
    A policy term is a named structure that defines match conditions and actions.
    Junos OS CLI User Guide
    RFC 1997, BGP Communities Attribute

Italic text like this
  Represents variables (options for which you substitute a value) in commands
  or configuration statements.
  Example: Configure the machine’s domain name:
    [edit]
    root@# set system domain-name domain-name

Text like this
  Represents names of configuration statements, commands, files, and
  directories; configuration hierarchy levels; or labels on routing platform
  components.
  Examples:
    To configure a stub area, include the stub statement at the
    [edit protocols ospf area area-id] hierarchy level.
    The console port is labeled CONSOLE.

< > (angle brackets)
  Encloses optional keywords or variables.
  Example: stub <default-metric metric>;

| (pipe symbol)
  Indicates a choice between the mutually exclusive keywords or variables on
  either side of the symbol. The set of choices is often enclosed in
  parentheses for clarity.
  Examples:
    broadcast | multicast
    (string1 | string2 | string3)

# (pound sign)
  Indicates a comment specified on the same line as the configuration
  statement to which it applies.
  Example: rsvp { # Required for dynamic MPLS only

[ ] (square brackets)
  Encloses a variable for which you can substitute one or more values.
  Example: community name members [ community-ids ]

Indention and braces ( { } )
  Identifies a level in the configuration hierarchy.

; (semicolon)
  Identifies a leaf statement at a configuration hierarchy level.
  Example:
    [edit]
    routing-options {
        static {
            route default {
                nexthop address;
                retain;
            }
        }
    }

GUI Conventions

Bold text like this
  Represents graphical user interface (GUI) items you click or select.
  Examples:
    In the Logical Interfaces box, select All Interfaces.
    To cancel the configuration, click Cancel.

> (bold right angle bracket)
  Separates levels in a hierarchy of menu selections.
  Example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
We encourage you to provide feedback, comments, and suggestions so that we can
improve the documentation. You can provide feedback by using either of the following
methods:
• Online feedback rating system—On any page of the Juniper Networks TechLibrary site
at http://www.juniper.net/techpubs/index.html, simply click the stars to rate the content,
and use the pop-up form to provide us with information about your experience.
Alternately, you can use the online feedback form at
http://www.juniper.net/techpubs/feedback/.
• E-mail—Send your comments to [email protected]. Include the document
or topic name, URL or page number, and software version (if applicable).
Requesting Technical Support
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or Partner Support Service
support contract, or are covered under warranty, and need post-sales technical support,
you can access our tools and resources online or open a case with JTAC.
• JTAC policies—For a complete understanding of our JTAC procedures and policies,
review the JTAC User Guide located at
http://www.juniper.net/us/en/local/pdf/resource-guides/7100059-en.pdf.
• Product warranties—For product warranty information, visit
http://www.juniper.net/support/warranty/.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
Self-Help Online Tools and Resources
For quick and easy problem resolution, Juniper Networks has designed an online
self-service portal called the Customer Support Center (CSC) that provides you with the
following features:
• Find CSC offerings: http://www.juniper.net/customers/support/
• Search for known bugs: https://prsearch.juniper.net/
• Find product documentation: http://www.juniper.net/documentation/
• Find solutions and answer questions using our Knowledge Base: http://kb.juniper.net/
• Download the latest versions of software and review release notes:
http://www.juniper.net/customers/csc/software/
• Search technical bulletins for relevant hardware and software notifications:
http://kb.juniper.net/InfoCenter/
• Join and participate in the Juniper Networks Community Forum:
http://www.juniper.net/company/communities/
• Open a case online in the CSC Case Management tool: http://www.juniper.net/cm/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://entitlementsearch.juniper.net/entitlementsearch/
Opening a Case with JTAC
You can open a case with JTAC on the Web or by telephone.
• Use the Case Management tool in the CSC at http://www.juniper.net/cm/.
• Call 1-888-314-JTAC (1-888-314-5822 toll-free in the USA, Canada, and Mexico).
For international or direct-dial options in countries without toll-free numbers, see
http://www.juniper.net/support/requesting-support.html.
CHAPTER 1
Overview
• What's New in Juniper OpenStack Neutron Plug-in on page 15
• Overview of the OpenStack Neutron Plug-in on page 15
• System Requirements on page 17
What's New in Juniper OpenStack Neutron Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The following features are introduced for the OpenStack Neutron plug-in version 3.0
release:
• VPN-as-a-Service (VPNaaS) support
• Virtual Extensible LAN L3 Routing with Ethernet VPN (VXLAN L3 Routing with EVPN)
• EVPN Multi-homing
• EVPN Bare Metal Server (BMS) support
• Source Network Address Translation (SNAT) and Destination Network Address
Translation (DNAT) support
Related Documentation
• Configuring the VPN-as-a-Service (VPNaaS) Plug-in on page 43
• Configuring EVPN Multi-homing on page 50
• Configuring EVPN BMS on page 52
• Configuring Network Address Translation on page 54
Overview of the OpenStack Neutron Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
OpenStack is a cloud operating system that builds public, private, and hybrid clouds
by using commodity hardware. It offers high performance and throughput. Various network
vendors that specialize in networking gear have used the plug-in mechanism
offered by Neutron to move the L2, L3, firewall, VPN, and load-balancing
services onto their respective networking devices.
Juniper Networks provides OpenStack Neutron plug-ins, which enable integration and
orchestration of EX, MX, QFX, and SRX devices in the customer network.
NOTE: The OpenStack Neutron plug-in has not been tested in a nested virtualization environment.
Types of OpenStack Neutron Plug-in
Neutron plug-ins are categorized as follows:
• Core plug-ins - Implement the core Neutron API, which consists of networking
building blocks such as port, subnet, and network.
Juniper provides the following core plug-ins:
• ML2 VLAN plug-in
• ML2 VXLAN plug-in with EVPN
• Service plug-ins - Implement the Neutron API extensions, such as the L3 router, and L4-L7
services, such as FWaaS, LBaaS, and VPNaaS.
Juniper provides the following service plug-ins:
• Juniper L3 plug-in
• FWaaS plug-in
• VPNaaS plug-in
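After installation, core and service plug-ins are enabled through the standard Neutron configuration files. The fragment below is only an illustration of where those settings live; `core_plugin` and `service_plugins` are standard Neutron options, but the specific values the Juniper installer writes (including any Juniper-specific class paths) may differ and are not shown here.

```ini
# /etc/neutron/neutron.conf -- illustrative fragment only.
[DEFAULT]
core_plugin = ml2
# Generic Neutron service aliases; the Juniper-specific plug-in
# entries are supplied by the Juniper installation script.
service_plugins = router,firewall,vpnaas
```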
Related Documentation
• Configuring the ML2 Plug-in on page 30
• Configuring the L3 Plug-in on page 35
• Configuring the Firewall-as-a-Service (FWaaS) Plug-in on page 38
• Configuring the VPN-as-a-Service (VPNaaS) Plug-in on page 43
• Configuring OpenStack Extension for Physical Topology on page 44
• Configuring the Static Route Extension on page 48
• Configuring EVPN Multi-homing on page 50
• Configuring EVPN BMS on page 52
• Configuring Network Address Translation on page 54
System Requirements
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The Juniper Neutron plug-ins have the following system requirements:
• OpenStack Releases - Liberty and Mitaka.
• Operating Systems - Ubuntu 14 and CentOS 7
• Juniper Network Devices - EX, QFX, SRX, and vSRX series
• Python version 2.7
• External Libraries - ncclient Python library
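The ncclient library listed above is what the plug-ins use to talk NETCONF to Junos devices. The following sketch shows the general shape of that interaction; the helper function, device name, and credentials are illustrative assumptions, not part of the shipped plug-in code.

```python
# Illustrative sketch: how a Neutron driver might build and push a
# VLAN definition to a Junos switch over NETCONF with ncclient.
# The helper below and the device details are hypothetical.
from xml.sax.saxutils import escape

def build_vlan_config(vlan_name, vlan_id):
    """Return a Junos <configuration> payload defining one VLAN."""
    return (
        "<configuration><vlans><vlan>"
        "<name>%s</name><vlan-id>%d</vlan-id>"
        "</vlan></vlans></configuration>" % (escape(vlan_name), int(vlan_id))
    )

print(build_vlan_config("tenant-1000", 1000))

# Pushing the payload requires a reachable switch, so the transport
# step is shown only as a comment:
#
#   from ncclient import manager
#   with manager.connect(host="switch1.juniper.net", port=830,
#                        username="root", password="secret",
#                        hostkey_verify=False) as conn:
#       conn.edit_config(target="candidate",
#                        config="<config>%s</config>"
#                               % build_vlan_config("tenant-1000", 1000))
#       conn.commit()
```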
CHAPTER 2
Installing and Configuring the OpenStack Neutron Plug-in
• Installing the OpenStack Neutron Plug-in on page 19
• Setting Up L2 and L3 Topologies on page 21
• Configuring the ML2 Plug-in on page 30
• Configuring the L3 Plug-in on page 35
• Configuring the Firewall-as-a-Service (FWaaS) Plug-in on page 38
• Configuring the VPN-as-a-Service (VPNaaS) Plug-in on page 43
• Configuring OpenStack Extension for Physical Topology on page 44
• Configuring the Static Route Extension on page 48
• Configuring EVPN Multi-homing on page 50
• Configuring EVPN BMS on page 52
• Configuring Network Address Translation on page 54
Installing the OpenStack Neutron Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
Juniper plug-in binaries are available as RPM packages for CentOS and as Deb packages for Ubuntu.
The plug-in must be installed only on the OpenStack controller node, where the
OpenStack Neutron server is running.
Complete the following steps to install plug-ins on the appropriate operating system:
1. Download the package from
https://www.juniper.net/support/downloads/?p=qpluginopen#sw.
2. Extract the binaries by using the tar -xvf juniper_plugins_version.tgz command. The
extracted folder contains the packages for CentOS and Ubuntu.
The plug-ins are provided as a set of packages. All Neutron drivers and service plug-ins
are provided as a single package and can be installed as follows.
• CentOS
rpm -ivh juniper_plugins_version/CentOS/neutron-plugin-juniper-version.noarch.rpm
• Ubuntu
sudo dpkg -i juniper_plugins_version/Ubuntu/python-neutron-plugin-juniper_version_all.deb
Other software packages provide features such as Horizon GUI extensions, physical
topology plug-in, and a Neutron client extension to support physical topology APIs. You
can install these software packages in a similar manner.
The GUI packages and OpenStack Neutron client extension must be installed on the
server that hosts Horizon.
The downloaded package also includes an installation script, install.sh, which is used to
install the required plug-ins.
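The package paths inside the tarball follow the predictable layout shown above, so a small wrapper can pick the right file for the host OS. This is a sketch only; the "3.0" version string is a placeholder, and the helper function is not part of the Juniper package.

```shell
# Sketch: choose the correct package path for the host OS, following
# the tarball layout described above. "3.0" is a placeholder version.
pkg_for_os() {
  version="$1"
  os="$2"
  case "$os" in
    centos) echo "juniper_plugins_${version}/CentOS/neutron-plugin-juniper-${version}.noarch.rpm" ;;
    ubuntu) echo "juniper_plugins_${version}/Ubuntu/python-neutron-plugin-juniper_${version}_all.deb" ;;
  esac
}

pkg_for_os 3.0 ubuntu
# Install with, for example:
#   sudo dpkg -i "$(pkg_for_os 3.0 ubuntu)"
```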
• Installation Scripts for Juniper Plug-in on page 20
Installation Scripts for Juniper Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The installation script for the Juniper OpenStack extensions is an interactive tool that
walks you through the installation of the Juniper OpenStack Neutron plug-ins.
Before running the installation script, ensure that the following prerequisites are met:
• Password-less SSH authentication is enabled between the controller and all other nodes
of OpenStack.
• keystonerc_admin is present in the home directory.
• Installation is done from the controller node on which the Neutron server is running.
Table 3 on page 20 shows the available Juniper plug-in packages:
Table 3: Juniper plug-in packages
Horizon physical topology plug-in
  Provides the Physical Networks dashboard for the administrator.
Horizon static route plug-in
  Provides the Static Route with preference dashboard.
Horizon BMS plug-in
  Provides the BMS dashboard.
Neutron plug-in
  Provides the Neutron ML2 extension and service plug-ins.
Neutron FWaaS plug-in
  OpenStack Neutron plug-in for FWaaS. It supports both SRX and vSRX devices.
Neutron VPNaaS plug-in
  OpenStack Neutron plug-in for VPNaaS. It supports both SRX and vSRX devices.
Neutron-client plug-in
  Provides the Neutron CLI for physical topology.
The Juniper plug-in packages are classified into the following categories based on the
functionality of the package:
• Neutron server plug-in packages - Neutron plug-in, Neutron FWaaS plug-in, and Neutron
VPNaaS plug-in
OpenStack Neutron server plug-in packages are installed on the OpenStack controller
node, on which the Neutron server runs.
• User Interface packages - Horizon extensions and CLI extensions
User Interface packages are installed on the node running Horizon, and the CLI package is
installed on any node on which the Neutron client is installed.
When you are prompted during the installation, enter the required information to install
the plug-ins appropriately.
Setting Up L2 and L3 Topologies
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
Figure 1 on page 21 illustrates the L2 topology.
Figure 1: L2 Topology
Figure 2 on page 22 shows the switches (Switch 1, Switch 2, and the Aggregation Switch)
that you configure to set up the topology.
To set up a topology, complete the following steps:
1. On the Aggregation Switch, set ge-0/0/1 and ge-0/0/2 interfaces to Trunk All mode.
2. On Switch 1, set ge-0/0/2 interface to Trunk All mode and enable vlan-tagging on
ge-0/0/10 and ge-0/0/20 interfaces.
3. On Switch 2, set ge-0/0/1 interface to Trunk All mode and enable vlan-tagging on
ge-0/0/30 and ge-0/0/20 interfaces.
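On a Junos switch, the three steps above correspond roughly to the following configuration. This is a sketch only: the interface-mode syntax varies by platform and Junos release (older releases use port-mode), and "members all" is an assumption for Trunk All mode.

```
# Aggregation Switch: trunk all VLANs on ge-0/0/1 and ge-0/0/2
set interfaces ge-0/0/1 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members all
set interfaces ge-0/0/2 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/2 unit 0 family ethernet-switching vlan members all

# Switch 1: trunk toward the aggregation switch, VLAN tagging toward hypervisors
set interfaces ge-0/0/2 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/2 unit 0 family ethernet-switching vlan members all
set interfaces ge-0/0/10 vlan-tagging
set interfaces ge-0/0/20 vlan-tagging
```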
Figure 2: L3 Topology
NOTE: In Figure 2 on page 22, the SRX acts as both a router and a firewall.
• Setting Up the Physical Topology on page 22
Setting Up the Physical Topology
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
To set up the physical topology:
1. To configure the Juniper Networks plug-in with the reference physical topology, use
the commands listed in Table 4 on page 23:
Juniper Neutron plug-ins include CLI tools, which enable the administrator to define
the network topology. The plug-ins depend on the topology definition to carry out
network orchestration.
Table 4: CLI Tools
jnpr_device
  Add device details.
jnpr_nic_mapping
  Add a mapping between a physical network alias (for example, physnet1) and the
  corresponding Ethernet interface on the node.
jnpr_switchport_mapping
  Add a mapping between the compute or network node and its Ethernet interface to
  the switch and the port that it is connected to.
jnpr_device_port
  Define the downlink port of the router or firewall on which the routed VLAN
  interface (RVI) for each tenant VLAN is created.
jnpr_allocate_device
  Define the allocation of a router and firewall to a tenant or group of tenants.
jnpr_vrrp_pool
  Define the VRRP pool.
2. To add devices to the topology, run the following command on the OpenStack Neutron
controller:
NOTE: Use a login credential with super-user class privilege on the devices that are added to the topology.
admin@controller:~$ jnpr_device add -d device-name or device-IP-address -c {switch, router, firewall} -u username -p device-password
3. To add and view switches that are added to the topology, run the following command
on the OpenStack Neutron controller:
NOTE: In the physical topology, Switch1 and Switch2 are connected to the hypervisors.
a. To add Switch1 to the topology, run the following command on the OpenStack
Neutron controller:
admin@controller:~$ jnpr_device add -d switch1.juniper.net -c switch -u root -p root-password
+---------------------+---------------+-------------+---------+-------+------+---------------+
| Device              | Ip            | Device Type | model   | login | vtep | vrrp_priority |
+---------------------+---------------+-------------+---------+-------+------+---------------+
| switch1.juniper.net | 10.107.52.136 | switch      | qfx3500 | root  | 0    | 0             |
+---------------------+---------------+-------------+---------+-------+------+---------------+
b. To add Switch2 to the topology, run the following command on the OpenStack
Neutron controller:
admin@controller:~$ jnpr_device add -d switch2.juniper.net -c switch -u root -p password
+---------------------+---------------+-------------+---------+-------+------+---------------+
| Device              | Ip            | Device Type | model   | login | vtep | vrrp_priority |
+---------------------+---------------+-------------+---------+-------+------+---------------+
| switch2.juniper.net | 10.107.52.137 | switch      | qfx3500 | root  | 0    | 0             |
+---------------------+---------------+-------------+---------+-------+------+---------------+
c. To view the switches that are added to the topology, run the following command
on the OpenStack Neutron controller:
admin@controller:~$ jnpr_device list
+---------------------+---------------+-------------+---------+-------+------+---------------+
| Device              | Ip            | Device Type | model   | login | vtep | vrrp_priority |
+---------------------+---------------+-------------+---------+-------+------+---------------+
| switch1.juniper.net | 10.107.52.136 | switch      | qfx3500 | root  | 0    | 0             |
| switch2.juniper.net | 10.107.52.137 | switch      | qfx3500 | root  | 0    | 0             |
+---------------------+---------------+-------------+---------+-------+------+---------------+
4. To add routers to the topology, run the following command on the OpenStack Neutron
controller:
NOTE: In the physical topology shown in Figure 2 on page 22, the SRX acts as both a router and a firewall.
admin@controller:~$ jnpr_device add -d srx.juniper.net -c router -u root -p password
+-----------------+---------------+-------------+-------+-------+------+---------------+
| Device          | Ip            | Device Type | model | login | vtep | vrrp_priority |
+-----------------+---------------+-------------+-------+-------+------+---------------+
| srx.juniper.net | 10.107.23.103 | router      | srx   | root  | 0    | 0             |
+-----------------+---------------+-------------+-------+-------+------+---------------+
5. To add a firewall to the topology, run the following command on the OpenStack Neutron
controller:
admin@controller:~$ jnpr_device add -d srx.juniper.net -c firewall -u root -p password
+-----------------+---------------+-------------+-------+-------+------+---------------+
| Device          | Ip            | Device Type | model | login | vtep | vrrp_priority |
+-----------------+---------------+-------------+-------+-------+------+---------------+
| srx.juniper.net | 10.107.23.103 | firewall    | srx   | root  | 0    | 0             |
| srx.juniper.net | 10.107.23.103 | router      | srx   | root  | 0    | 0             |
+-----------------+---------------+-------------+-------+-------+------+---------------+
6. Define the NIC to physical network mapping for each hypervisor.
In OpenStack, you generally define an alias for the physical network and its associated
bridge by using the following configuration in the /etc/neutron/plugins/ml2/ml2_conf.ini
file on the network node and all the compute nodes:
[ovs]
tenant_network_type = vlan
bridge_mappings = physnet1:br-eth1
Because you can connect the bridge br-eth1 to any physical interface, you must add
the link between the bridge br-eth1 and the physical interface to the topology by
entering the following command:
admin@controller:~$ jnpr_nic_mapping add -H compute-hostname -b physical-network-alias-name -n NIC
a. To add Hypervisor 1 to the topology, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_nic_mapping add -H hypervisor1.juniper.net -b physnet1 -n eth1
Adding mapping
+---------------+------------+------+
| Host          | BridgeName | Nic  |
+---------------+------------+------+
| 10.107.65.101 | physnet1   | eth1 |
+---------------+------------+------+
b. To add Hypervisor 2 to the topology, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_nic_mapping add -H hypervisor2.juniper.net -b physnet1 -n eth1
Adding mapping
+---------------+------------+------+
| Host          | BridgeName | Nic  |
+---------------+------------+------+
| 10.107.65.102 | physnet1   | eth1 |
+---------------+------------+------+
c. To add Hypervisor 5 to the topology, run the following command on the OpenStack Neutron controller:
NOTE: Hypervisor 5 is mapped to physnet1 -- br-eth1 -- eth2.
admin@controller:~$ jnpr_nic_mapping add -H hypervisor5.juniper.net -b physnet1 -n eth2
Adding mapping
+---------------+------------+------+
| Host          | BridgeName | Nic  |
+---------------+------------+------+
| 10.107.65.105 | physnet1   | eth2 |
+---------------+------------+------+
d. To add Hypervisor 6 to the topology, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_nic_mapping add -H hypervisor6.juniper.net -b physnet1 -n eth1
Adding mapping
+---------------+------------+------+
| Host          | BridgeName | Nic  |
+---------------+------------+------+
| 10.107.65.106 | physnet1   | eth1 |
+---------------+------------+------+
e. To add the network node to the topology, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_nic_mapping add -H networknode.juniper.net -b physnet1 -n eth1
Adding mapping
+---------------+------------+------+
| Host          | BridgeName | Nic  |
+---------------+------------+------+
| 10.108.10.100 | physnet1   | eth1 |
+---------------+------------+------+
f. To view all the mappings, run the following command on the OpenStack Neutron
controller:
admin@controller:~$ jnpr_nic_mapping list
+---------------+------------+------+
| Host          | BridgeName | Nic  |
+---------------+------------+------+
| 10.107.65.101 | physnet1   | eth1 |
| 10.107.65.102 | physnet1   | eth1 |
| 10.107.65.105 | physnet1   | eth2 |
| 10.107.65.106 | physnet1   | eth1 |
| 10.108.10.100 | physnet1   | eth1 |
+---------------+------------+------+
7. Define the mapping from the compute nodes to the switches.
To configure the VLANs on the switches, the ML2 plug-in must determine the switch port to which the hypervisor is connected through its Ethernet interface. This gives the plug-in an overall view of the topology: physnet1 -- br-eth1 -- eth1 -- Switch-x:ge-0/0/x. You can provide this information either by enabling LLDP, or by configuring it with the following command:
admin@controller:~$ jnpr_switchport_mapping add -H compute-hostname -n NIC -s switch-IP-address-or-switch-name -p switch-port
a. To map Hypervisor 1 to Switch 1, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_switchport_mapping add -H hypervisor1.juniper.net -n eth1 -s switch1.juniper.net -p ge-0/0/10
Database updated with switch port binding
+---------------+------+---------------+-----------+-----------+
| Host          | Nic  | Switch        | Port      | Aggregate |
+---------------+------+---------------+-----------+-----------+
| 10.107.65.101 | eth1 | 10.107.52.136 | ge-0/0/10 |           |
+---------------+------+---------------+-----------+-----------+
b. To map Hypervisor 2 to Switch 1, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_switchport_mapping add -H hypervisor2.juniper.net -n eth1 -s switch1.juniper.net -p ge-0/0/20
Database updated with switch port binding
+---------------+------+---------------+-----------+-----------+
| Host          | Nic  | Switch        | Port      | Aggregate |
+---------------+------+---------------+-----------+-----------+
| 10.107.65.102 | eth1 | 10.107.52.136 | ge-0/0/20 |           |
+---------------+------+---------------+-----------+-----------+
c. To map Hypervisor 5 to Switch 2, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_switchport_mapping add -H hypervisor5.juniper.net -n eth2 -s switch2.juniper.net -p ge-0/0/20
Database updated with switch port binding
+---------------+------+---------------+-----------+-----------+
| Host          | Nic  | Switch        | Port      | Aggregate |
+---------------+------+---------------+-----------+-----------+
| 10.107.65.105 | eth2 | 10.107.52.137 | ge-0/0/20 |           |
+---------------+------+---------------+-----------+-----------+
d. To map Hypervisor 6 to Switch 2, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_switchport_mapping add -H hypervisor6.juniper.net -n eth1 -s switch2.juniper.net -p ge-0/0/30
Database updated with switch port binding
+---------------+------+---------------+-----------+-----------+
| Host          | Nic  | Switch        | Port      | Aggregate |
+---------------+------+---------------+-----------+-----------+
| 10.107.65.106 | eth1 | 10.107.52.137 | ge-0/0/30 |           |
+---------------+------+---------------+-----------+-----------+
e. To map the Network Node to Switch 2, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_switchport_mapping add -H networknode.juniper.net -n eth1 -s switch2.juniper.net -p ge-0/0/5
Database updated with switch port binding
+---------------+------+---------------+----------+-----------+
| Host          | Nic  | Switch        | Port     | Aggregate |
+---------------+------+---------------+----------+-----------+
| 10.108.10.100 | eth1 | 10.107.52.137 | ge-0/0/5 |           |
+---------------+------+---------------+----------+-----------+
f. To list all mappings, run the following command on the OpenStack Neutron
controller:
admin@controller:~$ jnpr_switchport_mapping list
+---------------+------+---------------+-----------+-----------+
| Host          | Nic  | Switch        | Port      | Aggregate |
+---------------+------+---------------+-----------+-----------+
| 10.107.65.101 | eth1 | 10.107.52.136 | ge-0/0/10 |           |
| 10.107.65.102 | eth1 | 10.107.52.136 | ge-0/0/20 |           |
| 10.107.65.105 | eth2 | 10.107.52.137 | ge-0/0/20 |           |
| 10.107.65.106 | eth1 | 10.107.52.137 | ge-0/0/30 |           |
| 10.108.10.100 | eth1 | 10.107.52.137 | ge-0/0/5  |           |
+---------------+------+---------------+-----------+-----------+
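Together, steps 6 and 7 give the plug-in the end-to-end view physnet1 -- br-eth1 -- NIC -- switch:port. A minimal Python sketch of how the two mapping tables can be joined is shown below; the data mirrors the examples above, but the function and field names are illustrative and not part of the plug-in:

```python
# Illustrative sketch only: join the NIC-mapping and switch-port-mapping
# tables to derive the end-to-end path the ML2 plug-in needs per host.

def end_to_end_paths(nic_mappings, switchport_mappings):
    """Return {host: (physnet, nic, switch, port)} for every host that
    appears in both tables with the same NIC."""
    ports = {(m["host"], m["nic"]): m for m in switchport_mappings}
    paths = {}
    for m in nic_mappings:
        key = (m["host"], m["nic"])
        if key in ports:
            sp = ports[key]
            paths[m["host"]] = (m["physnet"], m["nic"], sp["switch"], sp["port"])
    return paths

# Data taken from the example tables above.
nic_mappings = [
    {"host": "10.107.65.101", "physnet": "physnet1", "nic": "eth1"},
    {"host": "10.107.65.105", "physnet": "physnet1", "nic": "eth2"},
]
switchport_mappings = [
    {"host": "10.107.65.101", "nic": "eth1", "switch": "10.107.52.136", "port": "ge-0/0/10"},
    {"host": "10.107.65.105", "nic": "eth2", "switch": "10.107.52.137", "port": "ge-0/0/20"},
]

print(end_to_end_paths(nic_mappings, switchport_mappings)["10.107.65.101"])
```

In the actual plug-in these tables live in the Neutron database; the sketch only illustrates why both mappings are required before VLANs can be pushed to the right switch ports.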
8. Define the downlink port on the SRX device (router) on which the RVIs are created by the plug-in.
You can update the plug-in database with the port on the SRX device to which the
Aggregation Switch is connected by using the following command:
admin@controller:~$ jnpr_device_port add -d SRX-device-name-or-IP-address -p srx-port-name -t port-type
a. To add the downlink port of the SRX device to the topology, run the following
command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_device_port add -d srx.juniper.net -p ge-0/0/10 -t Downlink
+---------------+-----------+---------------+
| Device        | port      | port_type     |
+---------------+-----------+---------------+
| 10.107.23.103 | ge-0/0/10 | downlink_port |
+---------------+-----------+---------------+
9. Create a VRRP pool.
The L3 plug-in supports high availability via VRRP. To use this functionality, you must create a VRRP pool and assign only one of the devices in the pool to a tenant by using the jnpr_allocate_device command.
Complete the following steps to create a VRRP pool:
a. To add routers, run the following command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_device add -d 10.20.30.40 -c router -u root -p password
+-------------+-------------+-------------+-------+-------+------+---------------+
| Device      | Ip          | Device Type | model | login | vtep | vrrp_priority |
+-------------+-------------+-------------+-------+-------+------+---------------+
| 10.20.30.40 | 10.20.30.40 | router      | srx   | root  | 0    | 0             |
+-------------+-------------+-------------+-------+-------+------+---------------+

admin@controller:~$ jnpr_device add -d 10.20.30.41 -c router -u root -p password
+-------------+-------------+-------------+-------+-------+------+---------------+
| Device      | Ip          | Device Type | model | login | vtep | vrrp_priority |
+-------------+-------------+-------------+-------+-------+------+---------------+
| 10.20.30.41 | 10.20.30.41 | router      | srx   | root  | 0    | 0             |
+-------------+-------------+-------------+-------+-------+------+---------------+
b. To create VRRP pools, run the following command on the OpenStack Neutron
controller:
admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.40 -p tenant1_pool1
+-------------+----------------+
| Device ID   | VRRP POOL NAME |
+-------------+----------------+
| 10.20.30.40 | tenant1_pool1  |
+-------------+----------------+

admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.41 -p tenant1_pool1
+-------------+----------------+
| Device ID   | VRRP POOL NAME |
+-------------+----------------+
| 10.20.30.41 | tenant1_pool1  |
+-------------+----------------+

admin@controller:~$ jnpr_vrrp_pool list
+-------------+----------------+
| Device ID   | VRRP POOL NAME |
+-------------+----------------+
| 10.20.30.40 | tenant1_pool1  |
| 10.20.30.41 | tenant1_pool1  |
+-------------+----------------+
c. To define allocation of devices to a tenant or a group of tenants, run the following
command on the OpenStack Neutron controller:
admin@controller:~$ jnpr_allocate_device add -t tenant-project_id -d device-hostname-or-IP-address

admin@controller:~$ jnpr_allocate_device add -t e0d6c7d2e25943c1b4460a4f471c033f -d 10.20.30.40
+----------------------------------+-------------+
| Tenant ID                        | Device IP   |
+----------------------------------+-------------+
| e0d6c7d2e25943c1b4460a4f471c033f | 10.20.30.40 |
+----------------------------------+-------------+
To use a device as the default device for multiple tenants, run the following command on the OpenStack Neutron controller with the tenant set to default. For example:
admin@controller:~$ jnpr_allocate_device add -t default -d 10.20.30.40
+-----------+-------------+
| Tenant ID | Device IP   |
+-----------+-------------+
| default   | 10.20.30.40 |
+-----------+-------------+
Configuring the ML2 Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
Juniper ML2 drivers support the following types of virtual networks:
• VLAN-based networks

• VXLAN-based tunneled networks with EVPN, by using hierarchical port binding

By using features such as LAG and MC-LAG, ML2 drivers support the orchestration of aggregated links that connect the OpenStack nodes to the ToR switches.
• Configuring the ML2 VLAN Plug-in on page 30
• Configuring ML2 VXLAN Plug-in with EVPN on page 31
• Configuring ML2 Driver with Link Aggregation on page 33
• Configuring ML2 Driver with Multi-Chassis Link Aggregation on page 34
Configuring the ML2 VLAN Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The Juniper ML2 VLAN plug-in supports configuring the VLAN of each tenant network on the corresponding switch port attached to the compute node. VM migration is supported from OpenStack Neutron version 2.7.1 onwards.
• Supported Devices
EX series and QFX series devices.
• Plug-in Configuration
To configure OpenStack Neutron to use VLAN type driver:
1. On the OpenStack Controller, open the file /etc/neutron/neutron.conf and update it as follows:

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

or

core_plugin = neutron.plugins.ml2.plugin_pt_ext.Ml2PluginPtExt
2. On the OpenStack Controller, update the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini to set the Juniper ML2 plug-in as the mechanism driver:

[ml2]
type_drivers = vlan
mechanism_drivers = openvswitch,juniper
tenant_network_types = vlan
3. Specify the VLAN range and the physical network alias to be used.
[ml2_type_vlan]
network_vlan_ranges = physnet1:1001:1200
4. Restart the OpenStack neutron server for the changes to take effect.
5. Log in to the OpenStack GUI, create a VLAN-based network, and launch VMs. You can verify that the VLAN IDs of the OpenStack network are created on the switch and mapped to the interfaces that are configured through the jnpr_switchport_mapping command.
Configuring ML2 VXLAN Plug-in with EVPN
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The ML2 EVPN driver is based on the Neutron hierarchical port binding design. It configures the ToR switches as VXLAN tunnel endpoints (VTEPs), which are used to extend a VLAN-based L2 domain across routed networks.
Figure 3: IP Fabric

(Figure: three QFX5100 ToR switches (ToR1, ToR2, ToR3) acting as BGP peers; each connects over ge-0/0/0 to the eth1/br-eth1/br-int stack on the Network 1, Compute 1, and Compute 2 nodes.)
To provide L2 connectivity between the network ports and compute nodes, the L2 packets are tagged with VLANs and sent to the Top of Rack (ToR) switch. The VLANs used to tag the packets are only locally significant to the ToR switch (switch-local VLANs).
At the ToR switch, the switch-local VLAN is mapped to a global VXLAN ID. The L2 packets are encapsulated into VXLAN packets and sent to the virtual tunnel endpoint (VTEP) on the destination node, where they are de-encapsulated and sent to the destination VM.
To make L2 connectivity between the endpoints work with VXLAN, each endpoint must be informed about the presence of the destination VM and VTEP. EVPN uses a BGP-based control plane to learn this information. The plug-in assumes that the ToRs are set up with BGP-based peering.
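The switch-local VLAN to global VXLAN ID relationship described above can be sketched as a small binding table: each tenant network keeps one global VNI, while every ToR allocates its own locally significant VLAN for that network. This is an illustrative model of hierarchical port binding, not the actual driver code, and the class and method names are hypothetical:

```python
# Illustrative sketch of hierarchical port binding (hypothetical names):
# one global VNI per network, one locally allocated VLAN per (ToR, VNI).

class EvpnBindingTable:
    def __init__(self, vlan_start=100, vlan_end=199):
        self.vlan_pool = {}    # ToR -> next free switch-local VLAN
        self.bindings = {}     # (ToR, VNI) -> switch-local VLAN
        self.vlan_start, self.vlan_end = vlan_start, vlan_end

    def bind(self, tor, vni):
        """Return the switch-local VLAN mapped to the global VNI on this ToR."""
        key = (tor, vni)
        if key not in self.bindings:
            vlan = self.vlan_pool.get(tor, self.vlan_start)
            if vlan > self.vlan_end:
                raise RuntimeError("VLAN pool exhausted on %s" % tor)
            self.vlan_pool[tor] = vlan + 1
            self.bindings[key] = vlan
        return self.bindings[key]

table = EvpnBindingTable()
# The same network (VNI 5001) gets independent local VLANs on each ToR,
# while a second network (VNI 5002) gets the next free VLAN on ToR1.
print(table.bind("ToR1", 5001), table.bind("ToR2", 5001), table.bind("ToR1", 5002))
```

This is why the [ml2_type_vlan] ranges in the EVPN configuration are defined per ToR management IP: each switch draws its local VLANs from its own pool.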
Refer to Junos documentation for configuring BGP on the ToR switches available at BGP
Configuration Overview.
Supported Devices
QFX 5100 only
Plug-in Configuration
To install the Juniper neutron plug-in on the neutron server node, complete the following
steps:
1. Edit the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini to add the following configuration for the EVPN driver:
[ml2]
type_drivers = vlan,vxlan,evpn
tenant_network_types = vxlan
mechanism_drivers = jnpr_evpn,openvswitch

[ml2_type_vlan]
network_vlan_ranges = <ToR_MGMT_IP_SWITCH1>:<vlan-start>:<vlan-end>,<ToR_MGMT_IP_SWITCH2>:<vlan-start>:<vlan-end>,<ToR_MGMT_IP_SWITCH3>:<vlan-start>:<vlan-end>

[ml2_type_vxlan]
vni_ranges = <vni-start>:<vni-end>
2. Restart the OpenStack neutron server to load the EVPN ML2 driver.
3. Update the plug-in topology database. To add a ToR switch and two compute nodes
connected to it, run the following commands on the neutron server node:
admin@controller:~$ jnpr_device add -d ToR1 -u root -p root-password -c switch
admin@controller:~$ jnpr_switchport_mapping add -H Compute1 -n eth1 -s tor1_mgmt_ip-address -p ge-0/0/1
admin@controller:~$ jnpr_switchport_mapping add -H Compute2 -n eth1 -s tor2_mgmt_ip-address -p ge-0/0/1
admin@controller:~$ jnpr_switchport_mapping add -H Network1 -n eth1 -s tor3_mgmt_ip-address -p ge-0/0/1
4. Update the OVS L2 agent on all compute and network nodes.

All the compute and network nodes need to be updated with the bridge mapping. The physical network name in the bridge mapping must be the management IP address of the ToR switch.
5. Edit the file /etc/neutron/plugins/ml2/ml2_conf.ini (Ubuntu) or /etc/neutron/plugins/ml2/openvswitch_agent.ini (CentOS) to add the following:

[ovs]
bridge_mappings = <ToR_MGMT_IP>:br-eth1
Here, br-eth1 is an OVS bridge that enslaves the eth1 physical port on the OpenStack node connected to ToR1. It provides the physical network connectivity for tenant networks.
6. Restart the OVS agent on all the compute and network nodes.
7. Log in to the OpenStack GUI to create a virtual network and launch VMs. You can verify that switch-local VLANs are created on each ToR and mapped to the VXLAN IDs of the OpenStack network.
Configuring ML2 Driver with Link Aggregation
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
You can use link aggregation (LAG) between an OpenStack compute node and a Juniper switch to improve network resiliency.
Plug-in Configuration
Figure 4 on page 33 describes the connectivity of an OpenStack compute node to the Top of Rack (ToR) switch.
Figure 4: LAG

(Figure: an OpenStack node connected to a Juniper switch through a LAG.)
To configure LAG on Juniper ToR switches, refer to the following link:
Configuring Aggregated Ethernet Links (CLI Procedure)
To configure LAG on the OpenStack compute node:
1. Connect the data ports of the OpenStack node to the LAG interface on the Juniper switches. For example, connect eth1 and eth2 for LAG as follows:
admin@controller:~$ ovs-vsctl add-bond br-eth1 bond0 eth1 eth2 lacp=active
2. Create the NIC mapping:
admin@controller:~$ jnpr_nic_mapping add -H openstack-node-name-or-ip-address -b physnet1 -n nic
admin@controller:~$ jnpr_nic_mapping add -H openstack-node-name-or-ip-address -b physnet1 -n nic
admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth1
admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth2
3. Add switch port mapping with aggregate interface:
admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth1 -s switch-name-or-ip-address -p port -a lag-name
admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth2 -s switch-name-or-ip-address -p port -a lag-name
admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth1 -s dc-nm-qfx3500-b -p ge-0/0/2 -a ae0
admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth2 -s dc-nm-qfx3500-b -p ge-0/0/3 -a ae0
4. Verify switch port mapping with aggregate details:
admin@controller:~$ jnpr_switchport_mapping list
+---------------+------+-----------------+----------+-----------+
| Host          | Nic  | Switch          | Port     | Aggregate |
+---------------+------+-----------------+----------+-----------+
| 10.207.67.144 | eth1 | dc-nm-qfx3500-b | ge-0/0/2 | ae0       |
| 10.207.67.144 | eth2 | dc-nm-qfx3500-b | ge-0/0/3 | ae0       |
+---------------+------+-----------------+----------+-----------+
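Rows of the switch port mapping that share an Aggregate value describe members of the same LAG, so the plug-in can configure the aggregated interface instead of the individual member ports. A minimal Python sketch of how such rows could be grouped; the helper function is hypothetical and the data mirrors the example above:

```python
# Illustrative sketch (hypothetical helper, not part of the CLI tools):
# group switch-port-mapping rows by their aggregate interface name.
from collections import defaultdict

def lag_members(switchport_mappings):
    """Return {aggregate: [(switch, port), ...]} for rows that carry one."""
    lags = defaultdict(list)
    for m in switchport_mappings:
        if m.get("aggregate"):
            lags[m["aggregate"]].append((m["switch"], m["port"]))
    return dict(lags)

rows = [
    {"host": "10.207.67.144", "nic": "eth1", "switch": "dc-nm-qfx3500-b",
     "port": "ge-0/0/2", "aggregate": "ae0"},
    {"host": "10.207.67.144", "nic": "eth2", "switch": "dc-nm-qfx3500-b",
     "port": "ge-0/0/3", "aggregate": "ae0"},
]
print(lag_members(rows))
```

The same grouping applies unchanged to MC-LAG in the next section, where the two member ports belong to two different switches.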
Configuring ML2 Driver with Multi-Chassis Link Aggregation
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
You can configure Multi-Chassis Link Aggregation (MC-LAG) and use it with Juniper
Neutron plug-ins.
Figure 5: MC-LAG

(Figure: an OpenStack node connected to two Juniper switches through an MC-LAG.)
Plug-in Configuration
To configure MC-LAG on Juniper switches, refer to the following link:
Configuring Multichassis Link Aggregation on EX Series Switches
To configure LAG on the OpenStack compute node:
1. Connect the data ports of the OpenStack node to the LAG interfaces on two different Juniper switches. For example, connect eth1 and eth2 for LAG as follows:
admin@controller:~$ ovs-vsctl add-bond br-eth1 bond0 eth1 eth2 lacp=active
2. Create the NIC mapping:
admin@controller:~$ jnpr_nic_mapping add -H openstack-node-name-or-ip-address -b physnet1 -n nic
admin@controller:~$ jnpr_nic_mapping add -H openstack-node-name-or-ip-address -b physnet1 -n nic
admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth1
admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth2
3. Add switch port mapping with aggregate interface on the OpenStack controller:
admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth1 -s switch-name-or-ip-address -p port -a lag-name
admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth2 -s switch-name-or-ip-address -p port -a lag-name
admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth1 -s dc-nm-qfx3500-b -p ge-0/0/2 -a ae0
admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth2 -s dc-nm-qfx3500-b -p ge-0/0/3 -a ae0
4. Verify switch port mapping with aggregate details:
admin@controller:~$ jnpr_switchport_mapping list
+---------------+------+-----------------+----------+-----------+
| Host          | Nic  | Switch          | Port     | Aggregate |
+---------------+------+-----------------+----------+-----------+
| 10.207.67.144 | eth1 | dc-nm-qfx3500-a | ge-0/0/2 | ae0       |
| 10.207.67.144 | eth2 | dc-nm-qfx3500-b | ge-0/0/3 | ae0       |
+---------------+------+-----------------+----------+-----------+
Configuring the L3 Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
Juniper L3 plug-in supports the following features:
• L3 routing
• Adding static routes
• Router high availability via VRRP
• Routed external networks
Supported Devices
EX, QFX, SRX, and vSRX series devices.
Plug-in Configuration
To configure the L3 plug-in:
1. Update the configuration file /etc/neutron/neutron.conf as follows:
[DEFAULT]
service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin
2. Restart neutron-server for the changes to take effect.
3. Log in to the OpenStack GUI, create networks, and assign them to a router. On the device configured as the router, you can view the routing instance and the routing virtual interfaces (RVIs) corresponding to the networks assigned to the routing instance.
• Configuring the L3 VXLAN Plug-in with EVPN on page 36
Configuring the L3 VXLAN Plug-in with EVPN
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The Juniper VXLAN EVPN ML2 plug-in uses VXLAN tunnels along with the Neutron hierarchical port binding design to provide L2 networks in OpenStack.

The default L3 service plug-in in OpenStack implements virtual routers using Linux network namespaces.
This release of Neutron plug-ins from Juniper Networks adds support for L3 routing for VXLAN networks. This is done by creating a VTEP on MX and QFX10000 series devices to convert the VXLAN-based network to a VLAN-based network, and by configuring routing instances to route packets between these VLANs.

This feature works in conjunction with the Juniper Networks VXLAN EVPN ML2 plug-in: the ML2 plug-in provides L2 connectivity, while the VXLAN EVPN L3 service plug-in provides L3 routing between the VXLAN-based virtual networks.
Figure 6: BGP Peering Between All the ToRs and the Router

(Figure: an IP fabric of three QFX5K ToR switches (ToR1, ToR2, ToR3) and an MX/QFX10K router, with BGP peering between all the ToRs and the router; each ToR connects over ge-0/0/0 to the eth1/br-eth1/br-int stack on the Network 1, Compute 1, and Compute 2 nodes.)
Supported Devices
The VXLAN EVPN L3 service plug-in can orchestrate MX and QFX10K devices to provide L3-based routing between VLAN-based networks.

EVPN is supported on MX and QFX10K devices running version 14.2R6.
Plug-in Configuration
The EVPN L3 plug-in depends on the VXLAN EVPN ML2 plug-in for orchestration of Layer 2 ports; configuration of the ML2 VXLAN EVPN plug-in is a prerequisite for this plug-in. To configure the ML2 VXLAN EVPN plug-in, refer to Configuring ML2 VXLAN Plug-in with EVPN.
To configure the VXLAN EVPN L3:
1. Edit the /etc/neutron/neutron.conf file and update the service plug-in definition as
follows:
[DEFAULT]
…
service_plugins = neutron.services.juniper_l3_router.evpn.l3.JuniperL3Plugin
…
2. Add the QFX10K or MX device as a physical router to the plug-ins topology database:
admin@controller:~$ jnpr_device add -d QFX10K/MX-IP-address -c router -u root -p root_password -t vtep_ip
3. Update the VLAN allocation range in the /etc/neutron/plugins/ml2/ml2_conf.ini file and add the per-device VLAN range for the physical router. You can do this by adding the IP address of the routing device followed by its VLAN range as follows:
[ml2_type_vlan]
network_vlan_ranges = <ToR_MGMT_IP_SWITCH1>:vlan-start:vlan-end,…,<MGMT_IP_ROUTER>:vlan-start:vlan-end
Make sure that the VXLAN range is configured with the correct VNI range.
[ml2_type_vxlan]
tunnel_id_ranges = vxlan-start:vxlan-end
4. Verify that the EVPN L3 plug-in is functioning properly. Restart the neutron server and create a virtual router by using the Horizon dashboard or the CLI. This creates a routing instance on the configured routing device (QFX10K/MX).
Configuring the Firewall-as-a-Service (FWaaS) Plug-in
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
Juniper Networks Firewall-as-a-Service (FWaaS) plug-in builds on top of the Juniper ML2 and L3 plug-ins. It enables Neutron to configure firewall rules and policies on SRX and vSRX devices. In OpenStack, a tenant can create a firewall and assign a security policy to it. A security policy is a collection of firewall rules. Figure 7 on page 38 illustrates the relationship between firewall rules, firewall policies, and firewalls.
Figure 7: Firewall Policy

(Figure: one or more firewall rules are grouped into a firewall policy, which is assigned to a firewall.)
Firewall Rule - Defines the source address and port(s), destination address and port(s), protocol, and the action to be taken on the matching traffic.

Firewall Policy - A collection of firewall rules.

Firewall - Represents a firewall device.
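The rule, policy, and firewall relationship above can be modeled as a small data structure. This is an illustrative data model only; the real objects are Neutron FWaaS database records, and these class and field names are hypothetical:

```python
# Illustrative data model only (not Neutron FWaaS code): many rules form
# one policy, and a policy is applied to a firewall.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FirewallRule:
    protocol: str
    destination_port: str
    action: str              # "allow" or "deny"

@dataclass
class FirewallPolicy:
    name: str
    rules: List[FirewallRule] = field(default_factory=list)

@dataclass
class Firewall:
    name: str
    policy: FirewallPolicy

# Example: allow web traffic, deny everything else, for one tenant firewall.
web = FirewallRule(protocol="tcp", destination_port="80", action="allow")
deny_all = FirewallRule(protocol="any", destination_port="any", action="deny")
policy = FirewallPolicy(name="tenant1-policy", rules=[web, deny_all])
fw = Firewall(name="tenant1-fw", policy=policy)
print(len(fw.policy.rules), fw.policy.rules[0].action)
```

Rule order matters in such a model: traffic is matched against the policy's rules in sequence, so the broad deny rule comes last.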
When you enable the FWaaS plug-in, the SRX or vSRX must act as both a router and a firewall. The administrator must ensure this while setting up the topology.
Supported Devices
SRX and vSRX series devices
Plug-in Configuration
NOTE: Before proceeding further, ensure that the following prerequisites are met:

• The topology is set up.

• Devices are added to the jnpr_devices table.

• The compute NIC → physical network alias mapping is added to the jnpr_nic_mapping table.

• The compute → switch connectivity is captured in the jnpr_switchport_mapping table (needed for L2 VLAN orchestration).

• The L2 plug-in is set up. This is optional if you are using a third-party ML2 plug-in.

• The L3 plug-in is set up to use the SRX/vSRX as the router.
To configure Neutron to use the Juniper FWaaS service plug-in:
1. Update the Neutron configuration file /etc/neutron/neutron.conf and append the following to service_plugins:

service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin, neutron_fwaas.services.juniper_fwaas.dmi.fwaas.JuniperFwaaS
2. Add firewall to the topology:
admin@controller:~$ jnpr_device add -d dns-name-or-device-ip-address -c firewall -u root-user -p root_password
3. Define the downlink trunk port on the SRX device on which the RVIs are created by the plug-in.

Update the plug-in database with the port on the SRX device to which the Aggregation Switch is connected:
admin@controller:~$ jnpr_device_port add -d srx-device-name-or-switch-ip-address -p port-on-the-srx -t port-type
For example:
admin@controller:~$ jnpr_device_port add -d srx1 -p ge-0/0/1 -t Downlink
4. Allocate the firewall to a tenant or as a default for all tenants:
admin@controller:~$ jnpr_allocate_device add -t project_id -d SRX/vSRX-ip-address
To allocate the firewall as a default to all the tenants that do not have a firewall allocated to them, use the following command:
admin@controller:~$ jnpr_allocate_device add -t default -d SRX/vSRX-ip-address
5. Enable Horizon to show the Firewall panel.
To display the Firewall panel under the Networks group in the Horizon user interface,
open /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py,
and update the following configuration:
enable_firewall: True
6. After completing the FWaaS plug-in configuration, restart the following:
• Neutron-Server
• Ubuntu – service neutron-server restart
• CentOS – systemctl restart neutron-server
• Apache (restarts Horizon)
• Ubuntu – service apache2 restart
• CentOS – systemctl restart httpd
7. From the Horizon GUI, create firewall rules and associate them with a policy. Create a firewall and assign the routers and the firewall policy to it.
8. On the SRX, you can verify that a firewall zone is created for each routing instance and that the corresponding policies are pushed within the zones.
• Configuring a Dedicated Perimeter Firewall on page 40
• Configuring High Availability and Virtual Router Redundancy Protocol on page 42
Configuring a Dedicated Perimeter Firewall
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The requirements of each tenant vary according to the performance and cost of the firewall. For example, a tenant may need a dedicated firewall for better performance and compliance, a lower-cost firewall enabled by sharing network resources, or a firewall with complete administrative access to the networking device so as to leverage the advanced services provided by the device. To meet the different requirements of each tenant, the cloud provider must be able to allocate dedicated or shared network resources to the tenants.
By using the Juniper Networks FWaaS plug-in, a service provider can allocate dedicated or shared resources (physical or virtual) to the tenants.
For example, the service provider can create and configure firewalls as follows, according to the requirements of the tenant:
• Economy - Allocates a shared SRX or vSRX for a group of tenants
• Silver - Allocates a dedicated SRX or vSRX per tenant with default specifications
• Gold - Allocates a high-end SRX or vSRX
Figure 8: Firewall Allocation

(Figure: Hypervisors 1 through 6 connect to two QFX5100 ToR switches (Switch 1 and Switch 2), which uplink to a QFX5100 aggregation switch. The aggregation switch connects to an SRX/vSRX pool: SRX1 as the default, SRX2 for Tenant 1, SRX3 for Tenants 2 and 3, and vSRX4 for Tenant 4.)
As seen in Figure 8 on page 41, you can dedicate an SRX/vSRX to a tenant or a group of tenants. This procedure is transparent to the tenant and is done by using the supplied CLI tools along with the Juniper OpenStack Neutron plug-in.
To allocate a dedicated SRX cluster to a tenant:
1. Allocate the master SRX to the tenant:
admin@controller:~$ jnpr_allocate_device add -t tenant-project-id -d hostname-or-device-ip-address

admin@controller:~$ jnpr_allocate_device add -t e0d6c7d2e25943c1b4460a4f471c033f -d 10.20.30.40
+----------------------------------+-------------+
| Tenant ID                        | Device IP   |
+----------------------------------+-------------+
| e0d6c7d2e25943c1b4460a4f471c033f | 10.20.30.40 |
+----------------------------------+-------------+
2. Define the VRRP cluster and assign a name:

admin@controller:~$ jnpr_vrrp_pool add -d hostname-or-device-ip-address -p pool-name-to-be-assigned

admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.40 -p tenant1_pool1
+-------------+----------------+
| Device ID   | VRRP POOL NAME |
+-------------+----------------+
| 10.20.30.40 | tenant1_pool1  |
+-------------+----------------+

admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.41 -p tenant1_pool1
+-------------+----------------+
| Device ID   | VRRP POOL NAME |
+-------------+----------------+
| 10.20.30.41 | tenant1_pool1  |
+-------------+----------------+

admin@controller:~$ jnpr_vrrp_pool list
+-------------+----------------+
| Device ID   | VRRP POOL NAME |
+-------------+----------------+
| 10.20.30.40 | tenant1_pool1  |
| 10.20.30.41 | tenant1_pool1  |
+-------------+----------------+
Configuring High Availability and Virtual Router Redundancy Protocol
Supported Platforms EX Series, QFabric System, QFX Series, SRX Series, vSRX
The FWaaS plug-in supports high availability with the Virtual Router Redundancy Protocol
(HA with VRRP). To use this functionality, you must create a VRRP pool and assign
one of the devices in the pool to a tenant by using the jnpr_allocate_device command.
To create a VRRP pool and to assign a device to a tenant:
1. Create a VRRP pool:
admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.40 -p tenant1_pool1
+---------------+----------------+
| Device ID     | VRRP POOL NAME |
+---------------+----------------+
| 10.20.30.40   | tenant1_pool1  |
+---------------+----------------+
admin@controller:~$ jnpr_vrrp_pool add -d 10.20.30.41 -p tenant1_pool1
+---------------+----------------+
| Device ID     | VRRP POOL NAME |
+---------------+----------------+
| 10.20.30.41   | tenant1_pool1  |
+---------------+----------------+
admin@controller:~$ jnpr_vrrp_pool list
+---------------+----------------+
| Device ID     | VRRP POOL NAME |
+---------------+----------------+
| 10.20.30.40   | tenant1_pool1  |
| 10.20.30.41   | tenant1_pool1  |
+---------------+----------------+
2. Allocate the master SRX of the VRRP pool to the tenant:
admin@controller:~$ jnpr_allocate_device add -t tenant-project-id -d hostname-or-device-ip-address
admin@controller:~$ jnpr_allocate_device add -t e0d6c7d2e25943c1b4460a4f471c033f -d 10.20.30.40
+----------------------------------+---------------+
| Tenant ID                        | Device IP     |
+----------------------------------+---------------+
| e0d6c7d2e25943c1b4460a4f471c033f | 10.20.30.40   |
+----------------------------------+---------------+
Configuring the VPN-as-a-Service (VPNaaS) Plug-in
Supported Platforms EX Series, QFabric System, QFX Series
Juniper Networks VPN-as-a-Service (VPNaaS) builds on top of the Juniper Networks L3
and FWaaS plug-ins. Use the VPNaaS plug-in to configure site-to-site VPN on SRX and
vSRX devices.
Supported Devices
SRX and vSRX series devices
Plug-in Configuration
Before you proceed, ensure that the following pre-requisites are met:
• Topology is set up:
• Devices are added to jnpr_devices table.
• Compute NIC – physical network alias mapping is added to the jnpr_nic_mapping table.

• Compute–switch connectivity is captured in the jnpr_switchport_mapping table, which
is needed for L2 VLAN orchestration.

• L2 plug-in is set up. This is optional if you are using a third-party ML2 plug-in.

• L3 plug-in is set up to use the SRX/vSRX as the router.

• FWaaS plug-in is set up (optional).
To configure the OpenStack Neutron to use Juniper Networks VPNaaS service plug-in:
1. Update the Neutron configuration file /etc/neutron/neutron.conf and append
service_plugins with the following:

service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin,neutron_fwaas.services.juniper_fwaas.dmi.fwaas.JuniperFwaaS,neutron_vpnaas.services.vpn.juniper.vpnaas.JuniperVPNaaS
NOTE: The following steps are optional if the FWaaS plug-in is already configured.
2. Add a firewall to the topology:
admin@controller:~$ jnpr_device add -d device-name -c firewall -u root-user -p root-password
3. Define the downlink trunk port on the SRX device on which the RVIs are created by
the plug-in.
Update the plug-in database with the port on the SRX device to which the Aggregation
Switch is connected:
admin@controller:~$ jnpr_device_port add -d srx-device-name-or-switch-ip-address -p port-on-the-srx -t port-type
For example:
admin@controller:~$ jnpr_device_port add -d srx1 –p ge-0/0/1 –t Downlink
4. Allocate the firewall to a tenant or as a default for all tenants:
admin@controller:~$ jnpr_allocate_device add -t project-id -d srx-or-vsrx-ip-address
To allocate the firewall as a default to all the tenants who do not have a firewall
allocated to them:
admin@controller:~$ jnpr_allocate_device add -t default -d srx-or-vsrx-ip-address
5. After completing the FWaaS plug-in configuration, restart the following:
• Neutron-Server
• Ubuntu - service neutron-server restart
• CentOS - systemctl restart neutron-server
• Apache (restarts Horizon)
• Ubuntu - service apache2 restart
• CentOS - systemctl restart httpd
6. From the Horizon GUI, create a VPN IPsec site connection along with its associated
IKE, IPsec, and VPNService components. You can view the corresponding IKE, IPsec, and
IKE gateway configurations that are activated on the SRX/vSRX.
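Equivalently, the same objects can be created from the Neutron CLI. The following transcript is a hedged sketch using the standard neutron VPNaaS commands of this OpenStack era; the policy names, router, subnet, peer address, and pre-shared key are placeholder examples, not values from this guide:

```shell
neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create router1 subnet1 --name vpnservice1
neutron ipsec-site-connection-create --name conn1 \
  --vpnservice-id vpnservice1 --ikepolicy-id ikepolicy1 \
  --ipsecpolicy-id ipsecpolicy1 --peer-address 172.24.4.226 \
  --peer-id 172.24.4.226 --peer-cidr 10.2.0.0/24 --psk secret
```

After the connection is created, the corresponding IKE and IPsec configuration should appear on the SRX/vSRX, as described above.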
Configuring OpenStack Extension for Physical Topology
Supported Platforms EX Series, QFabric System, QFX Series
The Physical Topology extension provides a dashboard for the OpenStack administrator to
manage physical network connections, such as Host NIC to switch port mappings.
The Physical Topology API exposes these physical network connections.

The Juniper Neutron plug-in currently manages topology information by using the
jnpr_switchport_mapping command.
admin@controller:~$ jnpr_switchport_mapping list
+---------------+------+---------------+-----------+-----------+
| Host          | Nic  | Switch        | Port      | Aggregate |
+---------------+------+---------------+-----------+-----------+
| 10.107.65.101 | eth1 | 10.107.52.136 | ge/0/0/10 |           |
| 10.107.65.102 | eth1 | 10.107.52.136 | ge/0/0/20 |           |
| 10.107.65.105 | eth2 | 10.107.52.137 | ge/0/0/20 |           |
| 10.107.65.106 | eth1 | 10.107.52.137 | ge/0/0/30 |           |
| 10.108.10.100 | eth1 | 10.107.52.137 | ge/0/0/5  |           |
+---------------+------+---------------+-----------+-----------+
Plug-in Configuration
To configure the physical topology extension:
1. On the OpenStack controller, update the ML2 configuration file
/etc/neutron/plugins/ml2/ml2_conf.ini as follows:
NOTE: If the file is updated to enable the EVPN driver for VXLAN, skip this step.
[ml2]
type_drivers = vlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,juniper
2. Update the Neutron configuration file /etc/neutron/neutron.conf as follows:
core_plugin = neutron.plugins.ml2.plugin_pt_ext.Ml2PluginPtExt
3. Enable the physical topology dashboard as follows:

• CentOS:

cp /usr/lib/python2.7/site-packages/juniper_horizon_physical-topology/openstack_dashboard/enabled/_2102_admin_topology_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

• Ubuntu:

cp /usr/lib/python2.7/dist-packages/juniper_horizon_physical-topology/openstack_dashboard/enabled/_2102_admin_topology_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/
4. Restart Neutron and Horizon services:
• Neutron-Server
• Ubuntu - service neutron-server restart
• CentOS - systemctl restart neutron-server
• Apache (restarts Horizon)
• Ubuntu - service apache2 restart
• CentOS - systemctl restart httpd
5. After the installation is complete, the physical topology dashboard is available at
Admin > System > Physical Networks.
From the Physical Networks dashboard, you can perform the following tasks:
Figure 9 on page 46 shows how to view physical topologies.
Figure 9: View Physical topologies
Figure 10 on page 46 shows how to add a physical topology.
Figure 10: Add Physical Topology
Figure 11 on page 47 shows how to add a physical topology from LLDP.
Figure 11: Add Topology from LLDP
Figure 12 on page 47 shows how to edit a physical topology.
Figure 12: Edit Physical Topology
Figure 13 on page 48 shows how to delete a physical topology.
Figure 13: Delete a Physical Topology
Figure 14 on page 48 shows how to delete multiple physical topologies.
Figure 14: Delete Multiple Physical Topologies
Configuring the Static Route Extension
Supported Platforms EX Series, QFabric System, QFX Series
The static route extension provides a Horizon dashboard and a Neutron REST API for
configuring static routes with preferences. The Horizon dashboard is available at the
following location: Project > Network > Routers > Static Routes.
Supported Devices
EX, QFX, SRX, and vSRX series devices
Configuring the Static Route Extension
To configure the static route extension:
1. Update the Neutron configuration file /etc/neutron/neutron.conf as follows:
service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin
2. Restart Neutron and Horizon services.
Neutron-Server
a. Ubuntu - service neutron-server restart
b. CentOS - systemctl restart neutron-server
Apache (restarts Horizon)
a. Ubuntu - service apache2 restart
b. CentOS - systemctl restart httpd
3. From the OpenStack Dashboard, you can perform the following tasks as an OpenStack
tenant:
Figure 15 on page 49 shows how to view static routes.
Figure 15: View Static routes
Figure 16 on page 50 shows how to add a static route.
Figure 16: Add a Static route
Figure 17 on page 50 shows how to delete a static route.
Figure 17: Delete Static Routes
Configuring EVPN Multi-homing
Supported Platforms EX Series, QFabric System, QFX Series
To achieve network redundancy and load balancing, OpenStack nodes can be connected
to more than one leaf switch in a VXLAN-EVPN capable network. The Juniper ML2
VXLAN-EVPN driver plug-in provisions the multi-homed peer devices with an identical
Ethernet Segment Identification (ESI) number and identical VLAN and VNI encapsulation
details. This enables EVPN multi-homing functionality on the devices.

OpenStack nodes can utilize all of the multi-homing enabled uplinks to send traffic.
This provides load balancing and redundancy in case of failures. The uplink interface
must be an aggregated interface.
Figure 18: EVPN Multi-homing

[Figure: Three compute nodes, each with a br-eth1 bridge, are multi-homed to a pair of leaf switches, which connect upstream to spine switches.]
If more than one device connection is added for a particular OpenStack node by using
the jnpr_switchport_mapping command, the node is considered multi-homing enabled.
The interface must be an Aggregated Ethernet interface. This triggers ESI ID generation
and configures the ESI on the aggregated switch interfaces.
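For reference, the ESI provisioning described above corresponds roughly to Junos configuration of this shape. This is an illustrative sketch only; the ESI value and interface ae2 are placeholder examples, not values generated by the plug-in:

```
set interfaces ae2 esi 00:11:22:33:44:55:66:77:88:99
set interfaces ae2 esi all-active
set interfaces ae2 aggregated-ether-options lacp active
```

Both multi-homed peer leaves must carry the same ESI on the member interfaces for EVPN to treat them as one Ethernet segment.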
Supported Devices and JUNOS Version
EX, QFX, SRX, and vSRX series devices
Configuration of ML2 VXLAN EVPN plug-in is a prerequisite for this plug-in. For more
details on configuring the ML2 VXLAN EVPN plug-in, refer to Configuring ML2 VXLAN
Plug-in with EVPN.
Additionally, the jnpr_switchport_mapping command creates the required physical
topology name that is derived from the ESI ID and the bridge mapping details based on
the topology inputs.
To update the configuration details:
1. Update the configuration details in the Open vSwitch Agent configuration file of the
OpenStack nodes to which the switch is connected:
admin@controller:~$ jnpr_switchport_mapping add -H 10.206.44.116 -n eth3 -s 10.206.44.50 -p ae2
+---------------+------+--------------+------+-----------+
| Host          | Nic  | Switch       | Port | Aggregate |
+---------------+------+--------------+------+-----------+
| 10.206.44.116 | eth3 | 10.206.44.50 | ae2  |           |
+---------------+------+--------------+------+-----------+
=============================================================
If you are using evpn driver, please update ovs l2 agent
config file /etc/neutron/plugins/ml2/openvswitch_agent.ini on
node 10.206.44.116 with bridge_mappings = 00000000010206044116:br-eth1
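The bridge-mapping alias printed above (00000000010206044116) appears to encode the node's IP address, 10.206.44.116. A minimal sketch of one plausible derivation follows; this is an assumption for illustration, not the plug-in's actual algorithm:

```python
def topology_name(host_ip: str, width: int = 20) -> str:
    """Derive a fixed-width alias from a dotted-quad IP address.

    Assumption: each octet is zero-padded to three digits, and the
    result is left-padded with zeros to `width` characters.
    """
    packed = "".join(f"{int(octet):03d}" for octet in host_ip.split("."))
    return packed.zfill(width)

print(topology_name("10.206.44.116"))  # 00000000010206044116
```

The derived name is then used as the physical network alias in bridge_mappings and in the network_vlan_ranges entry shown in the next step.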
2. Update the physical_topology name with VLAN ranges in the Neutron ML2 plug-in
configuration file ml2_conf.ini as follows:
[ml2]
type_drivers = flat,vlan,vxlan,vxlan_evpn
tenant_network_types = vxlan_evpn
mechanism_drivers = jnpr_vxlan_evpn,openvswitch
#extension_drivers = port_security

[ml2_type_vlan]
network_vlan_ranges = 10.206.44.50:10:1000,00000000010206044116:10:1000,10.206.44.56:10:1000

[ml2_type_vxlan]
vni_ranges = 10:5000
3. To verify that the EVPN multi-homing plug-in is functioning properly, restart the
Neutron server, then create networks and VMs associated with those networks. The
multi-homing enabled VMs remain reachable when a redundant link is disabled.
Configuring EVPN BMS
Supported Platforms EX Series, QFabric System, QFX Series
The Juniper VXLAN-EVPN plug-in supports integration of a Bare Metal Server (BMS) into
the VXLAN-EVPN network. The BMS communicates with the OpenStack VMs when it is
connected through an OpenStack network.

This provides accessibility to traditional physical devices from the OpenStack VMs.
Based on the plug-in configuration, the BMS can be integrated into a VLAN or
VXLAN-EVPN network.
Figure 19: EVPN BMS Support

[Figure: A BMS and two OpenStack compute nodes connect to leaf switches, which are joined across an L3 network.]
Supported Devices and JUNOS Version
Juniper EVPN BMS functionality is supported on the QFX5100 leaf device running
version 14.1X53-D40.
Plugin Configuration
To integrate BMS into the OpenStack network:
NOTE: Configuration of the ML2 VXLAN EVPN plug-in is a prerequisite for this plug-in. For more information on how to configure ML2 VXLAN EVPN, refer to Configuring ML2 VXLAN Plug-in with EVPN.
NOTE: The BMS must be connected to the Leaf device.
1. From the OpenStack Horizon GUI, select the OpenStack network to be linked with
the BMS and provide the switch interface details. This provisions the necessary VLAN
configuration on the device interface to establish the connection between the OpenStack
network and the BMS.

By providing the BMS MAC address, an IP address is allocated by the OpenStack DHCP
server. By enabling a DHCP client on the BMS interface, the allocated IP address can be
obtained from OpenStack.
Figure 20: EVPN BMS Plugin Configuration
2. Type the following commands to enable the BMS GUI on the Horizon dashboard:

• On Ubuntu:

sudo cp /usr/lib/python2.7/dist-packages/juniper_bms/openstack_dashboard/enabled/_50_juniper.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

• On CentOS:

sudo cp /usr/lib/python2.7/site-packages/juniper_bms/openstack_dashboard/enabled/_50_juniper.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/
3. Restart the Horizon dashboard.
On the Horizon dashboard, a new Juniper > L2 Gateway tab is displayed, as shown in
Figure 21 on page 54, when the user logs in as admin.
Figure 21: EVPN BMS Plugin Configuration
Users can create an L2 Gateway by specifying the switch IP address (the switch must
be present in the topology), the interface connecting the BMS, the OpenStack network
to be connected to, and optionally the BMS MAC address (required for DHCP to allocate
an IP address).
4. Verify whether the BMS is able to communicate with the OpenStack guest VMs.
Configuring Network Address Translation
Supported Platforms EX Series, QFabric System, QFX Series
Network Address Translation (NAT) is a process for modifying the source or destination
addresses in the headers of an IP packet while the packet is in transit. In general, the
sender and receiver applications are not aware that the IP packets are manipulated.
In OpenStack, the external network provides Internet access for instances. By default,
this network only allows Internet access from instances using Source Network Address
Translation (SNAT). In SNAT, the NAT router modifies the IP address of the sender in IP
packets. SNAT is commonly used to enable hosts with private addresses to communicate
with servers on the public Internet.
OpenStack enables Internet access to an instance by using floating IPs. Floating IPs are
not allocated to instances by default. Cloud users obtain floating IPs from the pool
configured by the OpenStack administrator and then attach them to their instances.
Floating IP is implemented by Destination Network Address Translation (DNAT). In DNAT,
the NAT router modifies the IP address of the destination in IP headers.
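As a minimal illustration of the two translations, the following sketch uses a toy packet model with the example addresses from the figures in this section; it is not the plug-in's implementation:

```python
# Minimal illustration of SNAT vs. DNAT on an IP header.
# The packet model and addresses are examples only.

def snat(packet, router_ip):
    # SNAT: rewrite the sender's (source) address, e.g. a VM's
    # private address, to the router's external address.
    return {**packet, "src": router_ip}

def dnat(packet, fixed_ip):
    # DNAT: rewrite the destination (a floating IP) to the
    # instance's fixed private address.
    return {**packet, "dst": fixed_ip}

outbound = {"src": "10.0.0.3", "dst": "1.1.1.100"}
print(snat(outbound, "3.3.3.1"))    # {'src': '3.3.3.1', 'dst': '1.1.1.100'}

inbound = {"src": "1.1.1.100", "dst": "3.3.3.8"}
print(dnat(inbound, "10.10.10.3"))  # {'src': '1.1.1.100', 'dst': '10.10.10.3'}
```

SNAT touches only the source field of outbound packets; DNAT touches only the destination field of inbound packets.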
SNAT – Internet Access from VM
Figure 22 on page 55 describes an OpenStack instance accessing the Internet.

Figure 22: OpenStack Accessing the Internet

[Figure: A VM sends an IP packet with source IP 10.0.0.3 and destination IP 1.1.1.100. The Juniper router, configured by the Juniper Neutron plug-in, rewrites the source IP to 3.3.3.1 before forwarding the packet to the external network.]
To enable Internet access from VMs, the network to which the VM is connected must be
attached to a router. This router must have its gateway set to the external network
created by the administrator. The Juniper Networks Neutron plug-in configures SNAT on
the router to modify the source address of all the VMs to the address of the interface
that is connected to the external network.
DNAT – Internet Access to VM
Figure 23 on page 56 describes accessing an OpenStack instance from the Internet.

Figure 23: Accessing OpenStack from the Internet

[Figure: An IP packet arrives from the Internet with source IP 1.1.1.100 and destination IP 3.3.3.8, the VM's floating IP. The Juniper router, configured by the Juniper Neutron plug-in, rewrites the destination IP to the VM's fixed IP, 10.10.10.3.]
To enable Internet access to VMs, a floating IP is allocated to the VM from a pool of IP
addresses allocated to the tenant by the administrator. The floating IP should be a
routable IP. The Juniper Networks Neutron plug-in configures the external-facing
interface of the router to proxy ARP for this IP address and to perform DNAT for the
floating IP of the VM.
Plugin Configuration
To configure the plug-in:
1. Update the Neutron configuration file /etc/neutron/neutron.conf as follows:
service_plugins = neutron.services.juniper_l3_router.dmi.l3.JuniperL3Plugin
2. Restart the Neutron service as follows:
service neutron-server restart
Configuring an External Network Access
To configure an external network access:
1. Create a network.
2. Launch an instance on the network created.
3. Create a router.
4. Add the new network to the router.
5. Set gateway of the router to the external network.
6. Ping from VM to any IP in the external network.
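The steps above can be sketched with the standard Neutron/Nova CLI of this OpenStack era. The names net1, subnet1, router1, and the image/flavor are placeholder examples; ext-net is assumed to be the administrator-created external network:

```shell
neutron net-create net1
neutron subnet-create net1 10.10.10.0/24 --name subnet1
nova boot --image cirros --flavor m1.tiny --nic net-name=net1 vm1
neutron router-create router1
neutron router-interface-add router1 subnet1
neutron router-gateway-set router1 ext-net
# From vm1, ping an address on the external network to verify access.
```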
Configuring Access to VM from an External Network
To configure access to a VM from an external network:
1. Associate a floating IP from the floating IP pool of the external network to the
instance that is created.

2. Configure security rules in the security group to allow traffic from the external
network. For example, ICMP ALLOW ALL for both ingress and egress traffic.

3. You can then access the VM through the floating IP.
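A hedged CLI sketch of the same workflow using the standard Neutron client; the IDs shown are placeholders, and the rules mirror the ICMP ALLOW ALL example above:

```shell
neutron floatingip-create ext-net
neutron floatingip-associate FLOATINGIP_ID VM_PORT_ID
neutron security-group-rule-create --direction ingress --protocol icmp default
neutron security-group-rule-create --direction egress --protocol icmp default
# Ping the floating IP from outside the cloud to reach the VM.
```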
CHAPTER 3
References
• Troubleshooting on page 59
• Referenced Documents on page 59
• Glossary on page 59
Troubleshooting
Supported Platforms EX Series, QFabric System, QFX Series
The Juniper plug-in logs are available on Network node under the folder
/var/log/neutron/juniper*.
Referenced Documents
Supported Platforms EX Series, QFabric System, QFX Series
Table 5: References

Bookmark        Title / Author / Link                             Revision
Release note    Juniper Networks Plug-In for OpenStack Neutron
Glossary
Supported Platforms EX Series, QFabric System, QFX Series
Table 6: Terminologies and Acronyms

Term      Definition
ML2       Neutron Modular Layer 2 plug-in
SNAT      Source Network Address Translation
DNAT      Destination Network Address Translation
VPNaaS    VPN-as-a-Service
ESI       Ethernet Segment Identification number
BMS       Bare Metal Server
TOR       Top of Rack