
IMC Orchestrator Solution Underlay Network Configuration Guide

The information in this document is subject to change without notice. © Copyright 2021 Hewlett Packard Enterprise Development LP


Contents

Introduction
Configuration principles
Manual underlay network deployment
  Network configuration
    Configure Spine 1
    Configure Spine 2
    Configure Leaf 1
    Configure Border 1
    Configure hardware resource settings
      Configure the 12900E switch with type X modules
      Configure the 5944 switch
      Configure the 5945 switch
    Configure the underlay routing protocol
      Configure OSPF
      Configure IS-IS
      Configure BGP
    Configure the NTP server
Automatic underlay network deployment
  Network configuration
    Configure the management switch
    Configure spine and leaf devices
  Basic IMC Orchestrator configuration
    Install and configure the unified platform
    Add the DHCP server
    Set the TFTP service
  Configure automation environment
    Create IP address pools
    Create a fabric and configure automatic incorporation
    Configure device list
  Device automated deployment
    Automated deployment of spine and leaf
    Border automated deployment
    Verify the configuration after automation deployment
  Device IRF and aggregation
  Replace a device
Manual configuration of the DRNI network with a VXLAN tunnel IPL
  Network diagram and interface description
  Configure EVPN
    Configure spine devices
    Configure Leaf 1-1
    Configure Leaf 1-2
    Configure Leaf 2-1
    Configure Leaf 2-2
    Verify the configuration
  Configure DRNI
    Configure Leaf 1-1
    Configure Leaf 1-2
    Configure Leaf 2-1
    Configure Leaf 2-2
    Verify the configuration
  Restrictions and guidelines
Manual deployment of DRNI network with an Ethernet aggregate link IPL
  Network diagram and interface description
  Configure EVPN
  Configure DRNI
    Configure Leaf 1-1
    Configure Leaf 1-2
    Configure Leaf 2-1
    Configure Leaf 2-2
    Verify the configuration
  Restrictions and guidelines
Automated deployment of DRNI network with an Ethernet aggregate link IPL
  Network diagram and interface description
  Address pool and VLAN pool planning
  Basic configuration examples
    Install vDHCP components
    Create a vDHCP server
    Enable TFTP and Syslog services
    Create a fabric
    Create VLAN pools
    Create VXLAN pools
    Create address pools
    Create border device groups
    Create a VDS
    Create a device control protocol template
  Access device automation deployment, supporting DRNI scenario
    Create an automation template
    Create a device list
    Device startup without a startup configuration file
    Automated deployment succeeded
    Automatic creation of DR business
    Device configuration issuance
  Border device automation deployment, supporting DRNI scenario
    Create an automation template
    Create a device list
    Device startup without a startup configuration file
    Automated deployment succeeded
    Add to a border device group
    Automatic creation of DR business
    Device configuration issuance
Restrictions
Guidelines


Introduction

The underlay network between spine and leaf devices can be configured manually or deployed automatically by the controller. Automatic underlay deployment supports automatic DRNI system setup with an Ethernet aggregate link IPL; DRNI system setup with a VXLAN tunnel IPL requires manual configuration.


Configuration principles

Automatic underlay deployment supports a two-tier spine-leaf network. Configure the underlay automation template on IMC Orchestrator and restart the devices without a startup configuration; the devices then acquire the template for automatic deployment. Underlay automatic deployment covers the management address, the VTEP address, OSPF/IS-IS/BGP for underlay routing, BGP for overlay routing, the interfaces that interconnect spine and leaf devices, and IRF and aggregate interface configuration. Automatic underlay deployment supports automatic DRNI system setup with an Ethernet aggregate link IPL; DRNI system setup with a VXLAN tunnel IPL requires manual configuration.


Manual underlay network deployment

Network configuration

Figure 1 Network diagram

The two spine devices are not stacked. The two leaf switches are stacked, and the two border switches are stacked. Leaf 1 connects to Spine 1 and Spine 2 through its uplinks, and Border 1 connects to Spine 1 and Spine 2 through its uplinks. Leaf 1 connects to servers through an aggregate interface on its downlinks. Border 1 connects to external network devices. The spine, leaf, and border switches connect to the management switch through their management interfaces.

The management addresses and VTEP addresses of the switches are as follows:

Device              Management address   VTEP address
Spine 1             192.168.11.181/24    192.168.101.250/32
Spine 2             192.168.11.182/24    192.168.101.253/32
Leaf 1              192.168.11.183/24    192.168.101.248/32
Border 1            192.168.11.188/24    192.168.101.252/32
Management switch   192.168.11.240/24    N/A

Configure Spine 1

#

sysname spine1


#

ip vpn-instance mgmt

#

router id 192.168.11.181

# Configure the underlay routing protocol, using OSPF as an example.
ospf 1

non-stop-routing

area 0.0.0.0

#

lldp global enable

#

stp global enable

#

l2vpn enable

#

interface LoopBack0

ip address 192.168.101.250 255.255.255.255

#

Use Layer 3 interfaces to interconnect spine and leaf switches. If VLAN interfaces are used, execute the vxlan ip-forwarding vxlan tagged command in system view or configure the interfaces connected to other switches to tag packets with the PVID on 12900E switches.

#

interface FortyGigE4/0/1

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE4/0/3

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE4/0/4

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE4/0/5

port link-mode route


ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface M-GigabitEthernet0/0/0

ip binding vpn-instance mgmt

ip address 192.168.11.181 255.255.255.0

#

bgp 100

non-stop-routing

group evpn internal

peer evpn connect-interface LoopBack0

peer 192.168.101.248 group evpn

peer 192.168.101.252 group evpn

#

address-family l2vpn evpn

undo policy vpn-target

peer evpn enable

peer evpn reflect-client

#

line vty 0 63

authentication-mode scheme

user-role network-admin

user-role network-operator

#

ip route-static vpn-instance mgmt 0.0.0.0 0 192.168.11.240

#

vtep enable

#

ssh server enable

#

local-user admin class manage

password simple admin

service-type telnet http https ssh

authorization-attribute user-role network-admin

authorization-attribute user-role network-operator

#

netconf soap http enable

netconf soap https enable

netconf ssh server enable

restful https enable

#

ovsdb server ptcp port 6632

ovsdb server enable

#


Configure Spine 2

#

sysname spine2

#

ip vpn-instance mgmt

#

router id 192.168.11.182

# Configure the underlay routing protocol, using OSPF as an example.
ospf 1

non-stop-routing

area 0.0.0.0

#

lldp global enable

#

stp global enable

#

l2vpn enable

#

interface LoopBack0

ip address 192.168.101.253 255.255.255.255

#

Use Layer 3 interfaces to interconnect spine and leaf switches. If VLAN interfaces are used, execute the vxlan ip-forwarding vxlan tagged command in system view or configure the interfaces connected to other switches to tag packets with the PVID on 12900E switches.

#

interface FortyGigE4/0/6

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE4/0/7

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE4/0/8

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning


lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE4/0/9

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface M-GigabitEthernet0/0/0

ip binding vpn-instance mgmt

ip address 192.168.11.182 255.255.255.0

#

bgp 100

non-stop-routing

group evpn internal

peer evpn connect-interface LoopBack0

peer 192.168.101.248 group evpn

peer 192.168.101.252 group evpn

#

address-family l2vpn evpn

undo policy vpn-target

peer evpn enable

peer evpn reflect-client

#

line vty 0 63

authentication-mode scheme

user-role network-admin

user-role network-operator

#

ip route-static vpn-instance mgmt 0.0.0.0 0 192.168.11.240

#

vtep enable

#

ssh server enable

#

local-user admin class manage

password simple admin

service-type telnet http https ssh

authorization-attribute user-role network-admin

authorization-attribute user-role network-operator

#

netconf soap http enable

netconf soap https enable

netconf ssh server enable

restful https enable

#


ovsdb server ptcp port 6632

ovsdb server enable

#

Configure Leaf 1

#

sysname leaf1

#

ip vpn-instance mgmt

#

irf mac-address persistent always

irf auto-update enable

undo irf link-delay

irf member 1 priority 2

irf member 5 priority 1

irf mac-address d461-fe31-c12e

#

vxlan tunnel mac-learning disable

#

router id 192.168.11.183

# Configure the underlay routing protocol, using OSPF as an example.
ospf 1

non-stop-routing

area 0.0.0.0

#

lldp compliance cdp

lldp global enable

# Configure hardware resource settings.
hardware-resource switch-mode 4

hardware-resource routing-mode ipv6-128

hardware-resource vxlan l3gw40k

#

vlan 2 to 4094

#

irf-port 1/1

port group interface HundredGigE1/0/53

#

irf-port 5/2

port group interface HundredGigE5/0/53

#

stp global enable

#

l2vpn enable

vxlan tunnel arp-learning disable

#

interface Bridge-Aggregation2048

port link-type trunk


undo port trunk permit vlan 1

link-aggregation mode dynamic

vtep access port

#

interface LoopBack0

ip address 192.168.101.248 255.255.255.255

#

interface Vlan-interface4093

mad bfd enable

mad ip address 192.168.2.1 255.255.255.0 member 1

mad ip address 192.168.2.2 255.255.255.0 member 5

#

Use Layer 3 interfaces to interconnect spine and leaf switches. If VLAN interfaces are used, execute the vxlan ip-forwarding vxlan tagged command in system view or configure the interfaces connected to other switches to tag packets with the PVID on 12900E switches.

#

interface HundredGigE1/0/50

port link-mode route

flow-interval 5

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface HundredGigE1/0/54

port link-mode route

flow-interval 5

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface HundredGigE5/0/50

port link-mode route

flow-interval 5

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface HundredGigE5/0/54

port link-mode route


flow-interval 5

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface M-GigabitEthernet0/0/0

ip binding vpn-instance mgmt

ip address 192.168.11.183 255.255.255.0

undo lldp enable

dhcp client identifier hex 01d461fe31c12e

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port access vlan 4093

undo stp enable

#

interface Ten-GigabitEthernet1/0/47

port link-mode bridge

port link-type trunk

undo port trunk permit vlan 1

port link-aggregation group 2048

#

interface Ten-GigabitEthernet5/0/2

port link-mode bridge

port access vlan 4093

undo stp enable

#

interface Ten-GigabitEthernet5/0/47

port link-mode bridge

port link-type trunk

undo port trunk permit vlan 1

port link-aggregation group 2048

#

bgp 100

non-stop-routing

group evpn internal

peer evpn connect-interface LoopBack0

peer 192.168.101.250 group evpn

peer 192.168.101.253 group evpn

#

address-family l2vpn evpn

undo policy vpn-target

peer evpn enable

#

line vty 0 63


authentication-mode scheme

user-role network-admin

user-role network-operator

idle-timeout 0 0

#

ip route-static vpn-instance mgmt 0.0.0.0 0 192.168.11.240

#

vtep enable

#

ssh server enable

#

ntp-service enable

ntp-service unicast-server 192.168.11.87 vpn-instance mgmt

#

local-user admin class manage

password simple admin

service-type ftp

service-type telnet http https ssh

authorization-attribute user-role network-admin

authorization-attribute user-role network-operator

#

netconf soap http enable

netconf soap https enable

netconf ssh server enable

restful https enable

#

ovsdb server ptcp port 6632

ovsdb server enable

#

Configure Border 1

#

sysname border1

#

ip vpn-instance mgmt

#

telnet server enable

#

irf mac-address persistent always

irf auto-update enable

undo irf link-delay

irf member 1 priority 2

irf member 5 priority 1

irf mac-address 48bd-3d38-ac16

#

vxlan tunnel mac-learning disable

#


router id 192.168.11.188

# Configure the underlay routing protocol, using OSPF as an example.
ospf 1

non-stop-routing

area 0.0.0.0

#

lldp compliance cdp

lldp global enable

#

system-working-mode standard

# Configure hardware resource settings.
hardware-resource switch-mode 4

hardware-resource routing-mode ipv6-128

hardware-resource vxlan border40k

#

vlan 4093

#

irf-port 1/1

port group interface FortyGigE1/0/54

#

irf-port 5/2

port group interface FortyGigE5/0/54

#

stp global enable

#

l2vpn enable

vxlan tunnel arp-learning disable

#

interface LoopBack0

ip address 192.168.101.252 255.255.255.255

#

interface Vlan-interface4093

mad bfd enable

mad ip address 192.168.2.1 255.255.255.0 member 1

mad ip address 192.168.2.2 255.255.255.0 member 5

#

Use Layer 3 interfaces to interconnect spine and leaf switches. If VLAN interfaces are used, execute the vxlan ip-forwarding vxlan tagged command in system view or configure the interfaces connected to other switches to tag packets with the PVID on 12900E switches.

#

interface FortyGigE1/0/49

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0


#

interface FortyGigE1/0/50

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE5/0/49

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface FortyGigE5/0/50

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface M-GigabitEthernet0/0/0

ip binding vpn-instance mgmt

ip address 192.168.11.188 255.255.255.0

undo lldp enable

dhcp client identifier hex 0148bd3d38ac16

#

interface Ten-GigabitEthernet1/0/29

port link-mode bridge

port access vlan 4093

undo stp enable

lldp compliance admin-status cdp txrx

#

#

interface Ten-GigabitEthernet5/0/29

port link-mode bridge

port access vlan 4093

undo stp enable

lldp compliance admin-status cdp txrx

#

bgp 100


non-stop-routing

group evpn internal

peer evpn connect-interface LoopBack0

peer 192.168.101.250 group evpn

peer 192.168.101.253 group evpn

#

address-family l2vpn evpn

undo policy vpn-target

peer evpn enable

#

line vty 0 63

authentication-mode scheme

user-role network-admin

user-role network-operator

idle-timeout 0 0

#

ip route-static vpn-instance mgmt 0.0.0.0 0 192.168.11.240

#

vtep enable

#

ssh server enable

#

local-user admin class manage

password simple admin

service-type telnet http https ssh

authorization-attribute user-role network-admin

authorization-attribute user-role network-operator

#

#

netconf soap http enable

netconf soap https enable

netconf ssh server enable

restful https enable

#

ovsdb server ptcp port 6632

ovsdb server enable

#

Configure hardware resource settings

Configure the 12900E switch with type X modules

View the current hardware resource settings of the 12900E switch with type X modules:

<addc-net3-leaf1>dis hardware-resource

Tcam resource(tcam), all supported modes:

NORMAL The normal mode

MAC The mac mode

ROUTING The routing mode


ARP The arp mode

DUAL-STACK The dual-stack mode

MIX The mix bridging routing mode

ENHANCE-IPV6 The enhance ipv6 mode

ENHANCE-ARPND The enhance arpnd mode

ACL The acl mode

NAT The nat mode

-----------------------------------------------

Default Current Next

NORMAL NORMAL NORMAL

Routing-mode resource(routing-mode), all supported modes:

ipv6-64 IPv6-64 supported

ipv6-128 IPv6-128 supported

-----------------------------------------------

Default Current Next

ipv6-64 ipv6-64 ipv6-64

VXLAN resource(vxlan), all supported modes:

L2GW The Layer 2 gateway mode

L3GW The Layer 3 gateway mode

-----------------------------------------------

Default Current Next

L3GW L3GW L3GW

Set the hardware resource settings of the 12900E switch with type X modules as follows when it serves as a leaf, border, or spine device:

hardware-resource tcam normal

hardware-resource routing-mode ipv6-128

hardware-resource vxlan l3gw

#

Configure the 5944 switch

View the current hardware resource settings of the 5944 switch:

<5944>dis hardware-resource

Switch-mode resource(switch-mode), all supported modes:

NORMAL MAC table:96K, ARP and ND tables:80K, routing table:160K

MAC MAC table:288K, ARP and ND tables:16K, routing table:32K

ROUTING MAC table:32K, ARP and ND tables:16K, routing table:324K

ARP MAC table:32K, ARP and ND tables:272K, routing table:32K

DUAL-STACK MAC table:32K, ARP and ND tables:16K, routing:v4-87K,v6-86K

EM MAC table:32K, ARP and ND tables:16K, routing table:32K

-----------------------------------------------

Default Current Next

NORMAL ROUTING ROUTING

Routing-mode resource(routing-mode), all supported modes:

ipv6-64 ipv6-64 supported

ipv6-128 ipv6-128 supported


-----------------------------------------------

Default Current Next

ipv6-64 ipv6-64 ipv6-64

Vxlan resource(vxlan), all supported modes:

l2gw L2 gateway--underlay/overlay 64K/0K

l3gw L3 gateway--underlay/overlay 24K/40K

-----------------------------------------------

Default Current Next

l2gw l3gw l3gw

Set the hardware resource settings of the 5944 switch as follows when it serves as a leaf, border, or spine device:

hardware-resource switch-mode ROUTING

hardware-resource routing-mode ipv6-128

hardware-resource vxlan l3gw

Configure the 5945 switch

View the current hardware resource settings of the 5945 switch:

<addc-net3-leaf2-1>dis hardware-resource

Switch-mode resource(switch-mode), all supported modes:

NORMAL MAC table:96K, ARP and ND tables:80K, routing table:160K

MAC MAC table:288K, ARP and ND tables:16K, routing table:32K

ROUTING MAC table:32K, ARP and ND tables:16K, routing table:324K

ARP MAC table:32K, ARP and ND tables:272K, routing table:32K

DUAL-STACK MAC table:32K, ARP and ND tables:16K, routing:v4-87K,v6-86K

EM MAC table:32K, ARP and ND tables:16K, routing table:32K

-----------------------------------------------

Default Current Next

NORMAL ROUTING ROUTING

Routing-mode resource(routing-mode), all supported modes:

ipv6-64 ipv6-64 supported

ipv6-128 ipv6-128 supported

-----------------------------------------------

Default Current Next

ipv6-64 ipv6-128 ipv6-128

Vxlan resource(vxlan), all supported modes:

l2gw L2 gateway--underlay/overlay 64K/0K

l3gw L3 gateway--underlay/overlay 24K/40K

-----------------------------------------------

Default Current Next

l2gw l3gw l3gw

Set the hardware resource settings of the 5945 switch as follows when it serves as a leaf, border, or spine device:

hardware-resource switch-mode ROUTING

hardware-resource routing-mode ipv6-128

hardware-resource vxlan l3gw


Configure the underlay routing protocol

Configure OSPF

# Configure OSPF in system view.
ospf 1

non-stop-routing

area 0.0.0.0

# Enable OSPF on all interfaces connected to spine and leaf devices.
interface HundredGigE1/0/50

port link-mode route

flow-interval 5

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

Configure IS-IS

The configuration of IS-IS on spine, leaf, and other devices is the same.

# Configure IS-IS in system view.
isis 1

non-stop-routing

is-level level-2

is-name user1

network-entity 86.4713.0021.0100.0400.1002.00

#

address-family ipv4 unicast

maximum load-balancing 4

# Configure IS-IS for interfaces that interconnect spine and leaf devices, and for interfaces that interconnect spine and border devices.
interface Ten-GigabitEthernet1/0/1

port link-mode route

ip address unnumbered interface LoopBack0

isis enable 1

isis circuit-level level-2

isis circuit-type p2p

isis authentication-mode md5 simple 123456

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#


Configure BGP

AS 5001 is configured on all spine devices and AS 5002 is configured on all leaf devices.

Spine 1 uses 5.1.1.2/32 for Loopback 0 and 4.1.1.2/32 for Loopback 1. The interfaces connecting Spine 1 to the leaf devices borrow the Loopback 1 address (IP unnumbered).

Leaf 1 uses 5.1.1.3/32 for Loopback 0 and 4.1.1.3/32 for Loopback 1. The interfaces connecting Leaf 1 to the spine device borrow the Loopback 1 address.

Leaf 2 uses 5.1.1.4/32 for Loopback 0 and 4.1.1.4/32 for Loopback 1. The interfaces connecting Leaf 2 to the spine device borrow the Loopback 1 address.

The devices in the underlay network are reachable through BGP. EBGP peer relationships are established between Leaf 1 and Spine 1, and between Leaf 2 and Spine 1.

Configure BGP on the spine devices

#

interface LoopBack0

ip address 5.1.1.2 255.255.255.255

#

interface LoopBack1

ip address 4.1.1.2 255.255.255.255

# Configure BGP in system view.
bgp 5001 instance underlay

non-stop-routing

group leaf external

peer leaf as-number 5002

peer leaf ebgp-max-hop 2

peer 4.1.1.3 group leaf

peer 4.1.1.4 group leaf

#

address-family ipv4 unicast

balance 4

network 5.1.1.2 255.255.255.255

peer leaf enable

# Configure interfaces connected to leaf or border devices.


interface Ten-GigabitEthernet1/0/1

port link-mode route

ip address unnumbered interface LoopBack1

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack1

arp route-direct advertise

#

Configure BGP on the leaf devices

Only the BGP configuration of Leaf 1 is described here, because that of Leaf 2 is similar.

interface LoopBack0

ip address 5.1.1.3 255.255.255.255

#

interface LoopBack1

ip address 4.1.1.3 255.255.255.255

# Configure BGP in system view.
bgp 5002 instance underlay

non-stop-routing

group spine external

peer spine as-number 5001

peer spine ebgp-max-hop 2

peer 4.1.1.2 group spine

#

address-family ipv4 unicast

balance 4

network 5.1.1.3 255.255.255.255

peer spine enable

peer spine allow-as-loop 2

# Configure the interfaces connected to spine devices.
interface Ten-GigabitEthernet1/0/1

port link-mode route

ip address unnumbered interface LoopBack1

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack1

undo mac-address static source-check enable

arp route-direct advertise

#

Configure the NTP server

The NTP server for the switch must be the same as the NTP server used by the controller. If the controller uses a built-in server, the NTP server address on the switch is the internal virtual IP address of the controller cluster.


For example, when the internal virtual IP address of the controller cluster is 177.7.7.165, the NTP configuration on the switch is as follows:

ntp-service enable

ntp-service unicast-server 177.7.7.165 vpn-instance mgmt


Automatic underlay network deployment

Network configuration

During automatic underlay network configuration, a device first requests an IP address for its management interface from the DHCP server. In this solution, vDHCP acts as the DHCP server and is deployed on the unified platform as a container. The management network can be a Layer 2 or Layer 3 network, depending on whether the management IP address acquired by the device is in the same subnet as the controller and DHCP server. A Layer 2 network supports only a single-fabric configuration, because the management subnet can be bound to only one fabric and different fabrics use management IP addresses from different subnets. In a Layer 3 network, the management IP address of a device is in a subnet different from that of the controller and vDHCP, so a DHCP relay is required to forward DHCP messages between the devices and the DHCP server; the DHCP relay is configured manually. A Layer 3 network supports multi-fabric configuration, with each fabric using its own management subnet. For deployment, a Layer 3 network is recommended because it simplifies future fabric expansion.

Figure 2 Layer 2 network

NOTE: IMC Orchestrator acts as the SDN controller. vDHCP is the HPE DHCP server that allocates management IP addresses to underlay devices. LSW is the management switch that connects the management interfaces of the SDN controller, vDHCP, and the spine and leaf devices. In a Layer 2 network, the LSW only provides Layer 2 pass-through service. The LSW is not incorporated by the controller and can only be configured manually; it forwards messages among the controller, vDHCP, and the switches. The connections between spine and leaf devices are set up on service interfaces.


Figure 3 Layer 3 network

NOTE: IMC Orchestrator acts as the SDN controller. vDHCP is the HPE DHCP server that provides management IP addresses to underlay devices. LSW is the management switch that connects the management interfaces of the SDN controller, vDHCP, and the spine and leaf devices. In a Layer 3 network, the LSW acts as the DHCP relay and provides Layer 3 forwarding. The LSW is not incorporated by the controller and can only be configured manually; it forwards messages among the controller, vDHCP, and the switches. The connections between spine and leaf devices are set up on service interfaces.

Configure the management switch

For a single fabric, the management network can be a Layer 2 network, in which case the southbound devices are in the same subnet as vDHCP and IMC Orchestrator. For a single fabric, the management network can also be a Layer 3 network, in which case the southbound devices do not have to be in the same subnet as vDHCP and IMC Orchestrator; fabric VLANs must be allocated on the management switch, and the gateway and DHCP relay commands must be configured manually. See the Layer 3 network description below.


In a multi-fabric deployment, the management networks of different fabrics are in different subnets, and the management network can only be a Layer 3 network.

In a multi-fabric deployment, the management switch interfaces connected to devices of different fabrics must be assigned to different VLANs. That is, VLAN separation is required between the interfaces connected to Fabric 1 devices and those connected to Fabric 2 devices.

On the corresponding VLAN interface, configure the gateway address of the fabric's physical management network, and execute the dhcp select relay and dhcp relay server-address ip-address commands. The ip-address argument represents the IP address of the DHCP server. An example is as follows:

#

interface Ten-GigabitEthernet1/0/33  # Connects to fabric 1 device interfaces on the management switch
port link-mode bridge

port access vlan 2

#


interface Ten-GigabitEthernet1/0/26  # Connects to fabric 2 device interfaces on the management switch
port link-mode bridge

port access vlan 3

#

interface Vlan-interface1  # VLAN of the controller
ip address 170.10.1.1 255.255.0.0

ip address 192.1.1.1 255.255.255.0 sub

#

interface Vlan-interface2 # VLAN for fabric 1

ip address 172.10.1.1 255.255.255.0  # Fabric 1 physical management network gateway address
dhcp select relay

dhcp relay server-address 170.10.0.7  # The relay address is the address of the DHCP server
dhcp relay server-address 170.10.0.6

#

interface Vlan-interface3 # VLAN for fabric 2

ip address 172.11.1.1 255.255.255.0  # Fabric 2 physical management network gateway address
dhcp select relay

dhcp relay server-address 170.10.0.7  # The relay address is the address of the DHCP server
dhcp relay server-address 170.10.0.6

#

Configure spine and leaf devices

Complete the physical cabling based on the network diagram: connect the console interfaces and verify that console login works on each device, connect the management interfaces to the corresponding VLAN interfaces of the management switch, connect the service interfaces between spine and leaf devices, and connect the IRF links of IRF devices. The underlay configuration of the devices is issued by the controller during automatic deployment, so no manual configuration is needed.

Basic IMC Orchestrator configuration

Install and configure the unified platform

See the Deployment Guide for Unified Platform to install the unified platform and the vDHCP Server suite.

Log in to the unified platform with the default username/password of admin/Pwd@12345.

Add the DHCP server

View the primary/secondary DHCP server addresses

1. Log in to the system, and then navigate to the Deployment > Application page.


2. Click the Details icon in the Actions column for the vDHCP service.

Add the DHCP server

1. Navigate to the Automation > Fabrics > Basic Services > DHCP page.

2. Click Add.

3. Enter a name for the DHCP server, select High Available, configure the IPv4 addresses and NETCONF information, and then click Apply. The IPv4 addresses are the IP addresses of the primary and backup DHCP servers, and both the NETCONF username and password are admin.

After the configuration, verify that the vDHCP server is up and the synchronization has succeeded. If the synchronization fails, perform manual synchronization.


Set the TFTP service

Navigate to the System > System Maintenance > Controller Info page of IMC Orchestrator, and check the team IP.

Navigate to the Automation > Fabrics > Parameter Settings page of IMC Orchestrator, select Enable for TFTP And Syslog Service, and enter the team IP obtained above in the Please specify a service IP address field.

Configure automation environment

Create IP address pools

Navigate to the Automation > Resource Pools > IP Address Pools page of IMC Orchestrator. Click Add to create the IP address pools of various types.

When OSPF or IS-IS is selected as the underlay protocol, address pools for the physical management network and the physical VTEP network must be configured.

When BGP is selected as the underlay protocol, address pools for the physical management network, the physical VTEP network, and the underlay interconnection network must be configured.

If border gateway devices exist, address pools for the security internal network, tenant bearer network, and virtual management network must also be configured.

Plans for address pools:

Table 1 Layer 2 network (applicable for single fabric)

Type                                         Network address
(Fabric1) physical management network        170.10.1.100/24
(Fabric1) VTEP network                       3.1.1.0/24
(Fabric1) underlay interconnection network   4.1.1.0/24

Table 2 Layer 3 network (applicable for single and multiple fabrics)

Type                                         Network address
(Fabric1) physical management network        172.10.1.0/24
(Fabric1) VTEP network                       3.1.1.0/24
(Fabric1) underlay interconnection network   4.1.1.0/24
(Fabric2) physical management network        172.11.1.0/24
(Fabric2) VTEP network                       5.1.1.0/24
(Fabric2) underlay interconnection network   6.1.1.0/24

Add a physical management network address pool

Add a physical VTEP network

Add an underlay interconnection network

This type of address pool is required only when BGP is selected as the underlay protocol. It is not needed when OSPF or IS-IS is selected.


WARNING! When the address segment is added for a specific IP address pool, the pool will allocate IP addresses from the segment. When several IP address segments are added, no overlap is allowed. On the IP address pool allocation page, the specific IP address pool can be selected as the default one. Please note that each type of address pool can only have one default pool.

Create a fabric and configure automatic incorporation

1. Click Add on the Automation > Fabrics > Fabrics page of IMC Orchestrator to create a fabric.

2. Navigate to Fabric Configuration, enter the name and AS number, and then click Next.

3. Navigate to Device Management, click Auto Manage, and then go to the Basic Settings tab.

4. In the Basic Settings tab, underlay protocol options include OSPF, IS-IS and BGP. In this example, select OSPF. Select No for VXLAN Service Preprovisioning for Access Devices, and click Apply.

5. In the IP Pool Settings tab, select the previously created DHCP server, management IP pool and VTEP IP pool (and underlay interconnection IP pool when BGP is selected for underlay protocol) in sequence, create an IP pool for auto deployment (must be in the same segment as the management network, and will be temporarily used for auto deployment), and then click Apply.


6. In the Device Configuration Templates tab, select the default Create Template. If there are existing configuration templates of other fabrics, you can apply the existing template to the current configuration page from Select Existing Template.

• Template Name is user-defined (required).

• Description provides detailed information about the template (optional).

• NTP Server is the IP address of the network time server (optional).

• Template Role can be spine, aggregation, or leaf, based on the actual network. Aggregation is currently available only with OSPF.

• BGP RR MAC is the bridge MAC of the BGP RR devices planned by the user. For multiple RRs, enter the bridge MAC addresses of all RRs, separated by commas (,). If the RRs are deployed in stacks, enter the bridge MAC addresses of all member devices, separated by commas (,).

• BGP AS Number is automatically passed down from the fabric upon its creation.

• BGP MD5 Password is optional.

NOTE: The bridge MAC of a box-type device is on the shell of the box, while the bridge MAC of a frame-type device is on the frame. For a logged-in device, you can view the MAC by executing the display lacp system-id command.

7. According to the selected template role, open the corresponding template to complete the configuration parameters.


Select a new or the default control protocol template from the Control Protocol Template pull-down menu.

The default value of Border IRF Stacking is No. If Border IRF Stacking is enabled, Border MAC is required. Border MAC is the planned bridge MAC of the border devices; multiple MACs are allowed, separated by commas (,). When the border and spine roles are combined and the border devices are stacked, the MAC is required.

The default value of IRF Stacking is Yes. If the role is deployed in an IRF stack, the physical IRF links must be connected first. If the role is not deployed in an IRF stack, select No. Refer to Stacking and Aggregation for details about IRF.

The default value of Enable Whitelist is Yes. When the value is Yes, the automation deployment is only available for devices on the Device List. If the value is changed to No, all devices can be automatically deployed based on their configured roles.

You can select the Software Version and Software Patch from the pull-down menus or upload them (optional). During automated deployment, the device software is upgraded to the version configured in the template. Currently, only device software in .ipe format and software patches in .bin format are supported.

OSPF Process ID is the OSPF process number (optional). If it is not entered, the default value of 1 is used.

In the Command Segments dialog box, you can enter custom commands (optional). Add the command lines as appropriate and end the segment with a hash sign (#). The current version does not automatically execute the ssh server enable and telnet server enable commands; if you need them, add them in a command segment.
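For example, a minimal command segment that enables those two services (using only commands already shown in this guide) could look like this; the closing hash sign (#) ends the segment:

ssh server enable
telnet server enable
#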

The leaf template has an additional Enable Aggregation option, whose default value is No. If the value is changed to Yes, you can set the Aggregation Mode to Dynamic or Static. Refer to Stacking and Aggregation for details about aggregation.


After configuring all templates, click Apply in the top right corner to return to the Device Management step, and then click Next. Navigate to Group Management in Step 3, skip creation, and click OK to complete the fabric creation. If BGP was selected as the underlay protocol, the ECMP configuration, Spine BGP AS Number, and Leaf BGP AS Number are also required in the Spine and Leaf templates; fill them in based on the actual plans. When virtual routers are configured, the default maximum number of equal-cost routes is used for each VRF, and the ECMP field value in the Spine and Leaf templates must be an integer greater than or equal to 4. On the Automation > Resource Pools > Auto Deployment > Automation Template page of IMC Orchestrator, you can view and modify the created automation deployment templates and the details of their role templates.

Configure device list

During the configuration of the above automation template, if Enable Whitelist is set to Yes, only devices in the device list can be deployed automatically for controller incorporation.

1. Navigate to the Automation > Resource Pools > Auto Deployment page of IMC Orchestrator, click Device List to enter the device list page, and then click Add.

• Serial Number is the SN of the device to be automatically deployed.


• Device Role is selected from spine or leaf based on planning.

• Device Tag is optional. Its content is issued to the device as the customized system name and takes precedence over the automatically generated "Role-IP" system name.

• Network Entity Name is only used with the IS-IS protocol and must match the protocol parameter.

• Loopback0 Interface IP acts as the VTEP IP address of the device.

• If the underlay protocol is OSPF or BGP, Network Entity Name and Loopback0 Interface IP are optional. If the underlay protocol is IS-IS, all the information in the device list must be filled in, and the Network Entity Name and Loopback0 Interface IP in the device whitelist are required.

• Loopback5 Interface IP acts as the DCI VTEP IP address of the device used to connect to the data center. It is optional, depending on the network plan.

• When the device is in a border role, its Gateway Capacity is required.

2. Finally, click Apply to complete creating the device list.

WARNING! The SN on the device shell is the device serial number. For logged-in devices, you can view the SN by executing the display device manuinfo command. When adding frame switches to the device list, use the SN of the frame. If you do not have the frame SN, add multiple device list entries using the serial numbers of the main processing units. If a device with Gateway Capacity enabled is deployed automatically, its default role is border and the role configuration in the template does not take effect. A device that is removed from the device list and then automatically deployed again keeps its original role; if you need to change the default role, change it manually.

NOTE: The device system name is determined as follows: the highest priority is given to the Device Tag in the device whitelist, followed by a successful border MAC match in the configuration template, and finally the ROLE-X name in the template. When the device is deployed, its system name is generated according to these rules. The configuration information in the list takes effect regardless of whether Enable Whitelist is enabled. When Enable Whitelist in the template is set to Yes, only devices on the list can be deployed, using the information in the list. When Enable Whitelist is set to No, all devices can be deployed; a device described in detail in the list is deployed with that information, and other devices are deployed with default settings.

Device automated deployment

Automated deployment of spine and leaf

After the physical links are verified, all devices are restarted without a startup configuration file.

After the device restarts, it enters the automated configuration process. The management interface automatically obtains the management IP address through DHCP, acquires the device tag file, reads the device role, and downloads the configuration template for the corresponding role for automated configuration.

The console output of a successful automated configuration is as follows:

Automatic configuration attempt: 2.

Interface used: M-GigabitEthernet0/0/0.

Enable DHCP client on M-GigabitEthernet0/0/0.


Set DHCP client identifier: 00e0fc026820

Obtained an IP address for M-GigabitEthernet0/0/0: 172.10.1.3.  # IP address acquired by the management interface

Obtained configuration file name ospf.template and TFTP server name 170.10.0.1.

Resolved the TFTP server name to 170.10.0.1.

INFO: Get device tag file device_tag.csv success.  # Acquiring the device tag file

INFO: Read role leaf from tag file.

Successfully downloaded file ospf_leaf.template.  # Download the configuration template of the corresponding role

Executing the configuration file. Please wait...

Automatic configuration successfully completed.  # Automated configuration complete

Line aux0 is available.

After automated configuration, the device is automatically incorporated by IMC Orchestrator into the corresponding fabric, and its status changes to active, indicating that automated deployment and incorporation have succeeded.

Border automated deployment Border devices are automatically deployed in leaf role. Gateway Capacity is selected during the configuration of border device list. When the border device is managed, its incorporation status is inactive. Only when it joins the border gateway group, can the status turn active. 1. Navigate to the Automation > Resource Pools > Devices > Border Device Group page of

the controller, and click Add to create a device group. Device Group Name is user-defined; Position is selected from Border Gateway, Fabric Interconnection, and DC Interconnection as appropriate; Operation Mode is the service gateway group. The IP Address Pool List and VLAN Pool List can be selected from the default address/VLAN pools or existing pools.

2. Click Add Device under Device Group Member, add border devices to the gateway group, and then click Apply.

3. Return to the switching device list of the fabric to view the device list. Refresh the list, and the status of the border devices will turn active.

4. When all device statuses turn active, it means that devices are incorporated by the controller.


Verify the configuration after automation deployment
Normal template file issuance and automated configuration

Restart the network element without a startup configuration file, and the management interface of the element will acquire the management IP address and TFTP server address automatically. The device downloads the device label file and the configuration template file corresponding to its role (template name_device_role.template) from the controller, and then loads the configuration on the device automatically. The template file can be found on the device flash, and the current role displayed by the display vcf-fabric role command must match the expected settings.
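The following is a hedged verification sketch based on the commands and files named above (output formats vary by device and software version):

dir flash:/

display vcf-fabric role

The dir output should include the downloaded *.template file, and the displayed role should match the planned role.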

Route reachability among devices
Automated deployment can automatically issue routing configurations to achieve route reachability for each device within the fabric. Execute the display bgp peer l2vpn evpn command to query neighbors; the status must be Established.

Examples for underlay automated configuration
In a Layer 3 network, the IP pool plans are the same as those above. OSPF is selected as the underlay protocol, and the leaf and border devices are deployed in stacks. The network is shown below:


After underlay automation deployment, the device configuration is exported as follows:

spine.txt

leaf.txt

border.txt

The above configurations are for reference only; they vary with different devices and different network environments.

Device IRF and aggregation
Underlay automation deployment supports automatic IRF and automatic aggregation of devices. Whether to stack or aggregate is at the user's discretion and is set during the configuration of automation templates.


NOTE: Referring to the process of configuring automation templates, when IRF Stacking is set to Yes in the spine/leaf template, the device automation process configures the device in IRF mode. Successful automated IRF requires:
• The two IRF spine or leaf switches must have identical models and software versions.
• The stacking devices must have a physical connection for IRF, and LLDP must be able to detect the neighboring interface of the connecting link normally.
• For two IRF devices, the one with the larger bridge MAC frame number is automatically deployed as 1, and the one with the smaller bridge MAC frame number as 5.
Referring to the process of configuring automation templates, when the leaf template (currently only aggregation on the downlink interface of the leaf device is supported) is configured with Enable Aggregation set to Yes and the corresponding Aggregation Mode (Dynamic/Static) is configured, the device automation process configures the downlink interface of the leaf device as an aggregate interface. Successful automated aggregation requires:
• The LLDP service must be installed and enabled on the NIC of the leaf downlink.
• The two NICs on the server must connect to the same leaf device or to the two frames of the IRF leaf devices.
• Executing the dis lldp neighbor-information list command on the leaf devices must show that the switches correctly detect the downlink server interface and that the system name of the other end is displayed correctly instead of -- (see the check sketch after this note).
When there are multiple physical links between leaf and spine, they are not automatically aggregated; instead, ECMP is used.
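A hedged example of the LLDP check named above, run on the leaf device before automated aggregation (the output format varies by software version):

display lldp neighbor-information list

The entry for the downlink server interface should show the peer system name instead of --.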

The following is an example of configuring the IRF and aggregation of leaf devices. 1. For the leaf device configuration template, set Enable Aggregation and IRF Stacking to Yes.

Then, save the settings.

2. Underlying physical configuration: Ensure that the models and version numbers of the two devices to be stacked are identical; execute the display device command to verify (see the check sketch after this step). Plan the physical IRF interfaces and connect the IRF links. Note that some devices have constraints on physical IRF interfaces; it is advised to use a high-speed interface that can be manually configured and verified on the device. If an unsupported interface is used as an IRF interface, the automated IRF will fail.
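A hedged sketch of the pre-checks for this step (display device is the command named above; display version is an additional command commonly used to confirm the software version and is an assumption here):

display device

display version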

3. [Stacking] Restart the two leaf devices without a startup configuration file so that they are deployed automatically. When a device is automatically incorporated by the controller, one of the devices receives the LLDP message sent by the device with the same role and configures the interface as an IRF interface. The device then restarts automatically to join the IRF fabric, taking over the management IP address and startup configuration of the master device and completing the automated IRF of the two devices.


4. Execute the display irf link command on the device to query IRF status. The network element incorporation status can be viewed on the controller: the master device is shown active while the backup device is shown as inactive.

5. [Aggregation] When the leaf device receives the LLDP message sent by the downlink server, and the leaf device can correctly detect the system name of the downlink server, the aggregation will start automatically. Aggregation mode is determined by the configuration in the template (dynamic/static).

6. Execute the display link-aggregation verbose command on the device to query aggregation status. The automated aggregation is finished.
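A hedged verification sketch combining the commands named in steps 4 and 6 (run on the leaf device; output formats vary by software version):

display irf link

display link-aggregation verbose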

Replace a device
1. Navigate to the Automation > Configuration Deployment > Device Maintenance > Device page of IMC Orchestrator.
2. Click Replace in the Actions column to enter the device replacement guide.
3. On the faulty device setting page, specify the device serial number and configuration file. If the faulty device is an IRF fabric, you can replace the faulty member device or the entire fabric. Then click Next.
4. On the replacement device setting page, specify the device name or serial number, and then click Next. If the replacement device is managed by the controller, you can select it by its name. If the replacement device is not managed by the controller, select it by its serial number.
5. On the faulty device checklist page, the system checks for non-compliant settings. A result is also displayed for check items that do not support this check. Click Replace to start the replacement. During the replacement, you can click Cancel to cancel the task; after 25% of the task has been completed, the task cannot be canceled.


WARNING! When a replacement device is specified by element name, the replacement device must be incorporated by the controller and in the Active state; otherwise, it cannot be selected in the replacement device list. You can view the device status on the Automation > Resource Pools > Devices > Physical Devices page. When a replacement device is specified by serial number, the replacement device does not need to be incorporated by the controller in advance. After the replacement configuration is completed on the controller, the new replacement device must be restarted without a startup configuration, and the automated deployment then starts the replacement process. If Enable Whitelist is on when the template is configured, the replacement device must be in the device list before it can be deployed automatically. The replacement device role must be the same as that of the faulty device for the replacement to succeed. The replacement device and the faulty device must have identical models and software versions, and the Device Type must also be the same. Fault replacement of spine devices is currently not available. The faulty device must be powered off throughout the replacement process. For overall replacement of IRF devices, the IRF interfaces of the replacement devices must be the same as those of the faulty devices. For the replacement of a single device in an IRF environment, the other device must remain in normal operation; use overall replacement when both devices are faulty. Do not add the replacement backup device in IRF mode to the network environment of the current controller.


Manual configuration of the DRNI network with a VXLAN tunnel IPL

In this network, the IPL function is carried by a dedicated VXLAN tunnel, and a dedicated physical link is not necessary.

For larger sites (more than 4K VXLANs), this network mode is required.

For sites with only network overlay, this network mode is optional.

For sites with host overlay, as a best practice, do not use this network mode.

Network diagram and interface description
Figure 4 DRNI network with a VXLAN tunnel IPL

Device    Interface    IP                              Remarks
Spine     Loopback0    125.0.0.1/32                    VTEP IP
          XGE1/1/0/1   Loopback 0 interface address    Connect Leaf 1-1
          XGE1/1/0/2   Loopback 0 interface address    Connect Leaf 1-2
          XGE1/2/0/1   Loopback 0 interface address    Connect Leaf 2-1
          XGE1/2/0/2   Loopback 0 interface address    Connect Leaf 2-2
Leaf 1-1  Loopback0    68.0.1.1/32                     VTEP IP
          Loopback1    68.0.1.5/32                     DRNI group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
          XGE1/0/3     N/A                             AC interface, single-homed access
Leaf 1-2  Loopback0    68.0.1.2/32                     VTEP IP
          Loopback1    68.0.1.5/32                     DRNI group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
Leaf 2-1  Loopback0    68.0.2.1/32                     VTEP IP
          Loopback1    68.0.2.5/32                     DRNI group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
Leaf 2-2  Loopback0    68.0.2.2/32                     VTEP IP
          Loopback1    68.0.2.5/32                     DRNI group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
          XGE1/0/3     N/A                             AC interface, single-homed access

Configure EVPN
The underlay network configuration of the leaf devices in the DRNI network can be deployed automatically; this section describes only the manual configuration. For the configuration of incorporating leaf network elements into the controller, which is not covered here, see IMC Orchestrator Solution Service Configuration Guide for Scenarios Without Cloud.

Configure spine devices
1. Configure IP addresses for all the interfaces, and enable OSPF.

#

interface LoopBack0

ip address 125.0.0.1 255.255.255.255

ospf 1 area 0.0.0.0

#

ospf 1 router-id 125.0.0.1

non-stop-routing

area 0.0.0.0

#

interface Ten-GigabitEthernet1/1/0/1

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0


#

interface Ten-GigabitEthernet1/1/0/2

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface Ten-GigabitEthernet1/2/0/1

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

interface Ten-GigabitEthernet1/2/0/2

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

2. Enable L2VPN, configure BGP to advertise EVPN routes, and configure the device as a route reflector.
#

l2vpn enable

#

bgp 100

non-stop-routing

router-id 125.0.0.1

group evpn

peer 68.0.1.1 group evpn

peer 68.0.1.2 group evpn

peer 68.0.2.1 group evpn

peer 68.0.2.2 group evpn

#

address-family l2vpn evpn

undo policy vpn-target

peer evpn enable

peer evpn reflect-client

#

Configure Leaf 1-1
1. Configure IP addresses for all the interfaces, and enable OSPF.


OSPF is also required on the Loopback 0 interface: even though the uplink interfaces that use the Loopback 0 interface IP address are configured with OSPF, this ensures the Loopback 0 interface IP address can still join OSPF when all the uplink interfaces are down.
interface LoopBack0

ip address 68.0.1.1 255.255.255.255

ospf 1 area 0.0.0.0

#

The virtual IP address of DRNI is configured at Loopback 2 interface, since the underlay network configuration of IMC Orchestrator will use Loopback 1 interface for other purposes.
interface LoopBack2

ip address 68.0.1.5 255.255.255.255

ospf 1 area 0.0.0.0

#

ospf 1 router-id 68.0.1.1

non-stop-routing

area 0.0.0.0

The stub-router configuration is correlated to convergence rate and its value is determined based on the site environment.
stub-router include-stub on-startup 60

#

interface Ten-GigabitEthernet1/0/1

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

2. Enable L2VPN, configure BGP to advertise EVPN routes. #

l2vpn enable

#

bgp 100

non-stop-routing

router-id 68.0.1.1

peer 125.0.0.1 as-number 100

peer 125.0.0.1 connect-interface LoopBack0

#

address-family l2vpn evpn

peer 125.0.0.1 enable

#

#

3. Execute the following command on the 12900E switches acting as leaf devices.
mac-address mac-learning ingress

Configure Leaf 1-2
1. Configure IP addresses for all the interfaces, and enable OSPF.


OSPF is also required on the Loopback 0 interface: even though the uplink interfaces that use the Loopback 0 interface IP address are configured with OSPF, this ensures the Loopback 0 interface IP address can still join OSPF when all the uplink interfaces are down.
#

interface LoopBack0

ip address 68.0.1.2 255.255.255.255

ospf 1 area 0.0.0.0

#

The virtual IP address of DRNI is configured at Loopback 2 interface, since the underlay network configuration of IMC Orchestrator will use Loopback 1 interface for other purposes.
interface LoopBack2

ip address 68.0.1.5 255.255.255.255

ospf 1 area 0.0.0.0

#

ospf 1 router-id 68.0.1.2

non-stop-routing

area 0.0.0.0

stub-router include-stub on-startup 60

#

interface Ten-GigabitEthernet1/0/1

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

2. Enable L2VPN, configure BGP to advertise EVPN routes. #

l2vpn enable

#

bgp 100

non-stop-routing

router-id 68.0.1.2

peer 125.0.0.1 as-number 100

peer 125.0.0.1 connect-interface LoopBack0

#

address-family l2vpn evpn

peer 125.0.0.1 enable

#

#

3. Execute the following command on the 12900E switches acting as leaf devices.
mac-address mac-learning ingress

Configure Leaf 2-1
1. Configure IP addresses for all the interfaces, and enable OSPF.


OSPF is also required on the Loopback 0 interface: even though the uplink interfaces that use the Loopback 0 interface IP address are configured with OSPF, this ensures the Loopback 0 interface IP address can still join OSPF when all the uplink interfaces are down.
#

interface LoopBack0

ip address 68.0.2.1 255.255.255.255

ospf 1 area 0.0.0.0

#

The virtual IP address of DRNI is configured at Loopback 2 interface, since the underlay network configuration of IMC Orchestrator will use Loopback 1 interface for other purposes.
interface LoopBack2

ip address 68.0.2.5 255.255.255.255

ospf 1 area 0.0.0.0

#

ospf 1 router-id 68.0.2.1

non-stop-routing

area 0.0.0.0

stub-router include-stub on-startup 60

#

interface Ten-GigabitEthernet1/0/1

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

2. Enable L2VPN, configure BGP to advertise EVPN routes. #

l2vpn enable

#

bgp 100

non-stop-routing

router-id 68.0.2.1

peer 125.0.0.1 as-number 100

peer 125.0.0.1 connect-interface LoopBack0

#

address-family l2vpn evpn

peer 125.0.0.1 enable

#

#

3. Execute the following command on the 12900E switches acting as leaf devices.
mac-address mac-learning ingress

Configure Leaf 2-2
1. Configure IP addresses for all the interfaces, and enable OSPF.


OSPF is also required on the Loopback 0 interface: even though the uplink interfaces that use the Loopback 0 interface IP address are configured with OSPF, this ensures the Loopback 0 interface IP address can still join OSPF when all the uplink interfaces are down.
#

interface LoopBack0

ip address 68.0.2.2 255.255.255.255

ospf 1 area 0.0.0.0

#

The virtual IP address of DRNI is configured at Loopback 2 interface, since the underlay network configuration of IMC Orchestrator will use Loopback 1 interface for other purposes.
interface LoopBack2

ip address 68.0.2.5 255.255.255.255

ospf 1 area 0.0.0.0

#

ospf 1 router-id 68.0.2.2

non-stop-routing

area 0.0.0.0

stub-router include-stub on-startup 60

#

interface Ten-GigabitEthernet1/0/1

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

#

2. Enable L2VPN, configure BGP to advertise EVPN routes. #

l2vpn enable

#

bgp 100

non-stop-routing

router-id 68.0.2.2

peer 125.0.0.1 as-number 100

peer 125.0.0.1 connect-interface LoopBack0

#

address-family l2vpn evpn

peer 125.0.0.1 enable

#

#

3. Execute the following command on the 12900E switches acting as leaf devices.
mac-address mac-learning ingress

Verify the configuration
Verify that a spine device has established BGP EVPN peer relationships with four leaf devices.
[spine_182.152.3.121]display bgp peer l2vpn evpn

...


*68.0.1.1 100 82 98 0 5 01:10:57 Established

*68.0.1.2 100 13372 16079 0 6 0219h01m Established

*68.0.2.1 100 15420 15228 0 3 0219h06m Established

*68.0.2.2 100 15631 17578 0 3 0219h07m Established

...

Configure DRNI
DRNI requires manual configuration.

Configure Leaf 1-1
1. Configure a reserved VXLAN. Each group of DRNI devices requires the same reserved VXLAN configuration. As a best practice, configure all leaf devices with the same reserved VXLAN configuration.
#

reserved vxlan 60000

#

2. Configure EVPN DRNI and configure the virtual VTEP address as Loopback 2 interface IP address. #

evpn drni group 68.0.1.5

evpn global-mac 0000-0000-0001

#

3. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. #

drni system-mac 0001-0001-0001

drni system-number 1

drni system-priority 10

#

4. Create a VXLAN tunnel as the IPL between the Loopback 0 interfaces of the DR member devices. The tunnel interface number must be within the reserved tunnel range of the controller. Otherwise, the controller will report configuration inconsistency.


For example, when the reserved tunnel number is 256, the VXLAN tunnel interface number must be lower than 256. This step uses 100 as an example. #

interface Tunnel100 mode vxlan

port drni intra-portal-port 1

source 68.0.1.1

destination 68.0.1.2

tunnel tos 100

#

5. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port link-aggregation group 1

6. Configure the single-homed interface as an AC interface. #

interface Ten-GigabitEthernet1/0/3

port link-mode bridge

vtep access port

#

• For interconnection between the single-homed server's non-DRNI leaf and the DRNI devices, the following command is required on the non-DRNI device (see the sketch after this note): vxlan default-decapsulation source interface LoopBack0.
• When the DRNI device configurations are not symmetric, single-homed access cannot interconnect. Workaround: configure an identical VSI on each DRNI device, as long as the VSI does not conflict with the service VSIs. For example, if DRNI-A is configured with VSI100 and DRNI-B with VSI200, they cannot connect; you can manually configure an identical VSI300 on both DRNI-A and DRNI-B, after which VSI100 and VSI200 are interconnected.
vsi drni-300
vxlan 300
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
• The MAC address aging timer on the single-homed DRNI device is 26 minutes.
mac-address timer aging 1560
BUM messages received by the DRNI device from the physical tunnel are discarded. The MAC address aging timer on the single-homed DRNI device is 26 minutes, which is one minute longer than the ARP aging time.
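A minimal sketch of the workaround command named above, applied on the non-DRNI leaf that terminates the single-homed server (a hedged restatement of the command given in the note; see the device documentation for the exact configuration view):

vxlan default-decapsulation source interface LoopBack0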

7. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable


#

8. Configure monitor link: the interface connected to the RR is the uplink interface, and the AC physical interfaces are the downlink interfaces.
#

monitor-link group 1

The following timer correlates with the convergence rate; its value is determined by the number and scale of MAC and ARP table entries in the environment.
downlink up-delay 230

port Ten-GigabitEthernet1/0/1 uplink

port Ten-GigabitEthernet1/0/2 downlink

port Ten-GigabitEthernet1/0/3 downlink

#

9. Configure Loopback 0 interface, uplink interface, and VSI interface as DRNI MAD exclude interface. #

drni mad exclude interface LoopBack0

drni mad exclude interface Ten-GigabitEthernet1/0/1

drni mad exclude interface vsi-interface xxxx

#

Alternatively, use the following configuration mode:
drni mad default-action none
For a DRNI network with a VXLAN tunnel IPL, the drni mad include command is not required. If it is configured, the included interfaces might flap.
#

Configure Leaf 1-2
1. Configure a reserved VXLAN. Each group of DRNI devices requires the same reserved VXLAN configuration. As a best practice, configure all leaf devices with the same reserved VXLAN configuration.
#

reserved vxlan 60000

#

2. Configure EVPN DRNI and configure the virtual VTEP address as Loopback 2 interface IP address. #

evpn drni group 68.0.1.5

evpn global-mac 0000-0000-0001

#

3. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. #

drni system-mac 0001-0001-0001

drni system-number 2

drni system-priority 10

#

4. Create a VXLAN tunnel as the IPL between the Loopback 0 interfaces of the DR member devices.


The tunnel interface number must be within the reserved tunnel range of the controller. Otherwise, the controller will report configuration inconsistency.
#

interface Tunnel100 mode vxlan

port drni intra-portal-port 1

source 68.0.1.2

destination 68.0.1.1

tunnel tos 100

#

5. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port link-aggregation group 1

6. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable
#

#

7. Configure monitor link: the interface connected to the RR is the uplink interface, and the AC physical interface is the downlink interface.
#

monitor-link group 1

downlink up-delay 230

port Ten-GigabitEthernet1/0/1 uplink

port Ten-GigabitEthernet1/0/2 downlink

#

8. Configure Loopback 0 interface, uplink interface, and VSI interface as DRNI MAD exclude interface. #

drni mad exclude interface LoopBack0

drni mad exclude interface Ten-GigabitEthernet1/0/1

drni mad exclude interface vsi-interface xxxx

#

Alternatively, use the following configuration mode:
drni mad default-action none
For a DRNI network with a VXLAN tunnel IPL, the drni mad include command is not required. If it is configured, the included interfaces might flap.
#


Configure Leaf 2-1
1. Configure a reserved VXLAN. Each group of DRNI devices requires the same reserved VXLAN configuration. As a best practice, configure all leaf devices with the same reserved VXLAN configuration.
#

reserved vxlan 60000

#

2. Configure EVPN DRNI and configure the virtual VTEP address as the Loopback 2 interface IP address.
#

evpn drni group 68.0.2.5

evpn global-mac 0000-0000-0002

#

3. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. #

drni system-mac 0002-0002-0002

drni system-number 1

drni system-priority 10

#

4. Create a VXLAN tunnel as the IPL between the Loopback 0 interfaces of the DR member devices. The tunnel interface number must be within the reserved tunnel range of the controller. Otherwise, the controller will report configuration inconsistency.
#

interface Tunnel100 mode vxlan

port drni intra-portal-port 1

source 68.0.2.1

destination 68.0.2.2

tunnel tos 100

#

5. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port link-aggregation group 1

6. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable
#

#


7. Configure monitor link: the interface connected to the RR is the uplink interface, and the AC physical interface is the downlink interface.
#

monitor-link group 1

downlink up-delay 230

port Ten-GigabitEthernet1/0/1 uplink

port Ten-GigabitEthernet1/0/2 downlink

#

8. Configure Loopback 0 interface, uplink interface, and VSI interface as DRNI MAD exclude interface. #

drni mad exclude interface LoopBack0

drni mad exclude interface Ten-GigabitEthernet1/0/1

drni mad exclude interface vsi-interface xxxx

#

Alternatively, use the following configuration mode:
drni mad default-action none
For a DRNI network with a VXLAN tunnel IPL, the drni mad include command is not required. If it is configured, the included interfaces might flap.
#

Configure Leaf 2-2
1. Configure a reserved VXLAN. Each group of DRNI devices requires the same reserved VXLAN configuration. As a best practice, configure all leaf devices with the same reserved VXLAN configuration.
#

reserved vxlan 60000

#

2. Configure EVPN DRNI and configure the virtual VTEP address as Loopback 2 interface IP address. #

evpn drni group 68.0.2.5

evpn global-mac 0000-0000-0002

#

3. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. #

drni system-mac 0002-0002-0002

drni system-number 2

drni system-priority 10

#

4. Create a VXLAN tunnel as the IPL between the Loopback 0 interfaces of the DR member devices. The tunnel interface number must be within the reserved tunnel range of the controller. Otherwise, the controller will report configuration inconsistency.
#

interface Tunnel100 mode vxlan

port drni intra-portal-port 1

source 68.0.2.2


destination 68.0.2.1

tunnel tos 100

#

5. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port link-aggregation group 1

6. Configure the single-homed interface as an AC interface. #

interface Ten-GigabitEthernet1/0/3

port link-mode bridge

vtep access port

#

For interconnection between the single-homed server's non-DRNI leaf and the DRNI devices, the following command is required on the non-DRNI device: vxlan default-decapsulation source interface LoopBack0.
When the DRNI device configurations are not symmetric, single-homed access cannot interconnect. Workaround: configure an identical VSI on each DRNI device, as long as the VSI does not conflict with the service VSIs. For example, if DRNI-A is configured with VSI100 and DRNI-B with VSI200, they cannot connect; you can manually configure an identical VSI300 on both DRNI-A and DRNI-B, after which VSI100 and VSI200 are interconnected.
vsi drni-300
vxlan 300
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
The MAC address aging timer on the single-homed DRNI device is 26 minutes.
mac-address timer aging 1560
BUM messages received by the DRNI device from the physical tunnel are discarded. The MAC address aging timer on the single-homed DRNI device is 26 minutes, which is one minute longer than the ARP aging time.

7. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable

#

8. Configure monitor link: the interface connected to the RR is the uplink interface, and the AC physical interfaces are the downlink interfaces.
#

monitor-link group 1

downlink up-delay 230


port Ten-GigabitEthernet1/0/1 uplink

port Ten-GigabitEthernet1/0/2 downlink

port Ten-GigabitEthernet1/0/3 downlink

#

9. Configure Loopback 0 interface, uplink interface, and VSI interface as DRNI MAD exclude interface. #

drni mad exclude interface LoopBack0

drni mad exclude interface Ten-GigabitEthernet1/0/1

drni mad exclude interface vsi-interface xxxx

#

Alternatively, use the following configuration mode:
drni mad default-action none
For a DRNI network with a VXLAN tunnel IPL, the drni mad include command is not required. If it is configured, the included interfaces might flap.
#

Verify the configuration
Execute the following commands on the leaf device:

The display drni role command displays 1 Primary device and 1 Secondary device.
The display drni summary command displays the status of the IPP and DR interfaces as UP.

Examples: <leaf1_1_182.152.3.111>display drni role

DR Role priority Bridge Mac Configured role Effective role

Local 32768 90e7-1062-2aa9 Secondary Secondary

Peer 32768 00e0-fc00-6820 Primary Primary

<leaf1_1_182.152.3.111>dis drni summary

Flags: A -- Aggregate interface down, B -- Device role was None,

C -- No peer DR interface configured, D -- DRNI MAD check,

E -- Configuration consistency check failed

IPP: Tun100

IPP state (cause): UP

Keepalive link state: UP

DR interface information

DR interface DR group ID State (Cause) Remaining down time (s)

BAGG1 1 UP -

Restrictions and guidelines
1. The IPL tunnel interface must be created manually, must be a VXLAN-mode tunnel, and cannot be associated with a VSI. An automatically created VXLAN tunnel cannot act as the IPL.
2. The IPL tunnel source address must be the same as the address used when establishing the BGP peer.
3. The STP function should be disabled on the Layer 2 interface corresponding to the IPL tunnel to avoid blocking the DR interface.
4. Disable BUM suppression. Otherwise, ARP packets will be lost, resulting in traffic failure.
5. DR interfaces need to be configured with symmetric VLAN-VXLAN mapping. For single-homed devices, this configuration is not required.
6. The IPP tunnel-related interfaces need to be configured as MAD exclude interfaces, including Loopback 0 interfaces and uplink interfaces.


Manual deployment of DRNI network with an Ethernet aggregate link IPL

This network uses a dedicated physical link as the IPL and keepalive link.

For sites with only network overlay, this network mode is used as a best practice.

For sites with host overlay, this network mode is required.

Network diagram and interface description
Figure 5 DRNI network with an Ethernet aggregate link IPL

Device    Interface    IP                              Remarks
Spine     Loopback0    125.0.0.1/32                    VTEP IP
          XGE1/1/0/1   Loopback 0 interface address    Connect Leaf 1-1
          XGE1/1/0/2   Loopback 0 interface address    Connect Leaf 1-2
          XGE1/2/0/1   Loopback 0 interface address    Connect Leaf 2-1
          XGE1/2/0/2   Loopback 0 interface address    Connect Leaf 2-2
Leaf 1-1  Loopback0    68.0.1.1/32                     VTEP IP
          Loopback1    68.0.1.5/32                     DR group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
          XGE1/0/3     N/A                             AC interface, single-homed access
          XGE1/0/4     68.1.1.1/24                     Keepalive
          XGE1/0/5     N/A                             IPP
Leaf 1-2  Loopback0    68.0.1.2/32                     VTEP IP
          Loopback1    68.0.1.5/32                     DR group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
          XGE1/0/4     68.1.1.2/24                     Keepalive
          XGE1/0/5     N/A                             IPP
Leaf 2-1  Loopback0    68.0.2.1/32                     VTEP IP
          Loopback1    68.0.2.5/32                     DR group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
          XGE1/0/4     68.1.2.1/24                     Keepalive
          XGE1/0/5     N/A                             IPP
Leaf 2-2  Loopback0    68.0.2.2/32                     VTEP IP
          Loopback1    68.0.2.5/32                     DR group IP
          XGE1/0/1     Loopback 0 interface address    Connect spine
          XGE1/0/2     N/A                             AC interface, DR access
          XGE1/0/3     N/A                             AC interface, single-homed access
          XGE1/0/4     68.1.2.2/24                     Keepalive
          XGE1/0/5     N/A                             IPP

Configure EVPN
The EVPN configuration is the same as that for the DRNI network with a VXLAN tunnel IPL (the network without a dedicated peer link).

Configure DRNI
Configure Leaf 1-1
1. Execute the following command so that the AC packet matching rule on the IPL is generated by VXLAN ID mapping. After a VXLAN is created on the VTEP, an AC is automatically generated on the IPL and connected to the VSI corresponding to the VXLAN.
#

l2vpn drni peer-link ac-match-rule vxlan-mapping

#

If the device is a 125X, the l2vpn drni peer-link tunnel source X.X.X.X destination Y.Y.Y.Y command is also required; the source and destination addresses are the VTEP addresses of the local end and the opposite end of the DRNI pair, respectively (see the example after this note).
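For example, on Leaf 1-1 in this plan, the source and destination would be the local and peer VTEP addresses (a hedged instantiation of the command named above, applicable only to the 125X devices it mentions):

l2vpn drni peer-link tunnel source 68.0.1.1 destination 68.0.1.2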


2. Configure EVPN DRNI and configure the virtual VTEP address as the Loopback 1 interface IP address. For local and remote, fill in the Loopback 0 addresses of the local end and the opposite end, respectively.
#

evpn drni group 68.0.1.5

evpn global-mac 0000-0000-0001

evpn drni local 68.0.1.1 remote 68.0.1.2

#

The evpn drni local command is optional.
3. Configure an IP address for the keepalive interface.
#

interface Ten-GigabitEthernet1/0/4

port link-mode route

ip address 68.1.1.1 255.255.255.0

#

4. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. The keepalive source and destination use the keepalive interface addresses of the local and peer ends.
#

drni system-mac 0001-0001-0001

drni system-number 1

drni system-priority 10

drni keepalive ip destination 68.1.1.2 source 68.1.1.1

drni restore-delay 180

#

5. Create Layer 2 aggregate interface 5, configure it as an IPP, and turn off source MAC checking. #

interface bridge-aggregation 5

port link-type trunk

port trunk permit vlan all

link-aggregation mode dynamic

port drni intra-portal-port 1

undo mac-address static source-check enable

#

interface Ten-GigabitEthernet1/0/5

port link-type trunk

port trunk permit vlan all

port link-aggregation group 5

#

6. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge


port link-aggregation group 1

7. Configure the single-homed interface as an AC interface. #

interface Ten-GigabitEthernet1/0/3

port link-mode bridge

vtep access port

#

For interconnection between the single-homed server's non-DRNI leaf and the DRNI devices, the following command is required on the non-DRNI device: vxlan default-decapsulation source interface LoopBack0.
When the DRNI device configurations are not symmetric, single-homed access cannot interconnect. Workaround: configure an identical VSI on each DRNI device, as long as the VSI does not conflict with the service VSIs. For example, if DRNI-A is configured with VSI100 and DRNI-B with VSI200, they cannot connect; you can manually configure an identical VSI300 on both DRNI-A and DRNI-B, after which VSI100 and VSI200 are interconnected.
vsi drni-300
vxlan 300
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
The MAC address aging timer on the single-homed DRNI device is 26 minutes.
mac-address timer aging 1560
BUM messages received by the DRNI device from the physical tunnel are discarded. The MAC address aging timer on the single-homed DRNI device is 26 minutes, which is one minute longer than the ARP aging time.

8. Configure the keepalive interface and the VSI interface as MAD exclude interfaces. #

drni mad exclude interface Ten-GigabitEthernet1/0/4

drni mad exclude interface vsi-interface xxxx

#

As a best practice, use the include mode: configure the uplink and downlink interfaces as MAD included interfaces.
drni mad default-action none

drni mad include interface Ten-GigabitEthernet1/0/1

drni mad include interface Ten-GigabitEthernet1/0/2

#

9. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable

#

10. Create a VLAN interface on the IPL, configure the IP address, and enable OSPF. As a best practice, the number of VLAN interface must be consistent with the PVID of DR interface. #

interface Vlan-interface1

ip address 6.0.1.1 255.255.255.0

ospf 1 area 0.0.0.0

#


The configured IP addresses must meet two conditions: (1) they are in the same segment as the IPP address of the opposite end device of the DRNI pair, and (2) they are in a different segment from that of the VTEP. Otherwise, reliability can be compromised in the Calico container connecting DRNI scenario or the DRNI single-homed scenario.

11. If host overlay service exists, enable VRRP on the IPL VLAN interface as the gateway for host overlay. #

interface Vlan-interface1

vrrp vrid 1 virtual-ip 6.0.1.5

#

Configure Leaf 1-2
1. Execute the following command so that the AC packet matching rule on the IPL is generated by VXLAN ID mapping. After a VXLAN is created on the VTEP, an AC is automatically generated on the IPL and connected to the VSI corresponding to the VXLAN.
#

l2vpn drni peer-link ac-match-rule vxlan-mapping

#

If the device is a 125X, the l2vpn drni peer-link tunnel source X.X.X.X destination Y.Y.Y.Y command is also required; the source and destination addresses are the VTEP addresses of the local end and the opposite end of the DRNI pair, respectively.

2. Configure EVPN DRNI and configure the virtual VTEP address as the Loopback 1 interface IP address. For local and remote, fill in the Loopback 0 addresses of the local end and the opposite end, respectively.
#

evpn drni group 68.0.1.5

evpn global-mac 0000-0000-0001

evpn drni local 68.0.1.2 remote 68.0.1.1

#

The evpn drni local command is optional.
3. Configure an IP address for the keepalive interface.

#

interface Ten-GigabitEthernet1/0/4

port link-mode route

ip address 68.1.1.2 255.255.255.0

#

4. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. The source and target interface of keepalive uses the keepalive interface addresses of local and peer ends. #

drni system-mac 0001-0001-0001

drni system-number 2

drni system-priority 10

drni keepalive ip destination 68.1.1.1 source 68.1.1.2

drni restore-delay 180

#

5. Create Layer 2 aggregate interface 5, configure it as an IPP, and turn off source MAC checking. #

interface bridge-aggregation 5


port link-type trunk

port trunk permit vlan all

link-aggregation mode dynamic

port drni intra-portal-port 1

undo mac-address static source-check enable

#

interface Ten-GigabitEthernet1/0/5

port link-type trunk

port trunk permit vlan all

port link-aggregation group 5

#

6. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port link-aggregation group 1

7. Configure the keepalive interface and the VSI interface as MAD exclude interfaces. #

drni mad exclude interface Ten-GigabitEthernet1/0/4

drni mad exclude interface vsi-interface xxxx

#

As a best practice, use the include mode: configure the uplink and downlink interfaces as MAD included interfaces.
drni mad default-action none

drni mad include interface Ten-GigabitEthernet1/0/1

drni mad include interface Ten-GigabitEthernet1/0/2

#

8. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable

#

9. Create a VLAN interface on the IPL, configure the IP address, and enable OSPF. As a best practice, the number of VLAN interface must be consistent with the PVID of DR interface. #

interface Vlan-interface1

ip address 6.0.1.2 255.255.255.0

ospf 1 area 0.0.0.0

#

The configured values of IP addresses must meet two conditions: (1) in the same segment as the IP address of the opposite end device on DRNI. (2) in the different segment than that of VTEP. Otherwise, the reliability can be compromised in the calico container connecting DRNI scenario or the DRNI single-homed scenario.


If host overlay service exists, enable VRRP on the IPL VLAN interface as the gateway for host overlay.

#

interface Vlan-interface1

vrrp vrid 1 virtual-ip 6.0.1.5

#

Configure Leaf 2-1
1. Execute the following command so that the AC packet matching rule on the IPL is generated by VXLAN ID mapping. After a VXLAN is created on the VTEP, an AC is automatically generated on the IPL and connected to the VSI corresponding to the VXLAN.
#

l2vpn drni peer-link ac-match-rule vxlan-mapping

#

If the device is a 125X, the l2vpn drni peer-link tunnel source X.X.X.X destination Y.Y.Y.Y command is also required; the source and destination addresses are the VTEP addresses of the local end and the opposite end of the DRNI pair, respectively.

2. Configure EVPN DRNI and configure the virtual VTEP address as the Loopback 1 interface IP address. For local and remote, fill in the Loopback 0 addresses of the local end and the opposite end, respectively.
#

evpn drni group 68.0.2.5

evpn global-mac 0000-0000-0002

evpn drni local 68.0.2.1 remote 68.0.2.2

#

The evpn drni local command is optional.
3. Configure an IP address for the keepalive interface.

#

interface Ten-GigabitEthernet1/0/4

port link-mode route

ip address 68.1.2.1 255.255.255.0

#

4. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. The source and target interface of keepalive uses the keepalive interface addresses of local and peer ends. #

drni system-mac 0002-0002-0002

drni system-number 1

drni system-priority 10

drni keepalive ip destination 68.1.2.2 source 68.1.2.1

drni restore-delay 180

#

5. Create Layer 2 aggregate interface 5, configure it as an IPP, and turn off source MAC checking. #

interface bridge-aggregation 5

port link-type trunk

port trunk permit vlan all

link-aggregation mode dynamic


port drni intra-portal-port 1

undo mac-address static source-check enable

#

interface Ten-GigabitEthernet1/0/5

port link-type trunk

port trunk permit vlan all

port link-aggregation group 5

#

6. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port link-aggregation group 1

7. Configure the keepalive interface and the VSI interface as MAD exclude interfaces. #

drni mad exclude interface Ten-GigabitEthernet1/0/4

drni mad exclude interface vsi-interface xxxx

#

As a best practice, use the include mode: configure the uplink and downlink interfaces as MAD included interfaces.
drni mad default-action none

drni mad include interface Ten-GigabitEthernet1/0/1

drni mad include interface Ten-GigabitEthernet1/0/2

#

8. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable

#

9. Create a VLAN interface on the IPL, configure the IP address, and enable OSPF. As a best practice, the number of VLAN interface must be consistent with the PVID of DR interface. #

interface Vlan-interface1

ip address 6.0.2.1 255.255.255.0

ospf 1 area 0.0.0.0

#

The configured values of IP addresses must meet two conditions: (1) in the same segment as the IPP address of the opposite end device on DRNI. (2) in the different segment than that of VTEP. Otherwise, the reliability can be compromised in the calico container connecting DRNI scenario or the DRNI single-homed scenario. If host overlay service exists, enable VRRP on the IPL VLAN interface as the gateway for host overlay.

#


interface Vlan-interface1

vrrp vrid 1 virtual-ip 6.0.2.5

#

Configure Leaf 2-2
1. Execute the following command so that the AC packet matching rule on the IPL is generated by VXLAN ID mapping. After a VXLAN is created on the VTEP, an AC is automatically generated on the IPL and connected to the VSI corresponding to the VXLAN.
#

l2vpn drni peer-link ac-match-rule vxlan-mapping

#

If the device is a 125X, the l2vpn drni peer-link tunnel source X.X.X.X destination Y.Y.Y.Y command is also required; the source and destination addresses are the VTEP addresses of the local end and the opposite end of the DRNI pair, respectively.

2. Configure EVPN DRNI and configure the virtual VTEP address as the Loopback 1 interface IP address. For local and remote, fill in the Loopback 0 addresses of the local end and the opposite end, respectively.
#

evpn drni group 68.0.2.5

evpn global-mac 0000-0000-0002

evpn drni local 68.0.2.2 remote 68.0.2.1

#

The evpn drni local command is optional.
3. Configure an IP address for the keepalive interface.

#

interface Ten-GigabitEthernet1/0/4

port link-mode route

ip address 68.1.2.2 255.255.255.0

#

4. Configure the DR system. In the DR group, the same system MAC address and different system numbers must be configured. The source and target interface of keepalive uses the keepalive interface addresses of local and peer ends. #

drni system-mac 0002-0002-0002

drni system-number 2

drni system-priority 10

drni keepalive ip destination 68.1.2.1 source 68.1.2.2

drni restore-delay 180

#

5. Create Layer 2 aggregate interface 5, configure it as an IPP, and turn off source MAC checking. #

interface bridge-aggregation 5

port link-type trunk

port trunk permit vlan all

link-aggregation mode dynamic

port drni intra-portal-port 1

undo mac-address static source-check enable

#


interface Ten-GigabitEthernet1/0/5

port link-type trunk

port trunk permit vlan all

port link-aggregation group 5

#

6. Create Layer 2 aggregate interface 1, add a physical interface to it, configure it to dynamic mode, join the DR group, and configure it as an AC interface. #

interface Bridge-Aggregation1

link-aggregation mode dynamic

port drni group 1

vtep access port

#

interface Ten-GigabitEthernet1/0/2

port link-mode bridge

port link-aggregation group 1

7. Configure the single-homed interface as an AC interface. #

interface Ten-GigabitEthernet1/0/3

port link-mode bridge

vtep access port

#

For interconnection between the single-homed server's non-DRNI leaf and the DRNI devices, the following command is required on the non-DRNI device: vxlan default-decapsulation source interface LoopBack0.
When the DRNI device configurations are not symmetric, single-homed access cannot interconnect. Workaround: configure an identical VSI on each DRNI device, as long as the VSI does not conflict with the service VSIs. For example, if DRNI-A is configured with VSI100 and DRNI-B with VSI200, they cannot connect; you can manually configure an identical VSI300 on both DRNI-A and DRNI-B, after which VSI100 and VSI200 are interconnected.
vsi drni-300
vxlan 300
evpn encapsulation vxlan
route-distinguisher auto
vpn-target auto export-extcommunity
vpn-target auto import-extcommunity
The MAC address aging timer on the single-homed DRNI device is 26 minutes.
mac-address timer aging 1560
BUM messages received by the DRNI device from the physical tunnel are discarded. The MAC address aging timer on the single-homed DRNI device is 26 minutes, which is one minute longer than the ARP aging time.

8. Configure the keepalive interface and the VSI interface as MAD exclude interfaces. #

drni mad exclude interface Ten-GigabitEthernet1/0/4

drni mad exclude interface vsi-interface xxxx

#

As a best practice, use the include mode: configure the uplink and downlink interfaces as MAD included interfaces.
drni mad default-action none


drni mad include interface Ten-GigabitEthernet1/0/1

drni mad include interface Ten-GigabitEthernet1/0/2

#

9. Turn off the static source MAC check at the uplink interface. #

interface Ten-GigabitEthernet1/0/1

undo mac-address static source-check enable

#

10. Create a VLAN interface on the IPL, configure the IP address, and enable OSPF. As a best practice, the number of VLAN interface must be consistent with the PVID of DR interface. #

interface Vlan-interface1

ip address 6.0.2.2 255.255.255.0

ospf 1 area 0.0.0.0

#

The configured values of IP addresses must meet two conditions: (1) in the same segment as the IPP address of the opposite end device on DRNI. (2) in the different segment than that of VTEP. Otherwise, the reliability can be compromised in the calico container connecting DRNI scenario or the DRNI single-homed scenario. If host overlay service exists, enable VRRP on the IPL VLAN interface as the gateway for host overlay.

#

interface Vlan-interface1

vrrp vrid 1 virtual-ip 6.0.2.5

#


Verify the configuration
Execute the following commands on the leaf device:

The display drni role command displays 1 Primary device and 1 Secondary device.
The display drni summary command displays the status of the IPP and DR interfaces as UP.

Examples: <leaf1_1_182.152.3.111>display drni role

DR Role priority Bridge Mac Configured role Effective role

Local 32768 90e7-1062-2aa9 Secondary Secondary

Peer 32768 00e0-fc00-6820 Primary Primary

<leaf1_1_182.152.3.111>dis drni summary

Flags: A -- Aggregate interface down, B -- Device role was None,

C -- No peer DR interface configured, D -- DRNI MAD check,

E -- Configuration consistency check failed

IPP: Tun100

IPP state (cause): UP

Keepalive link state: UP

DR interface information

DR interface DR group ID State (Cause) Remaining down time (s)

BAGG1 1 UP -

Restrictions and guidelines
1. Execute the drni mad exclude interface command to configure the keepalive interface as a reserved interface.
2. DR interfaces and single-homed interfaces need to be configured with symmetric VLAN-VXLAN mapping.
3. The dynamic AC access mode under the IPP of a DRNI network with an Ethernet aggregate link IPL does not support Ethernet.


Automated deployment of DRNI network with an Ethernet aggregate link IPL
Network diagram and interface description

(Figure: automated DRNI deployment network, showing Spine 248.69, Leaf 72, Leaf 73, Border 172, Border 173, Router 99.244, and Server 240.99 together with their interconnecting interfaces.)

The test network description and the subsequent network service configuration examples are also based on this network diagram. At least two connecting links are required between the two leaf/border devices: one serves as the keepalive link and the other as the IPP.

Address pool and VLAN pool planning
Table 1 Plans for address pools

Type                                Network address
Virtual device management network   13.6.6.0/24
Physical management network         Management network IP pool: 11.1.0.0/24
                                    Automated deployment IP pool: 21.1.0.0/24
Physical VTEP network               31.1.0.0/24
Underlay interconnection network    41.1.0.0/24

Value range of VLAN pool plan is 100-200.


Basic configuration examples
Install vDHCP components

Navigate to the System > Deployment page of IMC Orchestrator to install vDHCP components. Follow the wizard to select the supporting installation package and configure the appropriate parameters.

Figure 6 Component installation page

Figure 7 Uploading the installation package


Figure 7 Select and upload the matching installation package

Figure 8 Select components

Figure 9 Parameter configuration

Click Next.

Figure 10 Network configuration


Figure 11 Network binding

Select the newly created network.

Figure 12 Confirm parameters

After the parameters are confirmed, click Deploy to start the installation and deployment of components.

Create a vDHCP server
Navigate to the Automation > Fabrics > Basic Services > DHCP page to create a DHCP server. The IP addresses are the primary and secondary DHCP IP addresses confirmed during the installation of the vDHCP components. The default username and password of Netconf are both admin.

Enable TFTP and Syslog services
Navigate to the Automation > Fabrics > Parameter Settings > Controller Global Settings page to enable the TFTP and Syslog services, set the service IP address to the cluster IP address, and then click Apply.


Figure 13 Enable TFTP and Syslog services.

Create a fabric
Navigate to the Automation > Fabrics > Fabric Configuration page to add a fabric, fill in the corresponding AS number, and enable EVPN.

Figure 14 Create a fabric

Create VLAN pools
Navigate to the Automation > Resource Pools > VNID Pools > VLANs page to create a VLAN pool as the tenant bearer network.


Figure 15 Create VLAN pools

Create VXLAN pools
Navigate to the Automation > Resource Pools > VNID Pools > VLANs page to create a VXLAN pool.

Figure 16 Create VXLAN pools

Create address pools
Navigate to the Automation > Resource Pools > IP Address Pools page to create IP address pools of the corresponding types as per the address plan. The management network IP pool, physical VTEP network IP pool, and underlay interconnection network IP pool are required for automated deployment.

Create border device groups
Navigate to the Automation > Resource Pools > Devices > Border Device Groups page to add a device group for the fabric. The Operation Mode is set to a service gateway group. The Connection Mode is VLAN from different segments or VLAN from the same segment. Border gateway and DC interconnection or fabric interconnection must be selected at Position. DRNI must be selected as the HA Deployment Mode. Address Pool and VLAN Pool use the default values, and Device Group Members are added after the border devices are deployed. The Device Group Name, Remote Device Group, Position, and Connection Mode cannot be modified after creation.


Figure 17 Create device groups

Create a VDS

Click the Automation > Public Network Settings > Virtual Distributed Switch menu, modify VDS1, add the fabric to the bearer fabric, and add fabric connections.

Figure 18 Add fabric to the VDS

Create a device control protocol template

Navigate to the Automation > Resource Pools > Auto Deployment > Control Protocol Templates page. Click Add to create a device control protocol template, or use the default device control protocol template. Because the template configuration does not support weak passwords such as admin, you must modify the password of the default device control protocol template. (Password complexity requirements: 10 to 63 characters, containing at least two of the following types: digits, uppercase letters, lowercase letters, and special characters. Chinese characters, question marks (?), and spaces are not allowed, and the password cannot contain the username or the reversed username. For example, Qwert@1234.)

Figure 19 Add the device control protocol template

Access device automation deployment, supporting DRNI scenario

Create an automation template

Figure 20 Select Fabric


Navigate to the Automation > Resource Pools > Auto Deployment > Automation Templates page, where the automation template information is displayed. Click Add, select a fabric in the pop-up dialog box, and click Apply to open the device automation deployment page.

Figure 21 Basic configuration

On the Basic Device page, select the underlay protocol; OSPF, IS-IS, and BGP are available. After the selection is complete, click Apply in the upper-right corner to complete the configuration and go to the address pool settings page.

Figure 22 Configure address pools

On the IP Pool Settings page, you can configure the DHCP server, VTEP IP pool, management IP pool, automated deployment IP pool, and underlay interconnection IP pool (supported only when the underlay protocol is BGP). The DHCP server assigns addresses to the devices while they are being deployed. The VTEP IP pool assigns VTEP IP addresses to the devices. The management network IP pool assigns the management IP addresses required by the devices. The automated deployment IP pool assigns the IP addresses required by the devices during deployment; these addresses are automatically released after deployment is finished. The underlay interconnection IP pool assigns an IP address to the Loopback 1 interface of each device, which is used for routing interconnection between the spine and leaf devices. When a default DHCP server exists, the DHCP server automatically binds to it. The default DHCP server can be configured on the Basic Network > Network > Basic Service page.


Figure 23 Set up templates

1. On the Device Configuration Template page, you need to configure the Basic Configuration of the template as well as the Spine template, Aggregation template, Leaf template, and so on. In this example, configure the leaf template first. You can select the Software Version and Software Patch from the pull-down menus or upload them (optional). During the automated deployment, the device version is upgraded to the one configured in the template. Currently, only device versions in .ipe format and software patches in .bin format are supported.

2. The default value of Border IRF Stacking is No. If Border IRF Stacking is enabled, Border MAC is required.

3. Border MAC is the planned bridge MAC of the border devices. Multiple MACs are allowed, separated by commas (,).

4. The default value of IRF Stacking is Yes. If the role is deployed in an IRF stack, the physical IRF links must be connected first. If the role is not deployed in an IRF stack, set this value to No.

5. The default value of Enable Whitelist is Yes. When the value is Yes, the automation deployment is only available for devices on the Device List. If the value of Enable Whitelist is changed to No, all devices can be automatically deployed based on their configured roles.

6. OSPF Process ID is the OSPF process number (optional). If no number is entered, the default value 1 is used.

7. Set Distributed Relay to Yes to turn on the DRNI feature.

8. Set Link Aggregation Setup on DR Interfaces to Automatic. If Manual is selected, the intra-portal aggregation group and DR group must be created manually. When Automatic is selected, the DR interfaces are automatically aggregated through LLDP detection after the device automated deployment.

9. DR Group ID Range must be entered in the format x-y, with values in the range 1-1024. DR group IDs are automatically created from this range according to the rules.

10. In the Command Segment dialog box, you can enter custom commands (optional). Add command lines as appropriate and end the segment with a hash sign (#); a sample segment is shown after this list.

11. Click Apply to complete the automation template creation after the configuration is finished.
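For reference, a custom command segment might look like the following minimal sketch. The NTP commands and the server address are illustrative assumptions only, not values from this guide; replace them with whatever commands your deployment requires, and end the segment with a hash sign (#).

#
 ntp-service enable
 ntp-service unicast-server 11.1.0.100
#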

Create a device list

1. Click the Basic Network > Resources > Automated Deployment menu and then the Device List tab to open the device list page. Click Device Info to open the device information page and perform the following operations:

2. Click Add, specify the Device Serial Number, Device Role, Device Type, IRF Stacking Info, Management IP, and other parameters in the pop-up dialog box, and click Apply to add the device to the device list. When IS-IS is selected as the underlay protocol, the network entity name and Loopback 0 interface IP address must be specified. When Distributed Relay is set to Yes and the conditions are met, the DR group can be created automatically when the devices are automatically deployed. Add the information of the two leaf devices in turn. The Device Serial Number can be viewed by executing the display license device-id slot x command.

Device startup without a startup configuration file

Clear the configuration of the two devices to be automatically deployed and restart the devices for automated deployment.
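A minimal sketch of this step on a Comware-style CLI is shown below. The device prompt is a placeholder, and the device typically asks you to confirm each command before it takes effect.

<Leaf1> reset saved-configuration
<Leaf1> reboot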

Figure 24 Device startup without a startup configuration file

Automated deployment succeeded

When the device is automatically deployed, its status is active.

Automatic creation of the DR service

After the device becomes active, the DRNI DR link is automatically created based on the collected LLDP information. Navigate to the Automation > Resource Pools > Distributed Relay (DR) Systems > DR Group ID Ranges page to view the automatically created DR group ID.
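To double-check the DR system from the device side as well, a display command can be used. The command below is an assumed Comware-style example and is not taken from this guide; the exact command set depends on the platform.

<Leaf1> display drni summary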

Figure 25 Automatically created DR group ID


Figure 26 Automatically created DR group

Figure 27 DR group details

Figure 28 DR group details


Figure 29 DR group details

Device configuration issuance

This section describes only the key configuration. The configurations issued to Leaf 1 and Leaf 2 are similar; the configuration on Leaf 1 is shown below.

[Leaf 1]: #

l2vpn enable

l2vpn statistics interval 30

l2vpn drni peer-link ac-match-rule vxlan-mapping

vxlan tunnel arp-learning disable

vxlan tunnel nd-learning disable

evpn drni group 31.1.0.16

evpn drni local 31.1.0.13 remote 31.1.0.12

evpn global-mac 88df-9e3c-8f41

#

interface Bridge-Aggregation256

description SDN_LAGG

port link-type trunk

port trunk permit vlan all

port trunk pvid vlan 4094

link-aggregation mode dynamic

port drni intra-portal-port 1

undo mac-address static source-check enable

#

interface Bridge-Aggregation257

description SDN_LAGG

port link-type trunk

port trunk permit vlan 1

link-aggregation mode dynamic

port drni group 50

vtep access port

#

interface LoopBack0

ip address 31.1.0.13 255.255.255.255

#


interface LoopBack2

ip address 31.1.0.16 255.255.255.255

ospf 1 area 0.0.0.0

#

interface M-GigabitEthernet0/0/0

ip binding vpn-instance mgmt

ip address 11.1.0.72 255.255.255.0

undo lldp enable

dhcp client identifier hex 0288df9e3c9483

#

interface Ten-GigabitEthernet1/1/5

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

undo mac-address static source-check enable

#

interface Ten-GigabitEthernet1/1/9

port link-mode route

ip address unnumbered interface LoopBack0

ospf network-type p2p

ospf 1 area 0.0.0.0

lldp compliance admin-status cdp txrx

lldp management-address arp-learning

lldp tlv-enable basic-tlv management-address-tlv interface LoopBack0

undo mac-address static source-check enable

#

interface Ten-GigabitEthernet1/2/6

port link-mode route

ip address 31.1.0.15 255.255.255.0

lldp compliance admin-status cdp txrx

#

interface Ten-GigabitEthernet1/1/7

port link-mode bridge

description SDN_LAGG

port link-type trunk

port trunk permit vlan 1

lldp compliance admin-status cdp txrx

vtep access port

port link-aggregation group 257

#

interface Ten-GigabitEthernet1/2/2

port link-mode bridge

description SDN_LAGG

port link-type trunk


port trunk permit vlan all

port trunk pvid vlan 4094

lldp compliance admin-status cdp txrx

port link-aggregation group 256

#

interface Ten-GigabitEthernet1/2/4

port link-mode bridge

description SDN_LAGG

port link-type trunk

port trunk permit vlan all

port trunk pvid vlan 4094

lldp compliance admin-status cdp txrx

port link-aggregation group 256

#

interface Ten-GigabitEthernet1/2/5

port link-mode bridge

description SDN_LAGG

port link-type trunk

port trunk permit vlan all

port trunk pvid vlan 4094

lldp compliance admin-status cdp txrx

port link-aggregation group 256

#

drni restore-delay 180

drni system-mac 88df-9e3c-9483

drni system-number 2

drni system-priority 10

drni mad default-action none

drni keepalive ip destination 31.1.0.14 source 31.1.0.15 vpn-instance auto-online-mlag

#

bgp 65535

non-stop-routing

router-id 31.1.0.12

peer 31.1.0.10 as-number 65535

peer 31.1.0.10 connect-interface LoopBack0

peer 31.1.0.11 as-number 65535

peer 31.1.0.11 connect-interface LoopBack0

#

address-family l2vpn evpn

peer 31.1.0.10 enable

peer 31.1.0.11 enable


Border device automation deployment, supporting DRNI scenario

Create an automation template

Navigate to the Automation > Resource Pools > Auto Deployment > Automation Templates page to modify the created template and configure the spine template.

Figure 30 Create a spine template

Create a device list

Click Basic Network > Resources > Auto Deployment > Device List > Device Info and click Add to create the device list. Add the information of the two border devices in turn.

Figure 31 Add the Device List 1


Figure 32 Add Device List 2

Navigate to the Basic Network > Resources > Auto Deployment > Device List > DR Info page. Click Add to create the DR info. The device interface corresponds to the uplink interface of the border device.

Device startup without a startup configuration file

Clear the configuration of the devices and restart them for automated deployment.


Figure 33 Device startup without a startup configuration file

Automated deployment succeeded

When the device is automatically deployed, its status is still inactive, because the device is not yet in the border device group.

Add to a border device group

Navigate to the Automation > Resource Pools > Devices > Border Device Group page. Click the created DRNI border device group and add the two spine devices deployed through automation as device group members. The spine devices are added to the border device group after the data synchronization status changes to active.

Figure 34 Device startup without a startup configuration file

Figure 35 After being added to the border device group, the data synchronization status of the spine devices is Active


Automatic creation of the DR service

After the device status becomes active, the DRNI DR link is automatically created based on the collected LLDP information and the DR link added by the user. Click Basic Network > Resource > DR > DR Group ID to view the automatically created DR group ID. Navigate to the Basic Network > Resource > DR > DR Group page to view the automatically created DR group.

Figure 36 Automatically created DR group ID

Figure 37 Automatically created DR group

Figure 38 DR group details


Figure 39 DR group details

Device configuration issuance

spine-border-mlag.txt leaf-border-mlag.txt

Compared with the configuration issued to access devices deployed through automation with DRNI support, the configuration issued to border devices differs mainly as follows: 1) For access devices with DRNI support, the evpn drni local 31.1.0.13 remote 31.1.0.12 command is issued; if this command is configured, it might cause packet forwarding failure. 2) For border devices with DRNI support, the nexthop evpn-drni group-address command is issued in the BGP l2vpn evpn view; when this command is used, the next hop of advertised EVPN routes is changed to the virtual ED address of the distributed relay system. This command is not issued for access devices. The rest of the configuration is similar for both; see the excerpt below.
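For quick comparison, the two differing stanzas, excerpted from the Leaf 1 and Border 1 configurations shown in this document, are:

Leaf 1 (access device):
 evpn drni group 31.1.0.16
 evpn drni local 31.1.0.13 remote 31.1.0.12

Border 1:
 bgp 65535
  address-family l2vpn evpn
   nexthop evpn-drni group-address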

This section describes only the key configuration. The configurations issued to Border 1 and Border 2 are similar; the configuration on Border 1 is shown below.

[Border 1]: #

l2vpn enable

l2vpn drni peer-link ac-match-rule vxlan-mapping

vxlan tunnel arp-learning disable

vxlan tunnel nd-learning disable

evpn drni group 31.1.0.19

evpn global-mac f010-90ba-d89b

#

interface Bridge-Aggregation256

description SDN_LAGG

port link-type trunk

port trunk permit vlan all

port trunk pvid vlan 4094

link-aggregation mode dynamic

port drni intra-portal-port 1

undo mac-address static source-check enable

#

interface Bridge-Aggregation257

description SDN_LAGG

port link-type trunk


port trunk permit vlan 1

link-aggregation mode dynamic

port drni group 20

#

interface LoopBack0

ip address 31.1.0.10 255.255.255.255

#

interface LoopBack2

ip address 31.1.0.19 255.255.255.255

ospf 1 area 0.0.0.0

#

interface HundredGigE1/0/25

port link-mode route

ip address 31.1.0.18 255.255.255.0

#

interface Twenty-FiveGigE1/0/5

port link-mode bridge

description SDN_LAGG

port link-type trunk

port trunk permit vlan 1

speed 10000

port link-aggregation group 257

#

interface Twenty-FiveGigE1/0/9

port link-mode bridge

description SDN_LAGG

port link-type trunk

port trunk permit vlan all

port trunk pvid vlan 4094

speed 1000

port link-aggregation group 256

#

interface Twenty-FiveGigE1/0/11

port link-mode bridge

description SDN_LAGG

port link-type trunk

port trunk permit vlan all

port trunk pvid vlan 4094

speed 1000

port link-aggregation group 256

#

drni restore-delay 180

drni system-mac f010-90ba-ffe8

drni system-number 2

drni system-priority 10

drni mad default-action none

drni keepalive ip destination 31.1.0.17 source 31.1.0.18 vpn-instance auto-online-mlag

#


bgp 65535

non-stop-routing

router-id 31.1.0.10

group evpn internal

peer evpn connect-interface LoopBack0

peer 31.1.0.12 group evpn

peer 31.1.0.13 group evpn

#

address-family l2vpn evpn

undo policy vpn-target

nexthop evpn-drni group-address

peer evpn enable

peer evpn next-hop-local

peer evpn reflect-client


Restrictions

1. Restriction: Because the management interface does not provide high reliability, if the management interface of a DRNI member device fails, the device is no longer incorporated by VCF.
   Measure: None.

2. Restriction: For the DRNI network of the 12900E product with a VXLAN tunnel IPL, traffic is cut off while the downlink aggregation members are switching over.
   Measure: The problem will be resolved in future updates.

3. Restriction: Spine devices cannot perform automated faulty-device replacement.
   Measure: Configure the faulty spine device as an access device.

4. Restriction: When IRF devices are deployed, IMC Orchestrator does not proactively remove redundant devices.
   Measure: Manually remove the redundant devices.

5. Restriction: In the single-homed scenario, the vxlan default-decapsulation source interface LoopBack0 command must be configured for the DRNI network on border devices; otherwise, the border device cannot connect to the DRNI network through a single-homed virtual network. (A configuration sketch follows this table.)
   Measure: For border devices that do not support the above command, manually configure the VSI.

6. Restriction: A DRNI network with an Ethernet aggregate link IPL supports neither layered interface binding nor VLAN-VXLAN mapping binding to interfaces.
   Measure: None.

7. Restriction: Leaf devices in a DRNI network with a VXLAN tunnel IPL do not support access to the overlay computing nodes of the host.
   Measure: Use a DRNI network with an Ethernet aggregate link IPL or an IRF network to access the overlay computing nodes of the host.

8. Restriction: The drni mad include and drni mad default-action none commands are supported on 12900E, 5944, and 5945 switches.
   Measure: For switches that do not support the include command, execute the exclude command to exclude the VSI interfaces.

9. Restriction: IMC Orchestrator does not support faulty-device replacement for spine devices.
   Measure: Configure the faulty spine device as an access device.

10. Restriction: 5944/5945 devices do not support NETCONF issuance of the vxlan default-decapsulation command, so when automated deployment of the DRNI underlay is in progress, the issuance fails and deviations might appear during reviews.
    Measure: As a best practice, configure DRNI on 5944/5945 devices manually.
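A minimal sketch of the command from restriction 5 is shown below. It assumes a Comware-style CLI where the command is entered in system view; verify the view and platform support before applying it.

system-view
 vxlan default-decapsulation source interface LoopBack0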


Guidelines

1. You need to specify the router ID of OSPF and BGP manually; otherwise, the DR member devices might use the virtual IP address as the router ID, resulting in conflicts.

2. The L3VNI interface MAC address must be configured consistently within the DR system; otherwise, traffic forwarded through a single device cannot be interconnected. Command: evpn global-mac.

3. If a DRNI member device has single-homed services on different VXLANs, the source MAC check must be removed from the configuration of the physical interfaces connecting the leaf device and the RR device.

4. As a best practice, do not configure the DRCP timeout as the short timeout during a restart of the DRNI process or before an ISSU upgrade; otherwise, traffic might be interrupted.

5. Before removing the peer link from the network, configure the IPL tunnel Layer 3 interface with drni mad exclude. If the interface has already entered the MAD down state, it cannot come up even if the drni mad exclude command is executed.

6. When there is a large amount of ARP on the device, the reset arp operation might cause DRNI to go down.

7. As a best practice, to prevent ARP synchronization failure, configure consistent ARP aging times on the devices at both ends of the DRNI network.

8. To interconnect with a leaf device's single-homed interface in the DRNI network, configure the vxlan default-decapsulation source interface LoopBack0 command on non-DRNI devices (such as border devices).

9. Because the 12900E device does not learn MAC addresses through ARP messages, traffic carried by single-homed tunnels is treated as BUM traffic and discarded. Configuring inbound MAC learning prevents this problem. Command: mac-address mac-learning ingress.

10. As a best practice, disable STP on DR interfaces.

11. If a module failure on the primary device's interfaces causes the IPL to split, the secondary device stops processing services after its interfaces enter the MAD down state. As a best practice, connect the uplinks to multiple modules to ensure reliability.

12. The drni system-priority values of the DRNI member devices must be the same, and the drni role priority values must be different, indicating the priority of the primary and secondary members.

13. For IPP aggregate interfaces with an Ethernet aggregate link IPL, the undo mac-address static source-check enable command is required. For uplink Layer 3 interfaces with a VXLAN tunnel IPL (that is, the physical interfaces that the tunnel interfaces traverse), the undo mac-address static source-check enable command is also required.

14. For a DRNI network without a peer link on leaf devices, monitor-link is required. When all the uplink interfaces on a leaf device are down, the downlink interfaces can be shut down manually to prevent VMs from continuing to send traffic to the leaf device.

15. For a DRNI network of leaf devices, if one device enables STP and the other disables STP globally, downlink aggregation fails at the aggregate interface.

16. For a DRNI single-homed network with an Ethernet aggregate link IPL, the evpn drni local 192.0.0.3 remote 192.0.0.4 command is required to facilitate the establishment of tunnels using the physical addresses (Loopback 0).

17. In a 5944 DRNI network, when the IPP physical link goes down and comes back up, packets are lost for about 20 seconds. Manually execute the drni mad exclude command on all VSI interfaces (important); otherwise, when the peer link fault is cleared, traffic is cut off for tens of seconds.

18. When spine devices are restarted, leaf device flapping occurs in a DRNI network with a VXLAN tunnel IPL, because the leaf devices in the DRNI network are not configured with MAD exclude interfaces to exclude the uplink interfaces and Loopback 0 interfaces.

19. For underlay automated deployment, the devices restart twice in the following situations. The first restart is performed manually for automated deployment, and the second restart puts the settings into effect. (a) Software version upgrade or patch upgrade. (b) The secondary device restarts automatically when both devices perform IRF setup. (c) The ECMP of the device is modified, that is, the ECMP set in the device configuration template is not consistent with that of the device. (d) The mode of the device (switch mode or hardware resource) is modified, that is, the mode set in the device configuration template is not the same as that on the device.

20. When IS-IS is selected as the underlay protocol, the network entity name and Loopback 0 interface IP address are required for switches in the whitelist; otherwise, the device cannot be incorporated by IMC Orchestrator after it is automatically deployed.

21. Link aggregation between the device and the access server requires the LLDP function on the physical NIC of the corresponding server.

22. IRF is automatically configured during underlay automated deployment. To increase IRF reliability, as a best practice, configure three links between the two IRF devices, because one of the links acts as the BFD MAD detection link.

23. If an incorrect version is loaded onto the device during underlay automated deployment through a device configuration template that specifies a software version or software patch, the automated deployment fails (due to mismatched versions) and the device cannot be incorporated by IMC Orchestrator.

24. If all the IP addresses in the DHCP IP pool are assigned, when one of the devices goes offline and the underlay is automatically deployed again, the device temporarily cannot obtain an IP address; it takes about one hour to return to normal.

25. To perform underlay automated deployment with IMC Orchestrator's vDHCP as the server that assigns IP addresses, you must ensure that it is the only DHCP server in the environment; otherwise, devices fail to obtain addresses after automated deployment, whether or not it is a single-fabric environment. If this cannot be guaranteed, you can execute the dhcp relay command to specify the DHCP server.

26. If a device that has been incorporated by IMC Orchestrator after automated deployment needs to be deployed again, the device must first be removed from the list in the device resource menu of IMC Orchestrator; otherwise, the device ends up in the unincorporated group due to address conflicts and cannot be automatically replaced in the new automated deployment.

27. Unknown unicast packets received by the DRNI device from the physical tunnel are discarded. The MAC address aging timer on the single-homed DRNI device is 26 minutes, which is one minute longer than the ARP aging time. That is: mac-address timer aging 1560 (see the sketch after this list).

28. In a network with an Ethernet aggregate link IPL, when all DRNI access is single-homed, the MAD DOWN action does not take effect; that is, when the IPL goes down, the MAD included interfaces do not go down. If the DRNI network consists of DR interfaces or a mixture of DR and single-homed access, the MAD DOWN action takes effect.

29. As a best practice, configure routed interfaces between spine and leaf devices. If VLAN interfaces are used, specific configuration is needed for 12900E products, because by default the 12900E does not tag packets forwarded through a VLAN interface. The specific configuration is to configure vxlan ip-forwarding vxlan tagged in system view, or to tag the PVID on the interconnection interface.

30. The virtual IP of DRNI is configured on the Loopback 2 interface, because the underlay network configuration of IMC Orchestrator uses the Loopback 1 interface for other purposes.

31. For the underlay automated issuance function, devices of the "border" type are all named "border". When there are multiple border devices in one network, their names conflict and are difficult to differentiate. The options are as follows (choose one):

    1. Before underlay automated deployment, as a best practice, configure the Device List to specify the names of the border devices.

    2. After underlay automated deployment, log in to the devices through SSH, enter system view, and execute the sysname XXX command to modify the device names.
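A minimal sketch illustrating a few of the commands referenced above (the MAC aging timer from guideline 27, disabling STP on a DR interface from guideline 10, and renaming a device from guideline 31) is shown below. It assumes a Comware-style CLI; the device name and interface number are placeholders based on the examples in this document, and command availability varies by platform.

system-view
 sysname border1
 mac-address timer aging 1560
 interface Bridge-Aggregation257
  undo stp enable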