VXLAN BGP-EVPN with Cumulus + NXOS

In one of my previous blogs I outlined the basic configuration required for a simple VXLAN deployment between two Cisco Nexus 9000v switches. The overall aim of extending layer 2 across a layer 3 backbone was achieved; however, as is the default behaviour of VXLAN with no control plane mechanism, the solution still relied on flood-and-learn behaviour to propagate MAC addresses across the fabric. This writeup aims to outline the additional constructs and configuration required to bring some pseudo-intelligence into the solution by leveraging EVPN with MP-BGP to control the distribution of this traffic and increase the scalability of the solution.

I was also curious to get to grips with Cumulus Linux and test the interoperability of the BGP-EVPN solution between disparate vendors. The lab is built around two NX-OSv 9ks in the spine layer, plus two NX-OSv 9ks and a Cumulus VX in the leaf layer.

Figure A: Lab High level Overview

Lab Devices Outlined Below:

  • Cisco NX-OSv 9000 9300v 9.3.3 (Spine A & B, Leaf A & B)
  • Cumulus VX 4.3.0 (Leaf C)
  • Cisco IOU-L3 (For all end hosts)

Device Loopbacks:

  • Spine A – Loopback 0 (2.2.2.1/32) and Loopback 12 (12.12.12.12/32 – Anycast RP)
  • Spine B – Loopback 0 (2.2.2.2/32) and Loopback 12 (12.12.12.12/32 – Anycast RP)
  • Leaf A – Loopback 0 (1.1.1.1/32)
  • Leaf B – Loopback 0 (1.1.1.2/32)
  • Leaf C – Loopback 0 (1.1.1.3/32)

The lab will provide multi-tenancy across the fabric, with each customer having two segments within their respective tenancy. Full symmetric inter-VNI routing will be performed for each customer within their respective tenancy.

See below for the high-level breakdown of the customer segmentation and VNI allocation:

  1. Tenant/Customer 10
    • VNI90010 – (192.168.10.0/24 – mcast 239.0.0.10)
    • VNI90020 – (192.168.20.0/24 – mcast 239.0.0.20)
    • VNI10010 – Layer-3 VNI for symmetric inter-VNI routing
  2. Tenant/Customer 20
    • VNI92010 – (172.16.10.0/24 – mcast 239.0.20.10)
    • VNI92020 – (172.16.20.0/24 – mcast 239.0.20.20)
    • VNI20010 – Layer-3 VNI for symmetric inter-VNI routing

Technical Overview

Physical

The physical topology leverages the Clos / spine-and-leaf architecture that has become synonymous with these technologies – predictable round-trip times and the scope to leverage full ECMP whilst forwarding across the fabric are the two big driving factors behind this design.

OSPF

OSPF will be deployed as a single backbone area (area 0) to provide end-to-end reachability across the underlay between the VTEPs (leafs). There is nothing particularly spectacular about the OSPF configuration; just ensure all loopbacks and transit networks are advertised into OSPF.

On the spine nodes I made sure the OSPF router ID was manually set to the Lo0 address, rather than letting the default selection land on the anycast RP address on Lo12 – eliminating any possible duplicate RID issues between the two spines.

NXOS – Leaf A – OSPF Configuration/Validation
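
As a rough sketch (the interface numbers and transit addressing here are assumptions for illustration; the full configs are in the repo linked at the end), the NXOS leaf OSPF configuration looks something like this:

  feature ospf

  router ospf UNDERLAY
    router-id 1.1.1.1

  interface loopback0
    ip address 1.1.1.1/32
    ip router ospf UNDERLAY area 0.0.0.0

  ! One uplink per spine – repeated for the second uplink
  interface Ethernet1/1
    no switchport
    ip address 10.0.1.1/31
    ip ospf network point-to-point
    ip router ospf UNDERLAY area 0.0.0.0
    no shutdown
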
Cumulus – Leaf C – OSPF Configuration/Validation
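
On the Cumulus side the routing configuration ends up in /etc/frr/frr.conf whether it is entered via NCLU or vtysh. A minimal sketch, again with swp numbering and transit subnets assumed, would be:

  ! /etc/frr/frr.conf extract
  interface swp1
   ip ospf network point-to-point
  interface swp2
   ip ospf network point-to-point
  !
  router ospf
   ospf router-id 1.1.1.3
   network 1.1.1.3/32 area 0.0.0.0
   network 10.0.3.0/31 area 0.0.0.0
   network 10.0.3.2/31 area 0.0.0.0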

All leafs form OSPF adjacencies with both spines, as expected.

Multicast

My original choice was to configure bidirectional PIM, as all VTEPs will be both multicast senders and receivers. However, during the configuration I found that Cumulus does not support PIM BiDir, so I opted for PIM sparse mode (ASM) instead.

Both spines will participate in an anycast RP set for 12.12.12.12, so there is no need to manually configure MSDP between the spine switches; this synchronisation of sources is taken care of by the NXOS anycast RP feature.

All spines and leafs will be configured with a manual RP of the anycast address of 12.12.12.12/32.

All transit and loopback interfaces will be configured for PIM sparse mode.

NXOS – Spine A – Multicast Configuration
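
A sketch of the spine-side PIM and anycast RP configuration (the group range and interface numbering are assumptions) would look something like this:

  feature pim

  ! Static RP pointing at the anycast address, plus the anycast-RP set
  ! listing both spine loopback0 addresses
  ip pim rp-address 12.12.12.12 group-list 224.0.0.0/4
  ip pim anycast-rp 12.12.12.12 2.2.2.1
  ip pim anycast-rp 12.12.12.12 2.2.2.2

  interface loopback0
    ip pim sparse-mode

  interface loopback12
    ip address 12.12.12.12/32
    ip pim sparse-mode

  ! Repeated on every transit interface towards the leafs
  interface Ethernet1/1
    ip pim sparse-mode
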
Cumulus – Leaf C – Multicast Configuration
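
On Cumulus the equivalent PIM configuration is rendered into /etc/frr/frr.conf; a minimal sketch (swp numbering assumed) is:

  ! /etc/frr/frr.conf extract
  interface lo
   ip pim sm
  interface swp1
   ip pim sm
  interface swp2
   ip pim sm
  !
  ! Static RP pointing at the spines' anycast address
  ip pim rp 12.12.12.12 224.0.0.0/4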

MP-BGP

The BGP topology consists of a single autonomous system (ASN 65000), with both spine nodes performing the function of BGP route reflectors to eliminate the requirement for a full-mesh iBGP peering across the fabric.

The l2vpn evpn address family has been configured on all peerings, along with the distribution of extended communities, which carry the route targets used for EVPN route import and export.

NXOS-Spine-A-MP-BGP
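
The spine route-reflector configuration boils down to something like the below – only one leaf neighbour is shown, with the same stanza repeated for each leaf loopback:

  feature bgp
  nv overlay evpn

  router bgp 65000
    router-id 2.2.2.1
    neighbor 1.1.1.1
      remote-as 65000
      update-source loopback0
      address-family l2vpn evpn
        send-community extended
        route-reflector-client
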
NXOS-Leaf-A-MP-BGP
Cumulus-Leaf-C-MP-BGP
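
The Cumulus leaf peers with both spine loopbacks and activates the EVPN address family; a sketch of the FRR side:

  ! /etc/frr/frr.conf extract
  router bgp 65000
   bgp router-id 1.1.1.3
   neighbor 2.2.2.1 remote-as 65000
   neighbor 2.2.2.1 update-source lo
   neighbor 2.2.2.2 remote-as 65000
   neighbor 2.2.2.2 update-source lo
   !
   address-family l2vpn evpn
    neighbor 2.2.2.1 activate
    neighbor 2.2.2.2 activate
   exit-address-family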

VXLAN/EVPN

The local host-side encapsulation is VLAN based; these VLANs are mapped to specific VXLAN segments, which in turn are mapped to specific multicast groups (as detailed above).

An L3 VNI has been configured per tenant to allow symmetric inter-VNI routing – this removes the requirement for every VNI segment to be present on every VTEP.

The VLAN segments have been tied to their specific VNIs.

NXOS – Leaf A – VLAN/VNI
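A sketch of the VLAN-to-VNI mapping for tenant 10 (the local VLAN IDs are assumptions; only the VNIs are fixed by the design above):

  feature vn-segment-vlan-based

  ! Layer-2 VNIs for tenant 10
  vlan 10
    vn-segment 90010
  vlan 20
    vn-segment 90020

  ! Dedicated VLAN carrying the tenant 10 layer-3 VNI
  vlan 100
    vn-segment 10010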

The NVE interface performs the VXLAN encap/decap on NXOS. We have also enabled control plane learning via BGP over the NVE interface with the "host-reachability protocol bgp" command, and each L2 VNI is allocated its own dedicated multicast group for BUM traffic.

NXOS – Leaf A – NVE Configuration
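Sticking with tenant 10, the NVE configuration follows this pattern (tenant 20 VNIs are added in exactly the same way):

  feature nv overlay

  interface nve1
    no shutdown
    host-reachability protocol bgp
    source-interface loopback0
    member vni 90010
      mcast-group 239.0.0.10
    member vni 90020
      mcast-group 239.0.0.20
    ! The layer-3 VNI is tied to the tenant VRF rather than a multicast group
    member vni 10010 associate-vrf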

The SVIs have been placed into their appropriate VRFs depending on the respective tenancy. The anycast gateway feature has been configured so that the same gateway MAC address is presented by the SVI on every VTEP – this is especially useful for virtual machine mobility within the fabric.

The SVIs for the L3 VNIs have been configured with the ip forward command; this enables routing functionality without the need for an IP address.

NXOS – Leaf A – SVI’s
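A sketch of the tenant 10 VRF, anycast gateway SVI and L3 VNI SVI – the VRF name, anycast MAC and gateway addressing below are assumptions for illustration:

  feature interface-vlan
  feature fabric forwarding

  fabric forwarding anycast-gateway-mac 0000.2222.3333

  vrf context TENANT-10
    vni 10010
    rd auto
    address-family ipv4 unicast
      route-target both auto
      route-target both auto evpn

  ! Anycast gateway SVI for VNI 90010
  interface Vlan10
    no shutdown
    vrf member TENANT-10
    ip address 192.168.10.1/24
    fabric forwarding mode anycast-gateway

  ! SVI for the layer-3 VNI – routing enabled without an IP address
  interface Vlan100
    no shutdown
    vrf member TENANT-10
    ip forward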

The Cumulus configuration isn't a million miles away from the NXOS configuration in terms of VXLAN syntax – in all honesty, I actually prefer it.

Cumulus – Leaf C – VXLAN
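The Cumulus VXLAN plumbing lives in /etc/network/interfaces. A trimmed-down sketch for tenant 10 – the host port, bridge VLAN IDs, VRF name and anycast MAC here are assumptions – would be:

  # /etc/network/interfaces extract
  auto vni90010
  iface vni90010
      vxlan-id 90010
      vxlan-local-tunnelip 1.1.1.3
      vxlan-mcastgrp 239.0.0.10
      bridge-access 10

  # Layer-3 VNI for tenant 10 – no multicast group required
  auto vni10010
  iface vni10010
      vxlan-id 10010
      vxlan-local-tunnelip 1.1.1.3
      bridge-access 100

  auto bridge
  iface bridge
      bridge-vlan-aware yes
      bridge-ports swp3 vni90010 vni10010
      bridge-vids 10 100

  auto vrf-tenant10
  iface vrf-tenant10
      vrf-table auto

  # Gateway SVI for VNI 90010, with the shared gateway presented via VRR
  auto vlan10
  iface vlan10
      vlan-id 10
      vlan-raw-device bridge
      vrf vrf-tenant10
      address 192.168.10.2/24
      address-virtual 00:00:22:22:33:33 192.168.10.1/24

  # SVI carrying the layer-3 VNI – no address needed
  auto vlan100
  iface vlan100
      vlan-id 100
      vlan-raw-device bridge
      vrf vrf-tenant10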

Cumulus and NXOS generate their RDs in the same manner, so there is no requirement to deviate from the automatic route import/export scheme. As with the VXLAN configuration, this only needs to be configured on the VTEPs; the spines are essentially acting as multicast/MP-BGP/OSPF transit devices.

NXOS – Leaf A – EVPN
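
The per-VNI EVPN instances on the NXOS leafs simply use the automatically derived RD and route-target values:

  nv overlay evpn

  evpn
    vni 90010 l2
      rd auto
      route-target import auto
      route-target export auto
    vni 90020 l2
      rd auto
      route-target import auto
      route-target export auto
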
Cumulus – Leaf C – EVPN
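
On Cumulus the equivalent is handled in FRR: advertise-all-vni takes care of the L2 VNIs with auto RD/RT, and the tenant VRF is tied to its L3 VNI for the symmetric routes. A sketch, assuming the VRF name matches the interfaces file above:

  ! /etc/frr/frr.conf extract
  vrf vrf-tenant10
   vni 10010
  exit-vrf
  !
  router bgp 65000
   address-family l2vpn evpn
    advertise-all-vni
   exit-address-family
  !
  router bgp 65000 vrf vrf-tenant10
   address-family l2vpn evpn
    advertise ipv4 unicast
   exit-address-family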

Verification

We can see the EVPN Type-2 MAC/IP routes by issuing show bgp l2vpn evpn on Leaf-A. The MAC-only routes used for intra-VNI bridging are advertised with no IP address, while the routes used for inter-VNI routing carry both the MAC and the host IP address.

NXOS – Leaf A – EVPN Routes

The Cumulus show command, net show bgp evpn route, is almost identical to the Cisco NXOS equivalent, which makes troubleshooting between platforms nice and easy.

Configs

https://github.com/thecraigus/nxoscumulusevpn
