Contents

OpenStack with Cumulus Linux
  Objective
    Enabling Choice of Hardware in the Data Center
    Combined Solution Using OpenStack and Cumulus Linux
    Driving Towards Operational Efficiencies
  Intended Audience for Network Design and Build
OpenStack Network Architecture in a PoC or Small Test/Dev Environment
  Network Architecture and Design Considerations
OpenStack Network Architecture in a Cloud Data Center
  Network Architecture
  Scaling Out
  Out-of-Band Management
Building an OpenStack Cloud with Cumulus Linux
  Minimum Hardware Requirements
  Network Assumptions and Numbering
  Build Steps -- Automated
    Behind the Scenes of the Automated Build Out
  Build Steps -- Manual
    1. Set Up Physical Network
    2. Basic Physical Network Configuration
    3. Verify Connectivity
    4. Set Up Physical Servers
    5. Configure Spine Switches
    6. Configure Each Pair of Leaf Switches
    7. Configure the OpenStack Controller
    8. Configure Each Compute Node
    9. Create Tenant Networks
    10. Start VMs Using the OpenStack Horizon Web UI
Conclusion
  Summary
  References
Appendix A: Example /etc/network/interfaces Configurations
  leaf01
  leaf02
  leaf03
  leaf04
  spine01
  spine02
Appendix B: Network Setup Checklist
Version 1.0.3
June 26, 2015
© 2015 Cumulus Networks. CUMULUS, the Cumulus Logo, CUMULUS NETWORKS, and the Rocket Turtle Logo (the Marks) are trademarks and service marks of Cumulus Networks, Inc. in the U.S. and other countries. You are not permitted to use the Marks without the prior written consent of Cumulus Networks. The registered trademark Linux is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis. All other marks are used under fair use or license from their respective owners.
The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
www.cumulusnetworks.com
Cumulus Linux complements OpenStack by delivering the same automated, self-service operational model to the network.
And since the underlying operating system is the same on the OpenStack nodes and the switches, the same automation,
monitoring and management tools can be used, greatly simplifying provisioning and operations.
Cumulus Linux offers powerful automation capabilities by way of technologies such as ONIE, zero touch provisioning, PXE
and Puppet. The combination of bare metal hardware with a consistent Linux platform enables you to leverage automation
to deploy servers and networks together. Thus, you can use a unified set of tools to automate the installation and
configuration of both switches and servers. You can use a common automation framework that uses a simple config file to
install and configure an entire pod of switches and call OpenStack to install and configure the servers, all without any
human intervention.
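As an illustration only, such a framework can be as small as a shell loop that reads a pod definition and dispatches each element to the right provisioning path. The config format (one hostname:role pair per line) and the function name here are hypothetical, not the actual config.txt format used by the automated build later in this guide:

```shell
# Hypothetical sketch of a config-file-driven pod build.
# Assumed format: one "hostname:role" pair per line.
provision_pod() {
  while IFS=: read -r host role; do
    case "$role" in
      switch) echo "$host: provision via ONIE/ZTP" ;;
      server) echo "$host: provision via PXE/Puppet" ;;
      *)      echo "$host: unknown role '$role'" >&2 ;;
    esac
  done
}

# Example pod definition (hypothetical contents).
# Output:
#   spine01: provision via ONIE/ZTP
#   leaf01: provision via ONIE/ZTP
#   controller01: provision via PXE/Puppet
provision_pod <<'EOF'
spine01:switch
leaf01:switch
controller01:server
EOF
```

In a real deployment the two branches would invoke the ONIE/ZTP and PXE/Puppet machinery described in the build steps below rather than printing.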
Scaling Out
Scaling out the architecture involves adding more hosts to the access switch pairs, and then adding more access switches
in pairs as needed, as shown in Figure 5.
Out-of-Band Management
An important supplement to the high capacity production data network is the management network used to administer
infrastructure elements, such as network switches, physical servers, and storage systems. The architecture of these
networks varies considerably based on their intended use, the elements themselves, and access isolation requirements.
This solution guide assumes that a single Layer 2 domain is used to administer the network switches and management
interfaces on the controller and hypervisor hosts. These operations include imaging the elements, configuring them, and
monitoring the running system. This network is expected to host both DHCP and HTTP servers, such as isc-dhcp and
apache2, as well as provide DNS reverse and forward resolution. In general, these networks provide some means to
connect to the corporate network, typically a connection through a router or jump host.
Figure 7 below shows the logical and, where possible, the physical connections of each element as well as the services
required to realize this deployment.
Note that this design may be scaled up to 1535 hypervisor nodes. If required, additional OpenStack clusters may be
configured and connected to the core/external routers. OpenStack scalability limits will be hit before full scale is achieved.
Note that the peer bonds for MLAG support are always the last two interfaces on each switch. For spines, they are swp31
and swp32. For leafs, they are swp51 and swp52. The next-to-last two interfaces on each leaf are for the uplinks to
spine01 and spine02.
Also note that the same subnet is used for every MLAG peer pair. This is safe because the addresses are only used on the
link between the pairs. Routing protocols will not distribute these routes because they are part of the link-local
169.254.0.0/16 subnet.
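As a concrete illustration of these conventions, the peer link definition on a spine switch looks like the following (a sketch consistent with the full configurations in Appendix A and the build steps below):

```
#Bond for the MLAG peer link (the last two ports: swp31 and swp32 on a spine).
auto peerlink
iface peerlink
bond-slaves swp31 swp32
bond-mode 802.3ad

#VLAN for the MLAG control traffic; every peer pair reuses the same
#link-local /30, which is safe because these routes are never distributed.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.1/30
clagd-peer-ip 169.254.255.2
```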
The details for the switches, hosts, and logical interfaces are as follows:

leaf01

Connected To             Logical Interface   Physical Interfaces
leaf02                   peerlink            swp51, swp52
leaf02                   peerlink.4094       N/A
spine01, spine02         uplink              swp49, swp50
external router          N/A                 swp48
multiple hosts (access ports that connect to hosts):
controller01             host01              swp1
compute01                host02              swp2
out-of-band management   N/A                 eth0
leaf02

Connected To             Logical Interface   Physical Interfaces
leaf01                   peerlink            swp51, swp52
leaf01                   peerlink.4094       N/A
spine01, spine02         uplink              swp49, swp50
external router          N/A                 swp48
multiple hosts (access ports that connect to hosts):
controller01             host01              swp1
compute01                host02              swp2
out-of-band management   N/A                 eth0
leaf0N

Repeat the above configurations for each additional pair of leafs, minus the external router interfaces.
spine01

Connected To             Logical Interface   Physical Interfaces
spine02                  peerlink            swp31, swp32
spine02                  peerlink.4094       N/A
multiple leafs (leaf ports):
leaf01, leaf02           downlink1           swp1, swp2
leaf03, leaf04           downlink2           swp3, swp4
out-of-band management   N/A                 eth0

spine02

Connected To             Logical Interface   Physical Interfaces
spine01                  peerlink            swp31, swp32
spine01                  peerlink.4094       N/A
multiple leafs (leaf ports):
leaf01, leaf02           downlink1           swp1, swp2
leaf03, leaf04           downlink2           swp3, swp4
out-of-band management   N/A                 eth0
Both the manual and automated installation processes detailed below have some fixed parameters for things like VLAN
ranges and IP addresses. These can be changed in the config.txt file that you will copy onto a USB stick you will use for
automation purposes. If you're following the manual process and want to use different parameters, be careful to modify the
numbers in the configuration to match.
The parameters you are most likely to need to change are the external subnet and default route. Get this information from
whoever configured your access to the outside world (either the Internet or the rest of the data center network).
Parameter                    Default Setting
Tenant VLAN range            200-2000
Tenant subnets               10.10.TENANT#.0/24
External VLAN ID             101
External subnet              192.168.100.0/24
External default gateway     192.168.100.1
External IP of controller    192.168.100.2
External IP of compute01     192.168.100.3

Parameter                    Default Setting
API VLAN ID                  102
API subnet                   10.254.192.0/20
API IP of controller         10.254.192.1
API IP of compute01          10.254.192.2
Management subnet            192.168.0.0/24
MLAG peer VLAN               4094
MLAG peer subnet             169.254.255.0/30
MLAG system MAC base         44:38:39:ff:00:01
Step   Tasks
1.     Set up physical network.
2.
3.
4.
5.
6.     Power on racks.
7.
8.
Once the switches are identified, MLAG is configured, the bridge is created, and the VLAN range is provisioned on every
switch.
Phase 2 (Install and configure the servers using PXE and Puppet)
Now that the network is configured, spine01 offers a PXE and Puppet environment to install an OS on the servers. This
environment uses the same DHCP server, a TFTP server for fetching a network boot loader, and an unattended preseed file to
install the OS. When the OS installation process completes, a Puppet agent is configured to receive additional post-OS-install
steps from the spine01 Puppet master. Puppet handles the OpenStack installation and configuration on the servers.
Step   Tasks
1.     Set up physical network.
2.     Basic physical network configuration: name switches; bring up out-of-band management ports; bring up front panel ports.
3.     Verify connectivity.
4.     Set up physical servers.
5.     Configure spine switches.
6.     Configure each pair of leaf switches.
7.     Configure the OpenStack controller.
8.     Configure each compute node.
9.     Create tenant networks.
10.    Start VMs using the OpenStack Horizon Web UI.
Once all configurations have been defined in the /etc/network/interfaces file, apply the configurations to ensure they
are loaded into the kernel. There are several methods for applying configuration changes, depending on when and what
changes you want to apply:

- sudo ifreload -a parses interfaces labeled with auto that have been added to or modified in the configuration file, and applies the changes accordingly. Note: this command is disruptive to traffic only on interfaces that have been modified.
- Alternatively, you can restart all interfaces labeled with auto as defined in the configuration file, regardless of what has or has not been recently modified. Note: this is disruptive to all traffic on the switch, including the eth0 management network.
For example, on leaf01, to apply the new configuration to all changed interfaces labeled with auto:
cumulus@leaf01:~$ sudo ifreload -a
or individually:
cumulus@leaf01:~$ sudo ifup swp1
cumulus@leaf01:~$ sudo ifup swp2
.
.
.
cumulus@leaf01:~$ sudo ifup swp52
The above configuration in the /etc/network/interfaces file is persistent, which means the configuration applies even
after you reboot the switch. Another option to test network connectivity is to run a shell loop to bring up each front-panel
interface temporarily (until the next reboot), so that LLDP traffic can flow. This lets you verify the wiring is done correctly in
the next step:
cumulus@spine01$ for i in `grep '^swp' /var/lib/cumulus/porttab | cut -f1`; do
sudo ip link set dev $i up; done
Repeat the above steps on each of spine02, leaf01, leaf02, leaf03, and leaf04, changing the hostname
appropriately in each command or file.
3. Verify Connectivity
Back on spine01, use LLDP to verify that the cabling is correct, according to the cabling diagram:
cumulus@spine01$ sudo lldpctl | less
<snip>
-------------------------------------------------------------------------------
Interface:    swp31, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    ChassisID:    mac 44:38:39:00:49:0a
    SysName:      spine02
    SysDescr:     Cumulus Linux
    Capability:   Bridge, off
    Capability:   Router, on
  Port:
    PortID:       ifname swp31
    PortDescr:    swp31
-------------------------------------------------------------------------------
Interface:    swp32, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    ChassisID:    mac 44:38:39:00:49:0a
    SysName:      spine02
    SysDescr:     Cumulus Linux
    Capability:   Bridge, off
    Capability:   Router, on
  Port:
    PortID:       ifname swp32
    PortDescr:    swp32
-------------------------------------------------------------------------------
The output above shows only the last 2 interfaces, which you can see are correctly connected to the other spine switch,
based on the SysName field being spine02. Verify that the remote-side interfaces are correct per the wiring diagram,
using the PortID field.
Note: Type q to quit less when you are done verifying.
Repeat the lldpctl command on spine02 to verify the rest of the connectivity.
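With many links to check, the same verification can be scripted instead of eyeballed. The helper below is a sketch (not part of this guide's tooling) that reduces lldpctl output to one local-port/neighbor/remote-port line per interface; on a switch you would pipe sudo lldpctl into it instead of the sample text shown here:

```shell
# Reduce verbose lldpctl output to "local-port neighbor remote-port" lines.
lldp_summary() {
  awk '
    /^Interface:/ { split($2, a, ","); port = a[1] }   # local interface name
    /SysName:/    { name = $2 }                        # neighbor hostname
    /PortID:/     { print port, name, $3 }             # neighbor interface name
  '
}

# Sample input taken from the spine01 output above.
# Prints:
#   swp31 spine02 swp31
#   swp32 spine02 swp32
lldp_summary <<'EOF'
Interface: swp31, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    SysName: spine02
  Port:
    PortID: ifname swp31
Interface: swp32, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    SysName: spine02
  Port:
    PortID: ifname swp32
EOF
```

Each output line can then be compared directly against a row of the cabling diagram.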
On spine02, add a VLAN for the MLAG peering communications. Note the change of the last octet in the address and
clagd-peer-ip lines:
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.2/30
clagd-enable yes
clagd-peer-ip 169.254.255.1
clagd-backup-ip 192.168.0.94/24
clagd-sys-mac 44:38:39:ff:00:00
On both spine switches, bring up the peering interfaces. The --with-depends option tells ifup to bring up the peer first,
since peerlink.4094 depends on it:
cumulus@spine0N:~$ sudo ifup --with-depends peerlink.4094
On spine01, verify that you can ping spine02:
cumulus@spine01$ ping -c 3 169.254.255.2
PING 169.254.255.2 (169.254.255.2) 56(84) bytes of data.
64 bytes from 169.254.255.2: icmp_req=1 ttl=64 time=0.716 ms
64 bytes from 169.254.255.2: icmp_req=2 ttl=64 time=0.681 ms
64 bytes from 169.254.255.2: icmp_req=3 ttl=64 time=0.588 ms
--- 169.254.255.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.588/0.661/0.716/0.061 ms
Now on both spine switches, verify that the peers are connected:
cumulus@spine01:~$ clagctl
The peer is alive
Peer Priority, ID, and Role:
Our Priority, ID, and Role:
Peer Interface and IP:
Backup IP:
System MAC:
The MAC addresses in the output will be different depending on the MACs issued to your hardware.
Now that the spines are peered, create the bonds for the connections to the leaf switches. On both spine switches, edit
/etc/network/interfaces and add the following at the end:
#Bonds down to the pairs of leafs.
auto downlink1
allow-leafs downlink1
iface downlink1
bond-slaves swp1 swp2
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
clag-id 1
auto downlink2
allow-leafs downlink2
iface downlink2
bond-slaves swp3 swp4
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
clag-id 2
You can add more stanzas for more pairs of leaf switches as needed, modifying the interface name, bond-slaves, and
clag-id. For example, to add a third stanza, you'd use downlink3; the corresponding swp interfaces would be swp5 and swp6, and the clag-id would be 3.
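Following that pattern, the stanza for a third pair of leafs would look like this (mirroring the downlink1 and downlink2 stanzas above):

```
auto downlink3
allow-leafs downlink3
iface downlink3
bond-slaves swp5 swp6
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
clag-id 3
```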
Bridge together the MLAG peer bond and all the leaf bonds. On both switches, edit /etc/network/interfaces and add
the following at the end:
#Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports peerlink downlink1 downlink2
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 12288
If you added more downlink# interfaces in the previous step, add them to the bridge-ports line, at the end of the line.
If you're familiar with the traditional Linux bridge mode, you may be surprised that we called the bridge "bridge" instead of
br0. The reason is that we're using the new VLAN-aware Linux bridge mode in this example, which doesn't require multiple
bridge interfaces for common configurations. It trades off some of the flexibility of the traditional mode in return for
supporting very large numbers of VLANs. See the Cumulus Linux documentation for more information on the two bridging
modes supported in Cumulus Linux.
Finally, on both spine01 and spine02, bring up all the interfaces, bonds and bridges. The --with-depends option tells
ifup to bring up any down interfaces that are needed by the bridge:
cumulus@spine0N:~$ sudo ifup --with-depends bridge
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
On odd-numbered leaf switches, add a VLAN for the MLAG peering communications. Note that the last octet of the
clagd-sys-mac must be the same for each switch in a pair, but incremented for subsequent pairs. For example, leaf03 and
leaf04 should have 03 as the last octet:
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.1/30
clagd-enable yes
clagd-peer-ip 169.254.255.2
clagd-backup-ip 192.168.0.91/24
clagd-sys-mac 44:38:39:ff:00:02
On even-numbered leaf switches, add a VLAN for the MLAG peering communications. Note the change of the last octet in
the address and clagd-peer-ip lines. Also note that for subsequent pairs of switches, the last octet of
clagd-sys-mac must match as described for the odd-numbered switches:
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.2/30
clagd-enable yes
clagd-peer-ip 169.254.255.1
clagd-backup-ip 192.168.0.90/24
clagd-sys-mac 44:38:39:ff:00:02
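For the second pair of leaf switches, only the clagd-sys-mac changes (along with the clagd-backup-ip, which points at the peer's management address and is omitted from this sketch); the link-local peer addresses are reused, as noted earlier. On leaf03, for example:

```
#VLAN for the MLAG control traffic. leaf04 mirrors this stanza with the
#address and clagd-peer-ip octets swapped.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.1/30
clagd-enable yes
clagd-peer-ip 169.254.255.2
clagd-sys-mac 44:38:39:ff:00:03
```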
On each leaf switch, bring up the peering interfaces:
cumulus@leaf0N:~$ sudo ifup --with-depends peerlink.4094
On each odd numbered leaf switch, verify that you can ping its corresponding even-numbered leaf switch:
cumulus@leaf0N:~$ ping -c 3 169.254.255.2
PING 169.254.255.2 (169.254.255.2) 56(84) bytes of data.
64 bytes from 169.254.255.2: icmp_req=1 ttl=64 time=0.716 ms
64 bytes from 169.254.255.2: icmp_req=2 ttl=64 time=0.681 ms
64 bytes from 169.254.255.2: icmp_req=3 ttl=64 time=0.588 ms
--- 169.254.255.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.588/0.661/0.716/0.061 ms
Now, on each leaf switch, verify that the peers are connected:
cumulus@leaf0N:~$ clagctl
The peer is alive
Peer Priority, ID, and Role: 32768 6c:64:1a:00:39:5a primary
Our Priority, ID, and Role: 32768 6c:64:1a:00:39:9b secondary
Peer Interface and IP: peerlink.4094 169.254.255.2
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 1
Repeat the above stanza for each front panel port that has servers attached. You'll need to adjust host01, swp1, and the
value for clag-id everywhere they appear. For example, for swp2, change each host01 to host02 and swp1 to
swp2, and change clag-id from 1 to 2.
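Applying those substitutions, the stanza for the second host port becomes the following (this matches the host02 stanza in Appendix A):

```
auto host02
allow-hosts host02
iface host02
bond-slaves swp2
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 2
```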
Bridge together the MLAG peer bond, the uplink bond, and all the leaf bonds. On each leaf switch, edit
/etc/network/interfaces and add the following at the end:
#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports uplink swp48 peerlink host01 host02 host03
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 16384
If you added more host# interfaces in the previous step, add them to the bridge-ports line, at the end of the line. Note
that swp48 should only be present on leaf01 and leaf02, not on subsequent leafs.
Finally, on each leaf switch, bring up all the interfaces, bonds and bridges:
cumulus@leaf0N:~$ sudo ifup --with-depends bridge
Install the message broker (RabbitMQ) using the directions in the OpenStack message queue install guide. Note that you'll
have to use sudo when installing the packages. Make sure to remember the rabbitmq password you chose, as you will
need it later.
Install the Keystone authentication service using the directions in the OpenStack keystone install guide. Note that you'll
have to use sudo with the commands in that guide. Also note that creating the database tables can take a couple of
minutes. Make sure to remember the admin_token you generated using openssl for later.
Create some users, roles, tenants, services, and endpoints using the directions in the OpenStack users install guide and
services install guide. Don't use sudo for these commands. For a simple test deployment, we recommend admin/adminpw
and demo/demopw for the usernames and passwords for the admin user and demo user.
Create an OpenStack RC file to set the various environment variables needed to run OpenStack commands. This simplifies
running commands as various OpenStack users; just source the rc file any time you want to change users. The directions
are in the OpenStack openrc install guide. Don't use sudo for these commands. If you used admin/adminpw for your admin
user, replace ADMIN_PASS with adminpw.
Verify that Keystone is operating properly using the directions in the OpenStack Keystone verification install guide. Don't
use sudo for these commands. You don't need to recreate the admin-openrc.sh file you created previously in the
openrc install guide.
Install the Glance image storage service using the directions in the OpenStack glance install guide. Note that command
prompts in that guide that end with a # symbol must be run with sudo, while command prompts that end with the $ symbol
do not.
Import a demo Linux image into the Glance inventory, so that you can have an OS to start in VMs to demonstrate
OpenStack. This first command assumes your server has direct access to the Internet.
cumulus@controller01$ wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
If you need an HTTP proxy to access the Internet from your environment, you can modify the previous command:
cumulus@controller01$ http_proxy="http://MY.HTTP.PROXY/" wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Install the downloaded Linux image into Glance:
cumulus@controller01$ glance image-create --name="Cirros" --disk-format=qcow2 --container-format=bare --is-public=true --file cirros-0.3.2-x86_64-disk.img
Install the Nova compute controller service using the directions in the OpenStack Nova install guide and the Nova network
install guide. Note that command prompts in that guide that end with a # symbol must be run with sudo, while command
prompts that end with a $ symbol do not. You'll need to replace the example 10.0.0.11 IP addresses in the my_ip,
vncserver_listen, and vncserver_proxyclient_address fields with 10.254.192.1.
Install the Horizon Web dashboard packages, then remove the openstack-dashboard-ubuntu-theme package, as it
may cause rendering issues:
cumulus@controller01$ sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
cumulus@controller01$ sudo apt-get remove --purge openstack-dashboard-ubuntu-theme
It is not a good idea to expose the Horizon Web interface to untrusted networks without hardening the configuration.
auto bond0.102
iface bond0.102 inet static
address 10.254.192.2
netmask 255.255.240.0
#External network access VLAN.
auto bond0.101
iface bond0.101 inet static
address 192.168.100.3
netmask 255.255.255.0
gateway 192.168.100.1
You'll need to increment the API VLAN's IP address (on bond0.102) for each compute node. You'll
also need to increment the external VLAN's IP address (on bond0.101). The examples given above
are for compute01. For compute02, you would use 10.254.192.3 and 192.168.100.4.
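Concretely, the corresponding stanzas on compute02 would be as follows (a sketch applying those increments; the bond0.101 netmask shown assumes the default 192.168.100.0/24 external subnet):

```
auto bond0.102
iface bond0.102 inet static
address 10.254.192.3
netmask 255.255.240.0

#External network access VLAN.
auto bond0.101
iface bond0.101 inet static
address 192.168.100.4
netmask 255.255.255.0
gateway 192.168.100.1
```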
Note: Ubuntu uses ifupdown, while Cumulus Linux uses ifupdown2. The configuration format is similar, but many
advanced configurations that work on the switch will not work in Ubuntu.
Now bring up the interfaces:
cumulus@compute0N:~$ sudo ifup -a
Verify that the VLAN interface is UP and LOWER_UP:
cumulus@compute0N:~$ sudo ip link show bond0.102
9: bond0.102@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP mode DEFAULT group default
link/ether 90:e2:ba:7c:28:28 brd ff:ff:ff:ff:ff:ff
Add a hostname alias for the controller. Edit /etc/hosts and add the following at the end:
10.254.192.1 controller
Verify that this node can talk to the controller over the API VLAN:
cumulus@compute0N:~$ ping -c 3 controller
PING controller (10.254.192.1) 56(84) bytes of data.
64 bytes from controller (10.254.192.1): icmp_seq=1 ttl=64 time=0.229 ms
64 bytes from controller (10.254.192.1): icmp_seq=2 ttl=64 time=0.243 ms
64 bytes from controller (10.254.192.1): icmp_seq=3 ttl=64 time=0.220 ms
--- controller ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.220/0.230/0.243/0.019 ms
Install the Nova compute node service using the directions in the OpenStack Nova compute install guide. Note that
command prompts in that guide that end with a # symbol must be run with sudo, while command prompts that end with a
$ symbol do not. If the step in the directions that checks for hardware acceleration for virtual machines reports that your
server does not have it, it may be disabled in the BIOS. Note that you'll need to replace the example 10.0.0.31 IP address
in the my_ip and vncserver_proxyclient_address fields with this node's API IP address. For compute01, that would
be 10.254.192.2.
Conclusion
Summary
Fundamentally abstracting hardware from software and offering customers choice through a hardware-agnostic
approach is core to the philosophy of Cumulus Networks, and it fits very well within the software-centric,
commodity-hardware-friendly design of OpenStack.
Just as OpenStack users have choice in server compute and storage, they can tap the power of Open Networking and
select from a broad range of switch providers running Cumulus Linux.
Choice and CapEx savings are only the beginning. OpEx savings come from agility through automation. Just as OpenStack
orchestrates the cloud by enabling the automated provisioning of hosts, virtual networks, and VMs through the use of APIs
and interfaces, Cumulus Linux enables network and data center architects to leverage automated provisioning tools and
templates to define and provision physical networks.
References

Article/Document               URL
OpenStack Documentation        http://docs.openstack.org/icehouse/install-guide/install/apt/content/index.html
Cumulus Linux Documentation    http://docs.cumulusnetworks.com
                               https://support.cumulusnetworks.com/hc/en-us/articles/202868023
                               https://support.cumulusnetworks.com/hc/en-us/sections/200398866
                               https://support.cumulusnetworks.com/hc/en-us/articles/203570088
                               https://support.cumulusnetworks.com/hc/en-us/articles/201787586
                               https://support.cumulusnetworks.com/hc/en-us/articles/203748326
Software Pricing               http://cumulusnetworks.com/product/pricing/
Hardware Compatibility List    http://cumulusnetworks.com/support/linux-hardware-compatibility-list/
                               http://cumulusnetworks.com/downloads/
                               http://repo.cumulusnetworks.com
                               https://github.com/CumulusNetworks/
www.cumulusnetworks.com
35
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.1/30
clagd-peer-ip 169.254.255.2
clagd-backup-ip 192.168.0.91/24
clagd-sys-mac 44:38:39:ff:00:02
#Bond up to the spines.
auto uplink
iface uplink
bond-slaves swp49 swp50
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
clag-id 1000
#Bonds down to the host. Only one swp, because the other swp is on the peer switch.
auto host01
allow-hosts host01
iface host01
bond-slaves swp1
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 1
auto host02
allow-hosts host02
iface host02
bond-slaves swp2
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 2
auto host03
allow-hosts host03
iface host03
bond-slaves swp3
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 3
#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports uplink swp48 peerlink host01 host02 host03
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 16384
leaf02
cumulus@leaf02$ cat /etc/network/interfaces
auto eth0
iface eth0
address 192.168.0.91/24
gateway 192.168.0.254
# physical interface configuration
auto swp1
iface swp1
mtu 9000
auto swp2
iface swp2
mtu 9000
auto swp3
iface swp3
mtu 9000
.
.
auto swp48
iface swp48
bridge-access 101
mtu 9000
.
.
auto swp52
iface swp52
mtu 9000
# peerlink bond for clag
#Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
bond-slaves swp51 swp52
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.2/30
clagd-peer-ip 169.254.255.1
clagd-backup-ip 192.168.0.90/24
clagd-sys-mac 44:38:39:ff:00:02
#Bond up to the spines.
auto uplink
iface uplink
bond-slaves swp49 swp50
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
clag-id 1000
#Bonds down to the host. Only one swp, because the other swp is on the peer switch.
auto host01
allow-hosts host01
iface host01
bond-slaves swp1
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 1
auto host02
allow-hosts host02
iface host02
bond-slaves swp2
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 2
auto host03
allow-hosts host03
iface host03
bond-slaves swp3
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 3
#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports uplink swp48 peerlink host01 host02 host03
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 16384
leaf03
cumulus@leaf03$ cat /etc/network/interfaces
auto eth0
iface eth0
address 192.168.0.92/24
gateway 192.168.0.254
# physical interface configuration
auto swp1
iface swp1
mtu 9000
auto swp2
iface swp2
mtu 9000
auto swp3
iface swp3
mtu 9000
.
.
auto swp52
iface swp52
mtu 9000
# peerlink bond for clag
#Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
bond-slaves swp51 swp52
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.1/30
clagd-peer-ip 169.254.255.2
clagd-backup-ip 192.168.0.93/24
clagd-sys-mac 44:38:39:ff:00:03
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 3
#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports uplink swp48 peerlink host01 host02 host03
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 16384
leaf04
cumulus@leaf04$ cat /etc/network/interfaces
auto eth0
iface eth0
address 192.168.0.93/24
gateway 192.168.0.254
# physical interface configuration
auto swp1
iface swp1
mtu 9000
auto swp2
iface swp2
mtu 9000
auto swp3
iface swp3
mtu 9000
.
.
auto swp52
iface swp52
mtu 9000
# peerlink bond for clag
#Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
bond-slaves swp51 swp52
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
mtu 9000
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.2/30
clagd-peer-ip 169.254.255.1
clagd-backup-ip 192.168.0.92/24
clagd-sys-mac 44:38:39:ff:00:03
bond-min-links 1
bond-xmit-hash-policy layer3+4
bond-lacp-bypass-allow 1
mstpctl-portadminedge yes
mstpctl-bpduguard yes
clag-id 3
#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports uplink swp48 peerlink host01 host02 host03
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 16384
spine01
cumulus@spine01$ sudo vi /etc/network/interfaces
auto eth0
iface eth0
address 192.168.0.94/24
gateway 192.168.0.254
# physical interface configuration
auto swp1
iface swp1
mtu 9000
auto swp2
iface swp2
mtu 9000
auto swp3
iface swp3
mtu 9000
.
.
.
auto swp32
iface swp32
mtu 9000
# peerlink bond for clag
auto peerlink
iface peerlink
bond-slaves swp31 swp32
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.1/30
clagd-enable yes
clagd-peer-ip 169.254.255.2
clagd-backup-ip 192.168.0.95/24
clagd-sys-mac 44:38:39:ff:00:00
# leaf01-leaf02 downlink
auto downlink1
allow-leafs downlink1
iface downlink1
bond-slaves swp1 swp2
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
clag-id 1
# leaf03-leaf04 downlink
auto downlink2
allow-leafs downlink2
iface downlink2
bond-slaves swp3 swp4
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
clag-id 2
#Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports peerlink downlink1 downlink2
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 12288
spine02
cumulus@spine02$ sudo vi /etc/network/interfaces
auto eth0
iface eth0
address 192.168.0.95/24
gateway 192.168.0.254
# physical interface configuration
auto swp1
iface swp1
mtu 9000
auto swp2
iface swp2
mtu 9000
auto swp3
iface swp3
mtu 9000
.
.
.
auto swp32
iface swp32
mtu 9000
# peerlink bond for clag
auto peerlink
iface peerlink
bond-slaves swp31 swp32
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
address 169.254.255.2/30
clagd-enable yes
clagd-peer-ip 169.254.255.1
clagd-backup-ip 192.168.0.94/24
clagd-sys-mac 44:38:39:ff:00:00
# leaf01-leaf02 downlink
auto downlink1
allow-leafs downlink1
iface downlink1
bond-slaves swp1 swp2
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
clag-id 1
# leaf03-leaf04 downlink
auto downlink2
allow-leafs downlink2
iface downlink2
bond-slaves swp3 swp4
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4
clag-id 2
#Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports peerlink downlink1 downlink2
bridge-stp on
bridge-vids 100-2000
mstpctl-treeprio 12288
Considerations
Plan cabling
Apply standards and conventions to promote similar configurations. For example, place
stanzas in the same order in configuration files across switches, and specify child
interfaces before their parent interfaces (so a bond member appears earlier in the file than
the bond itself). This standardization eases maintenance, troubleshooting, and automation,
including the use of templates.
Use naming conventions for consistency, readability, and manageability; consistent names
also help facilitate automation. For example, call your leaf switches leaf01 and leaf02 rather
than leaf1 and leaf2.
Use all lowercase for names.
Avoid characters that are not DNS-compatible.
Define child interfaces before using them in parent interfaces. For example, create the
member interfaces of a bond before defining the bond interface itself.
Instantiate swp interfaces so they can be brought up and down with the ifup and ifdown commands.
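Applied to the bond configurations in this guide, the child-before-parent ordering looks like the following fragment (the interface names here are illustrative):

```
auto swp1
iface swp1
  mtu 9000

auto swp2
iface swp2
  mtu 9000

# Defined after its member interfaces:
auto bond0
iface bond0
  bond-slaves swp1 swp2
  bond-mode 802.3ad
```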
3. Verify connectivity.
Use LLDP (Link Layer Discovery Protocol)
LLDP is useful to debug or verify cabling between directly attached switches. By default,
Cumulus Linux listens for and advertises LLDP packets on all configured Layer 3 routed or
Layer 2 access ports. LLDP is supported on tagged interfaces or those configured as an
802.1q subinterface. The lldpctl command displays a dump of the connected interfaces.
Assign an IP address for the clagd peerlink
Consider using a link-local address (RFC 3927, 169.254/16) to avoid advertising it, or an
RFC 1918 private address. Use a very high-numbered VLAN, if possible, to separate the peer
communication traffic from the VLANs that handle typical data traffic. Valid VLAN tags end at 4094.
Enable MLAG
Set up MLAG in switch pairs. There's no particular order necessary for connecting pairs.
Assign clagd-sys-mac
Assign a unique clagd-sys-mac value per pair. This value is used in the spanning tree
calculation, so assigning unique values prevents overlapping MAC addresses.
Assign priority