As I mentioned in my last few posts, I have historically reached for Ansible only when I had to. Part of the reason was that everything Ansible's built-in modules did well seemed aimed at Linux systems, so instead of writing my own Ansible module I would just write my own Python script.
I recently tried out Cumulus VX, a free VM download for learning purposes. After doing some basic things with Ansible on it, I think it is the best OS I have seen for use with Ansible, because it is a Debian-based Linux machine and all of the standard Ansible modules just work.
I will walk through a configuration that should be simple and familiar if you have read my other Ansible posts.
Topology
The lab is two Cumulus VX switches, sw1 and sw2, connected back to back on swp1 over 192.168.12.0/24. Each switch has a loopback (1.1.1.1/32 and 2.2.2.2/32) that it advertises to its neighbor over eBGP, plus an eth0 management interface on an out-of-band 192.0.2.0/24 network.
Ansible config and data files
Here is our current directory on the control node.
.
├── ansible.cfg
├── hosts-net
└── host_vars
    ├── sw1.yml
    └── sw2.yml
ansible.cfg
[defaults]
inventory = hosts-net
remote_user = cumulus
private_key_file = ~/.ssh/cumulus_rsa
host_key_checking = False
retry_files_enabled = False
Pretty standard setup here; the only difference from my other posts is the RSA key instead of a password. Cumulus publishes instructions for setting up passwordless access.
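If you have not created the key pair yet, something along these lines should work (any key name is fine as long as it matches private_key_file above): generate it on the control node with ssh-keygen -t rsa -f ~/.ssh/cumulus_rsa, then push the public half to each switch with ssh-copy-id -i ~/.ssh/cumulus_rsa.pub cumulus@192.0.2.254 (and again for .253).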
hosts-net
[cumulus]
sw1 ansible_host=192.0.2.254
sw2 ansible_host=192.0.2.253
Another standard file that just groups all of the Cumulus switches together. The IP addresses are from a local out-of-band (OOB) network I run on my dev machine.
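With the inventory and key in place, a quick ad-hoc ansible cumulus -m ping should get a pong back from both switches before we touch any playbooks.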
host_vars
sw1.yml
---
interfaces:
  - name: eth0
    auto: true
    address: 192.0.2.254/24
    gateway: 192.0.2.1
  - name: swp1
    auto: true
    address: 192.168.12.1/24
loopbacks:
  - 1.1.1.1/32
bgp:
  asn: 1
  routerid: 1.1.1.1
  neighbors:
    - address: 192.168.12.2
      remoteas: 2
  networks:
    - 1.1.1.1/32
sw2.yml
---
interfaces:
  - name: eth0
    auto: true
    address: 192.0.2.253/24
    gateway: 192.0.2.1
  - name: swp1
    auto: true
    address: 192.168.12.2/24
loopbacks:
  - 2.2.2.2/32
bgp:
  asn: 2
  routerid: 2.2.2.2
  neighbors:
    - address: 192.168.12.1
      remoteas: 1
  networks:
    - 2.2.2.2/32
Each switch has two interfaces: eth0 is the management interface and swp1 is switch port 1. Both are set up as layer 3 interfaces. We also have one loopback per switch, which is advertised to the other switch via BGP.
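If you want to double-check what data a host will load, the debug module works ad hoc too: ansible sw1 -m debug -a "var=bgp" should print the bgp structure exactly as Ansible read it from host_vars.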
Playbooks
We are going to write a few playbooks. The first one backs up all of the configuration we will be working on.
backup-playbook.yml
---
- hosts: cumulus
  become: true
  tasks:
    - name: Backup interface Configuration
      fetch:
        src: /etc/network/interfaces
        flat: yes
        dest: backup/{{ inventory_hostname }}/interfaces
    - name: Backup FRRouting daemon configuration
      fetch:
        src: /etc/frr/daemons
        flat: yes
        dest: backup/{{ inventory_hostname }}/daemons
    - name: Backup FRRouting configuration
      fetch:
        src: /etc/frr/frr.conf
        flat: yes
        dest: backup/{{ inventory_hostname }}/frr.conf
We are going to modify the interfaces, enable BGP, and change BGP settings, so this playbook backs up all three files the later playbooks will touch. become: true tells Ansible to elevate itself with sudo. You will need to add the following line to /etc/sudoers on each switch.
cumulus ALL=(ALL) NOPASSWD: ALL
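If you would rather have Ansible plant that line itself, here is a minimal sketch; it assumes you run it once with --ask-become-pass while sudo still prompts for a password, and the sudoers.d filename is my own choice, not part of the original setup.
---
- hosts: cumulus
  become: true
  tasks:
    - name: Allow passwordless sudo for the cumulus user
      lineinfile:
        path: /etc/sudoers.d/cumulus
        line: 'cumulus ALL=(ALL) NOPASSWD: ALL'
        create: yes
        mode: '0440'
        # validate runs visudo against the temp file first,
        # so a typo cannot lock you out of sudo
        validate: 'visudo -cf %s'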
Running the playbook with ansible-playbook backup-playbook.yml adds the following files to the backup folder.
.
├── sw1
│   ├── daemons
│   ├── frr.conf
│   └── interfaces
└── sw2
    ├── daemons
    ├── frr.conf
    └── interfaces
Here are the relevant parts of the files.
daemons
zebra=no
bgpd=no
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
sharpd=no
pbrd=no
frr.conf
log syslog informational
sw1 interfaces
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0
    address 192.0.2.254/24
    gateway 192.0.2.1
sw2 interfaces
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0
    address 192.0.2.253/24
    gateway 192.0.2.1
Update OS
In the next playbook we will update Cumulus Linux and install vim on the switches.
update-install-playbook.yml
---
- hosts: cumulus
  become: true
  tasks:
    - name: Update cumulus
      apt:
        upgrade: dist
        update_cache: yes
    - name: Install vim
      apt:
        name: vim
The playbook refreshes the apt cache, performs a dist-upgrade, and installs vim.
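One thing this lab glosses over: on a production pair you probably would not want to upgrade both switches at the same time. Ansible's play-level serial keyword is one way to stagger the work; a sketch of the same play, assuming everything else stays unchanged:
---
- hosts: cumulus
  become: true
  # finish the entire play on one switch before starting the next
  serial: 1
  tasks:
    - name: Update cumulus
      apt:
        upgrade: dist
        update_cache: yes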
Configure interfaces
The next playbook will update the interface configuration file. We accomplish this with a Jinja2 template that loops through the data in host_vars for each switch.
interface-playbook.yml
---
- hosts: cumulus
  become: true
  tasks:
    - name: Load Interface configuration
      template:
        src: "templates/interfaces.j2"
        dest: "/etc/network/interfaces"
      notify: reload networking
  handlers:
    - name: reload networking
      command: /sbin/ifreload -a
interfaces.j2
auto lo
iface lo inet loopback
{% for loopback in loopbacks %}
    address {{ loopback }}
{% endfor -%}
{% for interface in interfaces -%}
{% if interface.auto == true -%}
auto {{ interface.name }}
{% endif -%}
iface {{ interface.name }}
{% if interface.address is defined %}
    address {{ interface.address }}
{% endif -%}
{% if interface.gateway is defined %}
    gateway {{ interface.gateway }}
{% endif -%}
{% endfor -%}
Once the template is rendered, the result is copied to the switch, and if the file changed the notify fires the handler, which reloads the interfaces with ifreload. This is accomplished using handlers.
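To make the template concrete, this is roughly what it renders for sw1 (the exact blank lines depend on Jinja2's whitespace trimming):
auto lo
iface lo inet loopback
    address 1.1.1.1/32
auto eth0
iface eth0
    address 192.0.2.254/24
    gateway 192.0.2.1
auto swp1
iface swp1
    address 192.168.12.1/24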
Enable FRRouting
The next playbook enables the FRRouting service, which has to be running before we can turn on BGP.
frrouting-playbook.yml
---
- hosts: cumulus
  become: true
  tasks:
    - name: Enable Zebra
      lineinfile:
        path: /etc/frr/daemons
        regexp: '^zebra='
        line: 'zebra=yes'
    - name: Make sure frr is running
      systemd:
        name: frr.service
        state: started
        enabled: yes
This playbook enables zebra, which the other FRR daemons need in order to run. It also sets the frr service to start on boot and starts it if it is not already running.
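A quick way to confirm from the control node is an ad-hoc call like ansible cumulus -b -m command -a "systemctl is-active frr", which should come back with active on both switches.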
Enable and configure BGP
The next playbook will enable BGP and add the configuration from host_vars to the frr.conf file.
bgp-playbook.yml
---
- hosts: cumulus
  become: true
  tasks:
    - name: Enable BGP
      lineinfile:
        path: "/etc/frr/daemons"
        regexp: '^bgpd='
        line: 'bgpd=yes'
      notify: Restart frr
    - name: Update frr configuration
      blockinfile:
        block: "{{ lookup('template', 'templates/bgp.j2') }}"
        dest: "/etc/frr/frr.conf"
      notify: Restart frr
  handlers:
    - name: Restart frr
      systemd:
        name: frr.service
        state: restarted
The playbook enables BGP by modifying the daemons file and restarting the frr service. It then adds the content rendered from the bgp.j2 template to frr.conf.
bgp.j2
router bgp {{ bgp.asn }}
bgp router-id {{ bgp.routerid }}
{% for neighbor in bgp.neighbors -%}
neighbor {{ neighbor.address }} remote-as {{ neighbor.remoteas }}
{% endfor -%}
{% for network in bgp.networks -%}
network {{ network }}
{% endfor -%}
Here are the modified parts of daemons and frr.conf.
sw1
daemons
zebra=yes
bgpd=yes
--snip--
frr.conf
# BEGIN ANSIBLE MANAGED BLOCK
router bgp 1
bgp router-id 1.1.1.1
neighbor 192.168.12.2 remote-as 2
network 1.1.1.1/32
# END ANSIBLE MANAGED BLOCK
sw2
daemons
zebra=yes
bgpd=yes
--snip--
frr.conf
# BEGIN ANSIBLE MANAGED BLOCK
router bgp 2
bgp router-id 2.2.2.2
neighbor 192.168.12.1 remote-as 1
network 2.2.2.2/32
# END ANSIBLE MANAGED BLOCK
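Those # BEGIN/# END comments are blockinfile's default markers, and they are how the module finds and updates its own block on later runs, so leave them in place. If you would rather have them look like native FRR comments, the module's marker parameter takes a template where {mark} expands to BEGIN or END; a sketch of the same task:
- name: Update frr configuration
  blockinfile:
    block: "{{ lookup('template', 'templates/bgp.j2') }}"
    dest: "/etc/frr/frr.conf"
    # FRR treats lines starting with ! as comments
    marker: "! {mark} ANSIBLE MANAGED BGP"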
The last playbook runs all of the other playbooks in sequence.
all-playbook.yml
---
- import_playbook: backup-playbook.yml
- import_playbook: update-install-playbook.yml
- import_playbook: interface-playbook.yml
- import_playbook: frrouting-playbook.yml
- import_playbook: bgp-playbook.yml
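Because everything keys off the inventory, ansible-playbook all-playbook.yml --limit sw1 runs the whole sequence against a single switch, which is handy when testing a change.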
Here is the full directory structure at this point.
.
├── all-playbook.yml
├── ansible.cfg
├── backup
│   ├── sw1
│   │   ├── daemons
│   │   ├── frr.conf
│   │   └── interfaces
│   └── sw2
│       ├── daemons
│       ├── frr.conf
│       └── interfaces
├── backup-playbook.yml
├── bgp-playbook.yml
├── frrouting-playbook.yml
├── hosts-net
├── host_vars
│   ├── sw1.yml
│   └── sw2.yml
├── interface-playbook.yml
├── templates
│   ├── bgp.j2
│   └── interfaces.j2
└── update-install-playbook.yml
Output
ansible-playbook all-playbook.yml
PLAY [cumulus] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [sw1]
ok: [sw2]
TASK [Backup interface Configuration] ******************************************
ok: [sw1]
ok: [sw2]
TASK [Backup FRRouting daemon configuration] ***********************************
ok: [sw1]
ok: [sw2]
TASK [Backup FRRouting configuration] ******************************************
ok: [sw1]
ok: [sw2]
PLAY [cumulus] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [sw1]
ok: [sw2]
TASK [Update cumulus] **********************************************************
ok: [sw2]
ok: [sw1]
TASK [Install vim] *************************************************************
ok: [sw1]
ok: [sw2]
PLAY [cumulus] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [sw1]
ok: [sw2]
TASK [Load Interface configuration] ********************************************
changed: [sw2]
changed: [sw1]
RUNNING HANDLER [reload networking] ********************************************
changed: [sw1]
changed: [sw2]
PLAY [cumulus] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [sw2]
ok: [sw1]
TASK [Enable Zebra] ************************************************************
changed: [sw1]
changed: [sw2]
TASK [Make sure frr is running] ************************************************
changed: [sw1]
changed: [sw2]
PLAY [cumulus] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [sw1]
ok: [sw2]
TASK [Enable BGP] **************************************************************
changed: [sw1]
changed: [sw2]
TASK [Update frr configuration] ************************************************
changed: [sw2]
changed: [sw1]
RUNNING HANDLER [Restart frr] **************************************************
changed: [sw1]
changed: [sw2]
PLAY RECAP *********************************************************************
sw1 : ok=17 changed=7 unreachable=0 failed=0
sw2 : ok=17 changed=7 unreachable=0 failed=0
Looks like everything ran as expected. Let's see if it worked.
cumulus@sw1:~$ net show bgp
show bgp ipv4 unicast
=====================
BGP table version is 2, local router ID is 1.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network          Next Hop            Metric LocPrf Weight Path
*> 1.1.1.1/32       0.0.0.0                  0         32768 i
*> 2.2.2.2/32       192.168.12.2             0             0 2 i
Displayed 2 routes and 2 total paths
show bgp ipv6 unicast
=====================
No BGP prefixes displayed, 0 exist
cumulus@sw2:~$ net show bgp
show bgp ipv4 unicast
=====================
BGP table version is 2, local router ID is 2.2.2.2
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network          Next Hop            Metric LocPrf Weight Path
*> 1.1.1.1/32       192.168.12.1             0             0 1 i
*> 2.2.2.2/32       0.0.0.0                  0         32768 i
Displayed 2 routes and 2 total paths
show bgp ipv6 unicast
=====================
No BGP prefixes displayed, 0 exist
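If the neighbors had not come up, net show bgp summary on either switch would be my first stop to check the session state.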
The configuration was successful. If we were managing this at scale we could use groups or tags to split the switches up, but for this simple example I wanted to keep things straightforward. Adding more switches is as easy as dropping another file into the host_vars folder. Managing Linux-based switches like Cumulus with Ansible feels more natural to me than using a vendor NOS like Cisco's or Arista's. The source code for this post can be found here.