Last updated: 01-03-2018 11:50
Install Ubuntu Linux Server 17.10 on bare metal or on a hypervisor such as Kernel Virtual Machine (KVM) or Oracle VirtualBox as a host for an LXD/LXC container. This host is called lxd.
If VirtualBox is being used, ensure that in the network adapter's Advanced settings, Promiscuous Mode is set to Allow All. A second adapter can be enabled if the hardware has two network ports, for example Ethernet and WiFi; this aids access to the virtual server. Disable Dynamic Host Configuration Protocol (DHCP) for the Ethernet port (enp0s3), create a bridge (br0) and place enp0s3 in the bridge. This bridge will be used later by the BIRD container for network access.
ubuntu@lxd:~$ cd /etc/netplan/
ubuntu@lxd:/etc/netplan$ sudo mv 01-netcfg.yaml 01-netcfg.yaml.bak
ubuntu@lxd:/etc/netplan$ cat <<EOM | sudo tee 01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: false
      dhcp6: false
    enp0s8:
      dhcp4: true
      dhcp6: false
  bridges:
    br0:
      dhcp4: true
      dhcp6: false
      addresses: ['199.9.9.100/24','2a99:9:9::100/48']
      interfaces: ['enp0s3']
      parameters:
        forward-delay: 9
        hello-time: 2
EOM
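The heredoc piped to tee, as above, writes a multi-line file in one step while echoing the content back to the terminal, which is why the file body appears in the transcripts. A minimal standalone sketch of the pattern, using a hypothetical scratch path instead of the real netplan file:

```shell
# Write a small YAML fragment to a scratch file (hypothetical path
# /tmp/netplan-demo.yaml); tee duplicates the input to both the file
# and stdout. Indentation inside the heredoc is preserved verbatim,
# which matters for YAML.
cat <<EOM | tee /tmp/netplan-demo.yaml
network:
  version: 2
EOM
```

With sudo tee, as in the real command, the redirection is performed by tee running as root, which avoids the usual "sudo echo > file" permission pitfall.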
Establish the default editor.
ubuntu@lxd:~$ cat <<EOM | sudo tee --append ~/.bashrc
export EDITOR='vi'
export VISUAL='vi'
EOM
ubuntu@lxd:~$ . .bashrc
ubuntu@lxd:~$ echo $EDITOR $VISUAL
vi vi
Reboot the server.
ubuntu@lxd:~$ sudo shutdown -r now
Install LXD and the LXD client. The recommended storage backend for LXD is the ZFS filesystem, stored either in a preallocated file or on block storage. To use ZFS support in LXD, update the package list and install zfsutils-linux as well. The sed edit of /etc/default/grub enables kernel swap accounting, which LXD needs in order to enforce container swap limits.
ubuntu@lxd:~$ sudo apt update
ubuntu@lxd:~$ sudo apt dist-upgrade
ubuntu@lxd:~$ sudo apt -y install fping mtr
ubuntu@lxd:~$ sudo apt -y install lxd lxd-client zfsutils-linux
ubuntu@lxd:~$ sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=""/GRUB_CMDLINE_LINUX_DEFAULT="swapaccount=1"/' /etc/default/grub
ubuntu@lxd:~$ sudo update-grub
ubuntu@lxd:~$ lxc --version && lxd --version
2.18
2.18
Check that an lxd group was created and the user ubuntu was added to it.
ubuntu@lxd:~$ cat /etc/group | grep lxd
lxd:x:111:ubuntu
If not, add the group and add the user to it. The id -un command substitution simply inserts the current username into the command, so the current user is added to the group.
ubuntu@lxd:~$ sudo groupadd lxd
ubuntu@lxd:~$ sudo usermod --append --groups lxd `id -un`
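The backquoted `id -un` above runs first and its output, the current login name, becomes the final argument to usermod. Shown on its own (the `$( )` form is the modern equivalent of backquotes):

```shell
# id -un prints the login name of the invoking user; capturing it in a
# variable shows exactly what the command substitution hands to usermod.
me=$(id -un)
echo "$me"
```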
Confirm.
ubuntu@lxd:~$ cat /etc/group | grep lxd
lxd:x:111:ubuntu
Reboot.
ubuntu@lxd:~$ sudo shutdown --reboot now
Log in after reboot and confirm the LXD container service is running.
ubuntu@lxd:~$ systemctl status lxd-containers.service
● lxd-containers.service - LXD - container startup/shutdown
   Loaded: loaded (/lib/systemd/system/lxd-containers.service; enabled; vendor p
   Active: active (exited) since Mon 2017-11-20 11:21:44 EAT; 10min ago
     Docs: man:lxd(1)
  Process: 890 ExecStart=/usr/bin/lxd activateifneeded (code=exited, status=0/SU
 Main PID: 890 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   Memory: 0B
      CPU: 0
   CGroup: /system.slice/lxd-containers.service

Nov 20 11:21:42 lxd systemd[1]: Starting LXD - container startup/shutdown...
Nov 20 11:21:44 lxd systemd[1]: Started LXD - container startup/shutdown.
Set up storage and networking for LXD.
ubuntu@lxd:~$ sudo lxd init
[sudo] password for ubuntu:
Do you want to configure a new storage pool (yes/no) [default=yes]?
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, btrfs, lvm, zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15GB]: 6
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Trust password for new clients: ubuntu
Again: ubuntu
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]? no
LXD has been successfully configured.
Create an Ubuntu 17.10 Artful Aardvark container.
ubuntu@lxd:~$ lxc init ubuntu:17.10 rs
Creating rs
Retrieving image: rootfs: 82% (93.94kB/s)
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

ubuntu@lxd:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| rs   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+

ubuntu@lxd:~$ lxc profile copy default br0-profile
Edit the container configuration profiles.
ubuntu@lxd:~$ lxc profile edit br0-profile
### This is a yaml representation of the profile.
config: {}
description: Network LXD profile
devices:
  enp0s3:
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: br0-profile
used_by: []
Edit the rs container configuration to use the new br0-profile profile.
ubuntu@lxd:~$ lxc config edit rs

profiles:
- br0-profile
Start the container rs.
ubuntu@lxd:~$ lxc start rs
Check the container.
ubuntu@lxd:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| rs   | RUNNING | 192.168.89.5 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
Connect to the container rs and update.
ubuntu@lxd:~$ lxc exec rs /bin/bash
root@rs:~# apt update
root@rs:~# apt upgrade
Set the container to autostart.
ubuntu@lxd:~$ lxc config set rs boot.autostart true
Log in to the new rs container and change the ubuntu user's password.
root@rs:~# passwd ubuntu
Enter new UNIX password: ubuntu
Retype new UNIX password: ubuntu
Allow login with a password. By default, Ubuntu cloud images do not allow logging in via password, so enable tunneled clear text passwords and challenge-response passwords.
root@rs:~# sed -ibak 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
root@rs:~# sed -ibak 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
root@rs:~# systemctl restart sshd
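These in-place sed edits can be tried safely on a scratch copy first; a sketch using a hypothetical file under /tmp rather than the live sshd_config:

```shell
# Create a scratch file holding the stock setting, then apply the same
# substitution. The -ibak option edits in place and keeps the original
# in a file with a "bak" suffix appended, exactly as the commands above
# do for sshd_config.
printf 'PasswordAuthentication no\n' > /tmp/sshd_config.demo
sed -ibak 's/PasswordAuthentication no/PasswordAuthentication yes/' /tmp/sshd_config.demo
cat /tmp/sshd_config.demo
```

After the edit, /tmp/sshd_config.demo holds the new value and /tmp/sshd_config.demobak preserves the original line.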
Set locale as required.
root@rs:~# locale-gen en_IE.UTF-8
Generating locales (this might take a while)...
  en_IE.UTF-8... done
Generation complete.
root@rs:~# dpkg-reconfigure locales
Generating locales (this might take a while)...
  en_GB.UTF-8... done
  en_IE.UTF-8... done
Generation complete.
root@rs:~# update-locale LC_ALL=en_IE.UTF-8
root@rs:~# locale
LANG=en_IE.UTF-8
LANGUAGE=en_IE:en
LC_CTYPE="en_IE.UTF-8"
LC_NUMERIC="en_IE.UTF-8"
LC_TIME="en_IE.UTF-8"
LC_COLLATE="en_IE.UTF-8"
LC_MONETARY="en_IE.UTF-8"
LC_MESSAGES="en_IE.UTF-8"
LC_PAPER="en_IE.UTF-8"
LC_NAME="en_IE.UTF-8"
LC_ADDRESS="en_IE.UTF-8"
LC_TELEPHONE="en_IE.UTF-8"
LC_MEASUREMENT="en_IE.UTF-8"
LC_IDENTIFICATION="en_IE.UTF-8"
LC_ALL=en_IE.UTF-8
Find the interface.
root@rs:~# netstat -i
Kernel Interface table
Iface      MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0      1500 21070      0      0      0 17551      0      0      0 BMRU
lo       65536    36      0      0      0    36      0      0      0 LRU
Set the IP addresses on the container.
root@rs:~# cd /etc/netplan/
root@rs:/etc/netplan# mv 50-cloud-init.yaml 50-cloud-init.yaml.bak
root@rs:/etc/netplan# cat <<EOM | tee 50-cloud-init.yaml
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses: ['199.9.9.1/24','2a99:9:9::1/48']
EOM
root@rs:/etc/netplan# netplan apply
Confirm the IP addresses.
root@rs:~# ip address show dev eth0 | grep inet | awk '{print $2}'
199.9.9.1/24
2a99:9:9::1/48
2a99:9:9:0:216:3eff:fedb:2a3b/48
fe80::216:3eff:fedb:2a3b/64
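The pipeline above keeps only the second whitespace-separated field of each inet/inet6 line. The same filter can be run over captured sample text (the printf lines below are hypothetical stand-ins for real `ip address show` output):

```shell
# grep selects the address lines, awk prints field 2 (the CIDR address).
printf '    inet 199.9.9.1/24 brd 199.9.9.255 scope global eth0\n    inet6 2a99:9:9::1/48 scope global\n' \
  | grep inet | awk '{print $2}'
```

This prints 199.9.9.1/24 and 2a99:9:9::1/48, one per line, matching the transcript above.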
Create a backup of the image and move it to a safe repository.
ubuntu@lxd:~$ lxc stop rs
ubuntu@lxd:~$ lxc publish rs/ub-17-10 --alias ub-17-10-lxc
Container published with fingerprint: 5b0024fc15cca9672e7d8b3c4013091da4d9b8ee5615c4efddeea79caaead39b
ubuntu@lxd:~$ lxc image export ub-17-10-lxc ~/ub-17-10-lxc.tgz
Image exported successfully!
Install the BIRD Routing Daemon.
ubuntu@rs:~$ sudo apt -y install bird
Add the ubuntu user to the newly created bird group.
ubuntu@rs:~$ cat /etc/group | grep bird
bird:x:117:
ubuntu@rs:~$ sudo usermod --append --groups bird ubuntu
ubuntu@rs:~$ cat /etc/group | grep bird
bird:x:117:ubuntu
Log out and back in again. Confirm that user ubuntu has been added to the bird group.
ubuntu@rs:~$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),110(lxd),115(netdev),116(bird)
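For scripting, `id -nG` prints group names only, which is easier to test than parsing the full id output above. A sketch that checks membership in a named group; the primary group from `id -gn` is used here so the check always succeeds, but bird can be substituted to test the case above:

```shell
# List group names one per line and test for an exact match.
group=$(id -gn)   # substitute e.g. group=bird
if id -nG | tr ' ' '\n' | grep -qx "$group"; then
  echo "member of $group"
fi
```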
Review the /etc/bird directory.
ubuntu@rs:~$ cd /etc/bird
ubuntu@rs:/etc/bird$ ls -la
total 20
drwxr-x---  2 bird bird 4096 Oct 10 08:48 .
drwxr-xr-x 91 root root 4096 Oct 10 08:52 ..
-rw-r-----  1 bird bird 1007 Sep 18  2016 bird6.conf
-rw-r-----  1 bird bird 1007 Sep 18  2016 bird.conf
-rw-r--r--  1 root root   51 Sep 18  2016 envvars
Move the original bird.conf and bird6.conf files aside as archive copies.
ubuntu@rs:/etc/bird$ sudo mv bird.conf bird.conf.orig
ubuntu@rs:/etc/bird$ sudo mv bird6.conf bird6.conf.orig
ubuntu@rs:~$ cat <<EOM | sudo tee /etc/bird/bird.conf
log syslog all;
router id 199.9.9.1;

define LOCAL_AS = 5999;

template bgp PEERS {
  rs client;
  local as LOCAL_AS;
  import all;
  export all;
}

protocol device {
  scan time 10;
}

protocol kernel {
  export all;
  scan time 15;
}

protocol bgp AS5111 from PEERS {
  neighbor 199.9.9.11 as 5111;
}

protocol bgp AS5222 from PEERS {
  neighbor 199.9.9.22 as 5222;
}
EOM
Change the bird.conf owner and group to bird and bird. Also change the file permissions to read-write for the user (rw-), read for the group (r--) and no rights for others (---).
ubuntu@rs:~$ sudo chown bird:bird /etc/bird/bird.conf
ubuntu@rs:~$ sudo chmod 640 /etc/bird/bird.conf
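Octal 640 encodes exactly that rw-/r--/--- pattern: 6 (4+2) for the owner, 4 for the group, 0 for others. A quick demonstration on a throwaway file:

```shell
# Create a temporary file, apply mode 640, and print the octal and
# symbolic permission forms side by side.
tmpf=$(mktemp)
chmod 640 "$tmpf"
stat -c '%a %A' "$tmpf"   # → 640 -rw-r-----
rm -f "$tmpf"
```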
ubuntu@rs:~$ cat <<EOM | sudo tee /etc/bird/bird6.conf
log syslog all;
router id 199.9.9.1;

define LOCAL_AS = 5999;

template bgp PEERS {
  rs client;
  local as LOCAL_AS;
  import all;
  export all;
}

protocol device {
  scan time 10;
}

protocol kernel {
  export all;
  scan time 15;
}

protocol bgp AS5111 from PEERS {
  neighbor 2a99:9:9::11 as 5111;
}

protocol bgp AS5222 from PEERS {
  neighbor 2a99:9:9::22 as 5222;
}
EOM
Change the bird6.conf owner and group to bird and bird. Also change the file permissions to read-write for the user (rw-), read for the group (r--) and no rights for others (---).
ubuntu@rs:~$ sudo chown bird:bird /etc/bird/bird6.conf
ubuntu@rs:~$ sudo chmod 640 /etc/bird/bird6.conf
Use the systemctl service manager to enable the BIRD daemons. This hooks bird and bird6 into the relevant systemd targets so they start automatically on boot. Then start the daemons and confirm they have started successfully.
ubuntu@rs:~$ sudo systemctl enable bird
Synchronizing state of bird.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable bird
ubuntu@rs:~$ sudo systemctl start bird
ubuntu@rs:~$ sudo systemctl enable bird6
Synchronizing state of bird6.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable bird6
ubuntu@rs:~$ sudo systemctl start bird6
Confirm that the BIRD Server is routing.
root@rs:~# birdc show protocol
BIRD 1.6.3 ready.
name     proto    table    state  since       info
device1  Device   master   up     09:53:16
kernel1  Kernel   master   up     09:53:16
AS5111   BGP      master   up     09:53:21    Established
AS5222   BGP      master   up     09:53:21    Established
root@rs:~# birdc6 show protocol
BIRD 1.6.3 ready.
name     proto    table    state  since       info
device1  Device   master   up     09:53:18
kernel1  Kernel   master   up     09:53:18
AS5111   BGP      master   up     09:53:23    Established
AS5222   BGP      master   up     09:53:23    Established
root@rs:~# birdc show route
BIRD 1.6.3 ready.
199.1.1.0/24  via 199.9.9.11 on eth0 [AS5111 09:53:21] * (100) [AS5111i]
199.2.2.0/24  via 199.9.9.22 on eth0 [AS5222 09:53:21] * (100) [AS5222i]
root@rs:~# birdc6 show route
BIRD 1.6.3 ready.
2a99:2:2::/48 via 2a99:9:9::22 on eth0 [AS5222 09:53:22] * (100) [AS5222i]
2a99:1:1::/48 via 2a99:9:9::11 on eth0 [AS5111 09:53:23] * (100) [AS5111i]