# syscom-playground

The objective of this repo is to allow syscom members to create a local development environment that is reasonably close to the services running on the CSC servers. The idea is to encourage experimentation without breaking the real services and causing outages.

## Prerequisites

This repo consists of several Ansible playbooks which automate tasks in LXC containers. I strongly recommend creating a VM and running the containers inside it to avoid screwing up the network interfaces on your real computer. I am using KVM + QEMU, but VirtualBox should theoretically also work. The VM should be running some reasonably recent version of Debian or Ubuntu. Two CPU cores and 2 GB of RAM should be sufficient.
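
If you are also using KVM + QEMU, a virt-install invocation along these lines can create a suitable VM; the name, disk size, OS variant, and ISO path below are illustrative placeholders, not anything this repo requires:

```
sudo virt-install \
	--name playground \
	--memory 2048 \
	--vcpus 2 \
	--disk size=20 \
	--network network=default \
	--os-variant debian11 \
	--cdrom /path/to/debian-netinst.iso
```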

The VM should be attached to a bridge interface with NAT forwarding. QEMU should create a default interface like this called 'virbr0'. For this tutorial, I am assuming that the interface subnet is 192.168.122.0/24 and that the bridge IP address on the host is 192.168.122.1. If you decide to use a different subnet, make sure to update the hosts file accordingly. If you need to edit the subnet which QEMU uses, do so via virsh or virt-manager; do not modify the subnet manually using iproute2. This is because libvirt needs to know the subnet in order to set up dnsmasq and iptables properly.
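
If you do need to change it, virsh provides the relevant subcommands (these are standard libvirt tools, nothing specific to this repo):

```
# Show the definition of the default NAT network (bridge name, subnet, DHCP range)
virsh net-dumpxml default

# Edit the definition; libvirt regenerates the dnsmasq and iptables state from it
virsh net-edit default

# Restart the network so the change takes effect
virsh net-destroy default
virsh net-start default
```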

Once the VM is up and running, you will need to create a shared bridge interface. First, disable the default bridge:

```
systemctl stop lxc-net
systemctl mask lxc-net
```

Then paste the following into `/etc/network/interfaces`:

```
iface enp1s0 inet manual

auto lxcbr0
iface lxcbr0 inet dhcp
	bridge_ports enp1s0
	bridge_fd 0
	bridge_maxwait 0
```

Replace enp1s0 with the name of the default interface in the VM. Note that the `bridge_*` options are provided by the bridge-utils package, so install that if it is not already present. Then restart the VM.
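
After the reboot, it is worth sanity-checking that the bridge came up and that the physical interface is enslaved to it:

```
# enp1s0 (or whatever your interface is called) should appear as a port of lxcbr0
bridge link show
```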

Once you have restarted the VM, take note of its IP address on lxcbr0, and write it to the variable `host_ipv4_addr` in the hosts file.
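
For example (the address below is illustrative, and exactly where the variable lives depends on how the hosts inventory in this repo is laid out):

```
# Find the VM's address on the bridge
ip -4 addr show lxcbr0

# Then record it in the hosts file; in an INI-style Ansible inventory a
# variable assignment looks roughly like this (illustrative value):
#   host_ipv4_addr=192.168.122.50
```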

## Creating the LXC containers

Install the lxc-utils package if you have not done so already:

```
apt install lxc-utils
```

For the time being, it is necessary to manually create each container and install python3 in it before running the corresponding playbook. For example, to set up the DNS container:

```
lxc-create -t download -n dns -- -d debian -r buster -a amd64
lxc-start dns
lxc-attach dns
apt update
apt install python3
exit
```
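
Since the repo contains playbooks for several services (coffee, dns, mail, outsider), you may want to bootstrap all of the containers in one go; a minimal sketch, assuming the container names match the playbook directories:

```
for c in coffee dns mail outsider; do
	lxc-create -t download -n "$c" -- -d debian -r buster -a amd64
	lxc-start "$c"
	lxc-attach -n "$c" -- sh -c 'apt update && apt install -y python3'
done
```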

The containers should be privileged, since the CSC currently uses privileged LXC containers. If we switch to unprivileged containers in the future, this repo should be updated accordingly.
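
Containers created by root with the stock configuration are privileged by default; one way to confirm is to check that the container's config has no id mapping (the path below assumes the default system-wide LXC location):

```
# A privileged container has no lxc.idmap entries in its config
grep lxc.idmap /var/lib/lxc/dns/config || echo "dns is privileged"
```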

It is also necessary to have Ansible and the Python LXC bindings installed on the host where the LXC containers are running, e.g. on Debian:

```
apt install ansible python3-lxc
```
Now we are ready to run the playbook:

```
ansible-playbook dns/main.yml
```
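
The standard ansible-playbook flags work here too; for instance, you can do a dry run before applying anything, and the other services follow the same pattern (assuming each service directory has its own main.yml, as dns does):

```
# Preview what would change without applying anything
ansible-playbook dns/main.yml --check

# Run another service's playbook the same way
ansible-playbook mail/main.yml
```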

If you see a whole bunch of errors like

```
RuntimeError: cannot release un-acquired lock
```

it is safe to ignore those. [Here](https://github.com/lxc/python3-lxc/issues/11)
is the GitHub issue if you are interested.