# syscom-dev-environment
The objective of this repo is to allow syscom members to create a local development environment which is reasonably close to the services which run on the CSC servers. The idea is to encourage experimentation without breaking the real services and causing outages.
## For Windows Users
Update: WSL2 doesn't seem to be working too well with LXC. I suggest using VirtualBox or VMWare instead.

Set up WSL2 and open a terminal to it. See the official setup instructions. Make sure you use Ubuntu/Ubuntu Latest from the Windows Store.
Once setup is complete, run the following command to update everything:

```
sudo apt update && sudo apt full-upgrade -y --auto-remove --fix-broken --fix-missing --fix-policy --show-progress && sudo apt autoclean
```
## Prerequisites
This repo consists of several Ansible playbooks which will automate tasks in LXC containers. I strongly recommend creating a VM and running the containers inside the VM to avoid screwing up the network interfaces on your real computer. I am using KVM + QEMU, but VirtualBox should theoretically also work. The VM should be running some reasonably recent version of Debian or Ubuntu. 2 CPU cores and 2 GB of RAM should be sufficient.
Update: I previously recommended using a shared bridge interface in the VM. This appears to be causing issues for VMWare users, so I am now recommending a standalone bridge instead with NAT masquerading. The instructions for the shared bridge should still work, but if you are creating the dev environment from scratch, I suggest using the standalone bridge instead.
Note that if you do use the standalone bridge, the containers will not be accessible from outside the VM, so if you need to access one of the containers from your physical host, you will need to set up TCP forwarding via `socat` or something similar.
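For example, suppose a container were serving HTTP on 192.168.100.10 (a made-up address for illustration); running something like this in the VM would forward port 8080 on the VM to the container, so the service becomes reachable from the physical host at the VM's address:

```
socat TCP-LISTEN:8080,fork,reuseaddr TCP:192.168.100.10:80
```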
No matter which network setup you decide to use, you will need to manually create a `hosts` file before running any of the playbooks. Copy the `hosts.sample` file as a starting point and edit it as needed:
```
cp hosts.sample hosts
```
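The exact contents come from `hosts.sample`, but since the playbooks are Ansible, the file is an Ansible inventory. As a purely illustrative sketch (the group name and container IP below are made up; `host_ipv4_addr` is the variable referenced later in this README), it might look roughly like:

```
# hypothetical inventory layout -- copy hosts.sample for the real one
[dns]
192.168.100.2

[all:vars]
host_ipv4_addr=192.168.100.1
```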
Make sure you have the `bridge-utils` package installed in the VM. This should be installed by default on Ubuntu, but you may have to manually install it on Debian.
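On Debian, that is just (as root):

```
apt install -y bridge-utils
```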
Also, make sure you disable the default LXC bridge, as it will interfere with our own bridge:

```
systemctl stop lxc-net
systemctl mask lxc-net
```
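You can confirm the unit is masked afterwards; this should print `masked`:

```
systemctl is-enabled lxc-net
```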
### Standalone bridge
Your `/etc/network/interfaces` (in the VM) should look like the following:

```
auto enp1s0
iface enp1s0 inet dhcp

auto lxcbr1
iface lxcbr1 inet static
    bridge_ports none
    bridge_fd 0
    bridge_maxwait 0
    address 192.168.100.1/24
    up   iptables -t nat -C POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE 2>/dev/null || \
         iptables -t nat -A POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE
    down iptables -t nat -D POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE 2>/dev/null || true
```
Replace `enp1s0` with the default interface on the VM. Replace `192.168.100.1/24` and `192.168.100.0/24` with whatever IP address and subnet you want to use for the bridge.
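If you are not sure which interface is the default one, the kernel's default route will tell you; the interface named after `dev` in the output is the one to use:

```
ip route show default
```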
Now bring up the bridge:

```
ifup lxcbr1
```
Make sure you update the `hosts` file to match whichever IP address and subnet you chose.
Now open `/etc/lxc/default.conf` and make sure it looks like the following:

```
lxc.net.0.type = veth
lxc.net.0.link = lxcbr1
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
```

(The `hwaddr` can be different.) This will ensure that new containers have this configuration by default.
### Shared bridge
The VM should be attached to a bridge interface with NAT forwarding. QEMU should create a default interface like this called `virbr0`. For this tutorial, I am assuming that the interface subnet is 192.168.122.0/24 and that the bridge IP address on the host is 192.168.122.1. If you decide to use a different subnet, make sure to update the `hosts` file accordingly.

If you need to edit the subnet which QEMU uses, do this via virsh or virt-manager; do not modify the subnet manually using iproute2. This is because libvirt needs to know what the subnet is to set up dnsmasq and iptables properly.
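For example, assuming the libvirt network is the usual one named `default`, you can edit its definition (including the subnet) on the physical host with:

```
virsh net-edit default
```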
Your `/etc/network/interfaces` should look like the following:

```
iface enp1s0 inet manual

auto lxcbr0
iface lxcbr0 inet dhcp
    bridge_ports enp1s0
    bridge_fd 0
    bridge_maxwait 0
```
Replace `enp1s0` with the name of the default interface in the VM. Then, restart the VM.

If the default interface is not `eth0`, then update `roles/network_setup/templates/interfaces.j2` and `dns/templates/dnsmasq.conf.j2` accordingly.
Once you have restarted the VM, take note of its IP address on `lxcbr0`, and write it to the variable `host_ipv4_addr` in the `hosts` file.
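One way to find that address (inside the VM):

```
ip -4 addr show lxcbr0
```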
## Creating the LXC containers
Start a root shell with `sudo -s`.

Install the lxc-utils package if you have not done so already:

```
apt update && apt install lxc-utils
```
For the time being, it is necessary to manually create each container and to install python3 in it before running the corresponding playbooks. For example, to set up the DNS container:

```
lxc-create -t download -n dns -- -d debian -r buster -a amd64
lxc-start dns
lxc-attach dns
apt update
apt install -y python3
```
You can now press Ctrl+D to exit the LXC shell.
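If you need several containers (the repo also contains playbooks such as `auth1` and `coffee`), a small loop can save some typing. This is just a sketch, assuming each container uses the same Debian buster image as the `dns` example above; adjust the list of names to whichever playbooks you plan to run:

```
for name in dns auth1 coffee; do
    lxc-create -t download -n "$name" -- -d debian -r buster -a amd64
    lxc-start "$name"
    # install python3 inside the container so Ansible can manage it
    lxc-attach "$name" -- sh -c 'apt update && apt install -y python3'
done
```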
The containers should be privileged since the CSC currently uses privileged LXC containers. If we switch to unprivileged containers in the future, this repo should be correspondingly updated.
It is also necessary to have Ansible and the Python LXC driver installed on the host where the LXC containers are running, e.g. for Debian:

```
apt install -y ansible python3-lxc
```
Now we are ready to run the playbook:

```
ansible-playbook dns/main.yml
```
If you see a whole bunch of errors like

```
RuntimeError: cannot release un-acquired lock
```

it is safe to ignore those. Here is the GitHub issue if you are interested.