# syscom-dev-environment

The objective of this repo is to allow syscom members to create a local development environment that is reasonably close to the services running on the CSC servers. The idea is to encourage experimentation without breaking the real services and causing outages.

## Prerequisites

This repo consists of several Ansible playbooks which will automate tasks in LXC containers. I strongly recommend creating a VM and running the containers inside the VM to avoid screwing up the network interfaces on your real computer. I am using KVM + QEMU, but VirtualBox should theoretically also work. The VM should be running some reasonably recent version of Debian or Ubuntu. 2 CPU cores, 2 GB of RAM and 30 GB of disk space should be sufficient.

Note: The localhost playbook has only been tested on Debian 10.9.0 (standard edition, i.e. no desktop). Other Debian variants should theoretically work, but have not been tested. In particular, at the time of this writing, I suggest staying away from Debian 10.10.0, as it has a kernel bug which breaks LXC.

The first thing you need to do is read the `group_vars/all.yml` file and see if there are any parameters you would like to change. In particular, you may wish to set `upstream_dns` to the DNS server used by the VM (check `/etc/resolv.conf`).
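On a typical glibc resolver setup, the upstream DNS is the first `nameserver` line in `/etc/resolv.conf`. Here is a small sketch for pulling it out; it uses a sample file so it is safe to run anywhere — on the VM you would point the `awk` at `/etc/resolv.conf` directly:

```shell
# Build a sample resolv.conf so this sketch is self-contained;
# on the VM you would read /etc/resolv.conf directly.
cat > /tmp/sample-resolv.conf <<'EOF'
# generated by dhclient
nameserver 192.168.122.1
nameserver 1.1.1.1
EOF

# Print the first nameserver -- a candidate value for upstream_dns.
awk '/^nameserver/ { print $2; exit }' /tmp/sample-resolv.conf
# → 192.168.122.1
```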

## localhost playbook

The playbook `localhost/main.yml` should, theoretically, create all of the necessary containers and run the playbooks individually. If this doesn't work, please file a bug (make sure you test it on a "blank slate" first, i.e. destroy all existing containers).
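Getting back to a blank slate can be done with a loop like the sketch below. It is deliberately a dry run: it only prints the `lxc-stop`/`lxc-destroy` commands (drop the `echo`s to actually run them as root), and the hard-coded container list is just an example — on a real host you would use `lxc-ls -1` instead.

```shell
# Dry run: print the teardown commands for each container.
# On a real host, replace the example list with: containers=$(lxc-ls -1)
containers="auth1 dns mail"
for c in $containers; do
    echo lxc-stop -n "$c"
    echo lxc-destroy -n "$c"
done
```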

First, install Ansible:

```
apt install ansible
```

Then run the playbook:

```
ansible-playbook localhost/main.yml
```

In each container folder, you will find instructions in the README to set up the container manually. Ignore those; they are from a time before the localhost playbook existed. The other information in the README will still be useful, though.

## Manual setup

You should only do this if the localhost playbook above is not working.

Important: You need to enable packet forwarding in the VM. Add or uncomment the following line in `/etc/sysctl.conf`:

```
net.ipv4.ip_forward=1
```

Then run:

```
sysctl -p
```
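To double-check that the setting took effect, you can read the value back from `/proc` (1 means forwarding is on, 0 means off):

```shell
# Prints 1 when packet forwarding is enabled, 0 when it is not;
# falls back to "unknown" on systems without this /proc entry.
cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo unknown
```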

On Ubuntu, make sure you disable the default LXC bridge, as it will interfere with our own bridge:

```
systemctl stop lxc-net
systemctl mask lxc-net
```

### Standalone bridge

Your `/etc/network/interfaces` (in the VM) should look like the following:

```
auto enp1s0
iface enp1s0 inet dhcp

auto lxcbr1
iface lxcbr1 inet static
    bridge_ports none
    bridge_fd 0
    bridge_maxwait 0
    address 192.168.100.1/24
    up iptables -t nat -C POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE 2>/dev/null || \
       iptables -t nat -A POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE
    down iptables -t nat -D POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE 2>/dev/null || true
```

Replace `enp1s0` with the default interface on the VM. Replace `192.168.100.1/24` and `192.168.100.0/24` with whatever IP address and subnet you want to use for the bridge.
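If you script this step, the substitution can be done with `sed` over a template. This is only a sketch: the template path, the `IFACE`/`BR_ADDR` placeholder names, and the cut-down file contents are all inventions of this example.

```shell
# Write a cut-down interfaces template with placeholders
# (IFACE and BR_ADDR are made up for this sketch).
cat > /tmp/interfaces.tmpl <<'EOF'
auto IFACE
iface IFACE inet dhcp

auto lxcbr1
iface lxcbr1 inet static
    address BR_ADDR
EOF

# Fill in the VM's real interface name and bridge address.
sed -e 's/IFACE/enp1s0/g' \
    -e 's#BR_ADDR#192.168.100.1/24#g' \
    /tmp/interfaces.tmpl > /tmp/interfaces.out

grep 'iface enp1s0' /tmp/interfaces.out
# → iface enp1s0 inet dhcp
```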

Now bring up the bridge:

```
ifup lxcbr1
```

Make sure you update the `hosts` file to match whichever IP address and subnet you chose.

Now open `/etc/lxc/default.conf` and make sure it looks like the following:

```
lxc.net.0.type = veth
lxc.net.0.link = lxcbr1
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
```
(The hwaddr can be different.) This will ensure that new containers have this configuration by default.

### Creating the LXC containers

Start a root shell with `sudo -s`.

Install the lxc-utils package if you have not done so already:

```
apt update && apt install lxc-utils
```

For the time being, it is necessary to manually create each container and to install python3 in it before running the corresponding playbook. For example, to set up the DNS container, run the following as root:

```
lxc-create -t download -n dns -- -d debian -r buster -a amd64
lxc-start -n dns
chroot /var/lib/lxc/dns/rootfs
echo 'nameserver 1.1.1.1' > /etc/resolv.conf
apt update
apt install -y python3
exit
```

We are using chroot because the network interfaces have not been set up in the container yet (they will be set up via a playbook).
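The per-container steps above can be looped over all the containers you need. The sketch below is a dry run: it only prints the commands (drop the `echo`s to execute them as root), the container list is an example, and it condenses the interactive chroot session into a single non-interactive `chroot ... sh -c` invocation.

```shell
# Dry run: print the create/start/bootstrap commands per container.
for name in dns mail coffee; do
    echo lxc-create -t download -n "$name" -- -d debian -r buster -a amd64
    echo lxc-start -n "$name"
    # Non-interactive equivalent of the chroot session shown above.
    echo "chroot /var/lib/lxc/$name/rootfs sh -c 'echo nameserver 1.1.1.1 > /etc/resolv.conf && apt update && apt install -y python3'"
done
```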

The containers should be privileged since the CSC currently uses privileged LXC containers. If we switch to unprivileged containers in the future, this repo should be correspondingly updated.

It is also necessary to have Ansible and the Python LXC driver installed on the host where the LXC containers are running, e.g. for Debian:

```
apt install -y ansible python3-lxc
```

Now we are ready to run the playbook:

```
ansible-playbook dns/main.yml
```

If you see a whole bunch of errors like

```
RuntimeError: cannot release un-acquired lock
```

it is safe to ignore those. Here is the GitHub issue if you are interested.