
Add instructions for standalone bridge

merge-requests/4/head
Max Erenberg · 1 year ago · commit 4ab391e886
  1. .gitignore (2 changes)
  2. README.md (81 changes)
  3. hosts (20 changes)
  4. hosts.sample (37 changes)
  5. mail/README.md (51 changes)
  6. mail/mailman3/mailman3.yml (19 changes)
  7. outsider/README.md (46 changes)
  8. roles/systemd_workarounds/tasks/main.yml (2 changes)

.gitignore vendored (2 changes)

@@ -1,2 +1,4 @@
# Vim
.*.swp
/hosts

README.md (81 changes)

@@ -5,13 +5,15 @@ run on the CSC servers. The idea is to encourage experimentation without
breaking the real services and causing outages.
## For Windows Users
**Update**: WSL2 doesn't seem to be working too well with LXC. I suggest
using VirtualBox or VMWare instead.
Set up WSL2 and open a terminal to it. See the [official setup instructions](https://docs.microsoft.com/en-ca/windows/wsl/install-win10#manual-installation-steps). Make sure you use Ubuntu/Ubuntu Latest from the Windows Store.
Once setup is complete, run the following command to update everything:
```
sudo apt update && sudo apt full-upgrade -y --auto-remove --fix-broken --fix-missing --fix-policy --show-progress && sudo apt autoclean
```
You can skip the Prerequisites section.
## Prerequisites
This repo consists of several Ansible playbooks which will automate tasks
@@ -22,6 +24,75 @@ theoretically also work. The VM should be running some reasonably
recent version of Debian or Ubuntu. 2 CPU cores and 2 GB of RAM
should be sufficient.
**Update**: I previously recommended using a shared bridge interface
in the VM. This appears to be causing issues for VMWare users,
so I am now recommending a standalone bridge instead with NAT masquerading.
The instructions for the shared bridge should still work, but if you are
creating the dev environment from scratch, I suggest using the
standalone bridge instead.
Note that if you do use the standalone bridge, the containers will not
be accessible from outside the VM, so if you need to access one of the
containers from your physical host, you will need to set up TCP forwarding
via `socat` or something similar.
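For example, a quick sketch of such a forward (the container IP 192.168.100.52 and the ports are illustrative; adjust them to your setup):

```shell
# Run on the VM: accept connections on port 8080 and relay each one
# to port 80 in the container behind the standalone bridge.
apt install socat
socat TCP-LISTEN:8080,fork,reuseaddr TCP:192.168.100.52:80
```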
No matter which network setup you decide to use, you will need to manually
create a `hosts` file before running any of the playbooks. Copy the
`hosts.sample` file as a starting point and edit it as needed:
```
cp hosts.sample hosts
```
Make sure you have the `bridge-utils` package installed in the VM.
This should be installed by default on Ubuntu, but you may have to manually
install it on Debian.
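On Debian you can pull it in manually (standard package name):

```shell
# bridge-utils provides brctl, used to manage bridge interfaces
apt install bridge-utils
```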
Also, make sure you disable the default LXC bridge, as it will interfere
with our own bridge:
```
systemctl stop lxc-net
systemctl mask lxc-net
```
### Standalone bridge
Your /etc/network/interfaces (in the VM) should look like the following:
```
auto enp1s0
iface enp1s0 inet dhcp
auto lxcbr1
iface lxcbr1 inet static
bridge_ports none
bridge_fd 0
bridge_maxwait 0
address 192.168.100.1/24
up iptables -t nat -C POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE
down iptables -t nat -D POSTROUTING -s 192.168.100.0/24 ! -o lxcbr1 -j MASQUERADE 2>/dev/null || true
```
Replace `enp1s0` with the name of the default interface on the VM. Replace `192.168.100.1/24`
and `192.168.100.0/24` with whatever IP address and subnet you want to use for the
bridge.
Now bring up the bridge:
```
ifup lxcbr1
```
Make sure you update the `hosts` file to match whichever IP address and subnet
you chose.
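If you picked a different subnet, a one-liner like the following rewrites all of the sample addresses at once (the target subnet 10.0.3.0/24 here is hypothetical):

```shell
# Replace every 192.168.100.x address in the hosts file with 10.0.3.x
sed -i 's/192\.168\.100\./10.0.3./g' hosts
```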
Now open `/etc/lxc/default.conf` and make sure it looks like the following:
```
lxc.net.0.type = veth
lxc.net.0.link = lxcbr1
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
```
(The hwaddr can be different.) This will ensure that new containers
have this configuration by default.
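New containers then pick these settings up automatically. For example (the download template arguments are illustrative; choose whatever distribution and release you need):

```shell
# Create a container named dns; it inherits the lxcbr1 network
# settings from /etc/lxc/default.conf.
lxc-create -n dns -t download -- -d debian -r bullseye -a amd64
```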
### Shared bridge
The VM should be attached to a bridge interface with NAT forwarding.
QEMU should create a default interface like this called 'virbr0'.
For this tutorial, I am assuming that the interface subnet is
@@ -32,13 +103,7 @@ do this via virsh or virt-manager; do not modify the subnet manually
using iproute2. This is because libvirt needs to know
what the subnet is to set up dnsmasq and iptables properly.
Once the VM is up and running, you will need to create a shared bridge
interface. Your /etc/network/interfaces should look like the following:
```
iface enp1s0 inet manual

hosts (20 changes)

@@ -1,20 +0,0 @@
[containers]
dns ansible_lxc_host=dns
mail ansible_lxc_host=mail
coffee ansible_lxc_host=coffee
auth1 ansible_lxc_host=auth1
outsider ansible_lxc_host=outsider
[containers:vars]
ansible_connection = lxc
ansible_python_interpreter = python3
base_domain = csclub.internal
ipv4_subnet = 192.168.122.0/24
ipv4_gateway = 192.168.122.1
upstream_dns = 192.168.122.1
host_ipv4_addr = 192.168.122.226
outsider_ipv4_addr = 192.168.125.2
dns_ipv4_addr = 192.168.122.4
mail_ipv4_addr = 192.168.122.52
coffee_ipv4_addr = 192.168.122.20
auth1_ipv4_addr = 192.168.122.117

hosts.sample (37 changes)

@@ -0,0 +1,37 @@
[containers]
dns ansible_lxc_host=dns
mail ansible_lxc_host=mail
coffee ansible_lxc_host=coffee
auth1 ansible_lxc_host=auth1
outsider ansible_lxc_host=outsider
[containers:vars]
ansible_connection = lxc
ansible_python_interpreter = python3
base_domain = csclub.internal
# the subnet for the containers
ipv4_subnet = 192.168.100.0/24
# the gateway for the containers - this should be the upstream
# gateway if you are using a shared bridge, or the VM's bridge
# IP address if you are using a standalone bridge.
ipv4_gateway = 192.168.100.1
# the upstream DNS IP address
upstream_dns = 192.168.122.1
# the IP address of the VM - this should be the VM's default outgoing
# IP address if you are using a shared bridge, or the VM's bridge
# address if you are using a standalone bridge.
host_ipv4_addr = 192.168.100.1
# The IP addresses for the containers. The outsider IP address does not
# really matter, just make sure it is in a different subnet from the others.
# Make sure the IP addresses of the other containers are in the
# ipv4_subnet which you specified above.
outsider_ipv4_addr = 192.168.101.2
dns_ipv4_addr = 192.168.100.4
mail_ipv4_addr = 192.168.100.52
coffee_ipv4_addr = 192.168.100.20
auth1_ipv4_addr = 192.168.100.117

mail/README.md (51 changes)

@@ -30,16 +30,55 @@ Attach to the mail container and create a new list, e.g. syscom:
cd /var/lib/mailman
bin/newlist -a syscom root@csclub.internal mailman
```
### Standalone bridge
If you are using a standalone bridge, unfortunately you will not be
able to access the container directly from your physical host because
it is behind a NAT.
I suggest running socat on the VM for TCP forwarding:
```
apt install socat
socat TCP-LISTEN:80,fork TCP:192.168.100.52:80
```
This will forward requests to port 80 on the VM to port 80 in the
mail container.
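With the forward in place, you can sanity-check it from the physical host. This assumes the VM's address is 192.168.122.225 as in the `/etc/hosts` example below; substitute your VM's actual IP:

```shell
# The Host header matters because the web server selects the
# Mailman site by virtual host name.
curl -H 'Host: mailman.csclub.internal' http://192.168.122.225/
```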
Alternatively, you can use iptables:
```
iptables -t nat -A PREROUTING -s 192.168.122.0/24 -p tcp --dport 80 -j DNAT --to-destination 192.168.100.52
```
Replace `192.168.122.0/24` with the subnet of your VM (your physical host
should also be on this subnet), and replace `192.168.100.52` with the IP
of the mail container.
To make sure this iptables rule is applied automatically at startup,
you can install the iptables-persistent package:
```
apt install iptables-persistent
```
You can use `dpkg-reconfigure iptables-persistent` if you ever need to
change the iptables rules which are applied at startup.
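If you add or change rules later, you can re-save them without going through `dpkg-reconfigure`, using the `netfilter-persistent` helper that ships with the package:

```shell
# Dump the current rules to /etc/iptables/rules.v4 and rules.v6
# so they are restored on the next boot.
netfilter-persistent save
```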
Now open `/etc/hosts` on your computer and add the following entry:
```
192.168.122.225 mailman.csclub.internal
```
Replace `192.168.122.225` with the default IP of the VM.
### Shared bridge
If you are using a shared bridge, you can access the container
directly from your physical host. Add the following entry to your
`/etc/hosts`:
```
192.168.100.52 mailman.csclub.internal
```
Replace `192.168.100.52` with the IP of the mail container.
## Mailman web interface
Now on your physical host, you are going to visit
the web interface for Mailman to adjust some settings and subscribe
some new users.
Visit http://mailman.csclub.internal/admin/syscom in your browser.
The admin password is 'mailman' (no quotes).
I suggest going over each setting in the Privacy section and reading it

mail/mailman3/mailman3.yml (19 changes)

@@ -31,7 +31,7 @@
import_role:
name: ../../roles/systemd_workarounds
vars:
services: [ "memcached" ]
services: [ "memcached", "logrotate" ]
- name: upgrade pip
pip:
executable: pip3
@@ -55,6 +55,7 @@
pip:
virtualenv: /opt/mailman3
virtualenv_python: python3
virtualenv_site_packages: yes
name: "{{ item }}"
loop:
- mysqlclient
@@ -63,6 +64,17 @@
- mailman
- mailman-web
- mailman-hyperkitty
- name: find the site packages directory in the virtualenv
find:
paths: /opt/mailman3/lib
patterns: "python3*"
file_type: directory
register: find_ret
- name: make sure that global site packages are inherited
file:
name: "{{ item.path }}/no-global-site-packages.txt"
state: absent
loop: "{{ find_ret.files }}"
- name: create mailman3 folder
file:
path: /etc/mailman3
@@ -107,6 +119,11 @@
- reload systemd
- restart service
- meta: flush_handlers
- name: stop Mailman 2
systemd:
name: mailman
state: stopped
masked: yes
- name: enable and start new services
systemd:
name: "{{ item }}"

outsider/README.md (46 changes)

@@ -3,32 +3,30 @@ So this container's a bit special - it represents a host which is **not**
on the UW network. The motivation is to test software which has different
privilege settings for people outside of the local network, e.g. Postfix.
The idea is to route packets from the 'outsider' container to the LXC host
(i.e. the VM), and the VM will then route them to the other containers.
We could've also created an extra container to act as the router, but
that seemed kind of wasteful.
The easiest way to do this, in my opinion, is to simply create a new bridge
with a different subnet. Add the following to your /etc/network/interfaces:
```
auto lxcbr2
iface lxcbr2 inet static
bridge_ports none
bridge_fd 0
bridge_maxwait 0
address 192.168.101.1/24
up iptables -t nat -C POSTROUTING -s 192.168.101.0/24 ! -o lxcbr2 -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s 192.168.101.0/24 ! -o lxcbr2 -j MASQUERADE
down iptables -t nat -D POSTROUTING -s 192.168.101.0/24 ! -o lxcbr2 -j MASQUERADE 2>/dev/null || true
```
Then:
```
ifup lxcbr2
```
## Installation
Once you have created the container, add the following iptables rules on
the VM:
```
iptables -t nat -A POSTROUTING -s 192.168.125.0/24 -d 192.168.122.1 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.125.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
```
I also strongly suggest installing iptables-persistent so that these rules
persist on the next reboot:
```
apt install iptables-persistent
```
The idea here is that packets from the 'outsider' container should only be
**forwarded**, not masqueraded, to the other containers (to preserve its IP
address), unless it needs to communicate with the outside world (e.g. to
download Debian packages), in which case we need to use NAT because the
iptables rules which libvirt created on your real computer don't take that
subnet into account (run `iptables -t nat -L -v` on your real computer
to see what I mean). 192.168.122.1, which is your real computer, is a special
case because your host does not have a routing table entry for that
subnet, so it wouldn't be able to reply.
Once you have created the container, edit the following line in
`/var/lib/lxc/outsider/config`:
```
lxc.net.0.link = lxcbr2
```
As usual, create the container, start it, and install python3.
Now detach and run the playbook:

roles/systemd_workarounds/tasks/main.yml (2 changes)

@@ -12,6 +12,8 @@
PrivateTmp=false
PrivateDevices=false
ProtectHome=false
ProtectControlGroups=false
ProtectKernelModules=false
dest: "/etc/systemd/system/{{ item }}.service.d/override.conf"
loop: "{{ services }}"
register: service_overrides
