add localhost playbook

Max Erenberg 2021-07-10 23:16:47 -04:00
parent 394d7f535f
commit f82a70424c
32 changed files with 557 additions and 408 deletions

README.md

@@ -4,50 +4,52 @@ development environment which is reasonably close to the services which
run on the CSC servers. The idea is to encourage experimentation without
breaking the real services and causing outages.
## For Windows Users
**Update**: WSL2 doesn't seem to be working too well with LXC. I suggest
using VirtualBox or VMware instead.
Set up WSL2 and open a terminal in it. See the [official setup instructions](https://docs.microsoft.com/en-ca/windows/wsl/install-win10#manual-installation-steps). Make sure you use Ubuntu/Ubuntu Latest from the Windows Store.
Once setup is complete, run the following command to update everything:
```
sudo apt update && sudo apt full-upgrade -y --auto-remove --fix-broken --fix-missing --fix-policy --show-progress && sudo apt autoclean
```
## Prerequisites
This repo consists of several Ansible playbooks which will automate tasks
in LXC containers. I strongly recommend creating a VM and running the
containers inside the VM to avoid screwing up the network interfaces
on your real computer. I am using KVM + QEMU, but VirtualBox should
theoretically also work. The VM should be running some reasonably
recent version of Debian or Ubuntu. 2 CPU cores, 2 GB of RAM and
30 GB of disk space should be sufficient.
**Update**: I previously recommended using a shared bridge interface
in the VM. This appears to be causing issues for VMware users,
so I now recommend a standalone bridge with NAT masquerading instead.
The instructions for the shared bridge should still work, but if you are
creating the dev environment from scratch, I suggest using the
standalone bridge instead.
**Note**: The localhost playbook has only been tested on
[Debian 10.9.0](https://cdimage.debian.org/mirror/cdimage/archive/10.9.0-live/amd64/iso-hybrid/debian-live-10.9.0-amd64-standard.iso)
(standard edition, i.e. no desktop). Other Debian variants should
theoretically work, but have not been tested.
In particular, at the time of this writing, I suggest staying away
from Debian 10.10.0, as it has a [kernel bug which breaks LXC](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=990072).
Note that if you do use the standalone bridge, the containers will not
be accessible from outside the VM, so if you need to access one of the
containers from your physical host, you will need to set up TCP forwarding
via `socat` or something similar.
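For example, to reach the coffee container's web server from the physical
host, something along these lines should work from inside the VM (the
container IP is the sample value from this repo; the listen port is
arbitrary):
```
socat TCP-LISTEN:8080,fork,reuseaddr TCP:192.168.100.20:80
```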
The first thing you need to do is read the group_vars/all.yml file
and see if there are any parameters you would like to change. In
particular, you may wish to set upstream_dns to the DNS server used
by the VM (check /etc/resolv.conf).
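For example:
```
grep nameserver /etc/resolv.conf
# nameserver 192.168.122.1  <- use this value for upstream_dns
```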
No matter which network setup you decide to use, you will need to manually
create a `hosts` file before running any of the playbooks. Copy the
`hosts.sample` file as a starting point and edit it as needed:
```
cp hosts.sample hosts
```
## localhost playbook
The playbook localhost/main.yml should, theoretically, create all of the
necessary containers and run the playbooks individually. If this doesn't
work, please file a bug (make sure you test it on a "blank slate" first,
i.e. destroy all existing containers).
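If you need to wipe the existing containers to get back to a blank slate,
something like this should do it (assuming the standard LXC tools):
```
for c in $(lxc-ls -1); do lxc-stop -n "$c"; lxc-destroy -n "$c"; done
```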
First, install Ansible:
```
apt install ansible
```
Then run the playbook:
```
ansible-playbook localhost/main.yml
```
In each container folder, you will find instructions in the README for
setting up the container manually. Ignore those; they are from a time
before the localhost playbook existed. The other information in the
README will still be useful, though.
## Manual setup
You should only do this if the localhost playbook above is not working.
**Important**: You need to enable packet forwarding in the VM. Add
or uncomment the following line in `/etc/sysctl.conf`:
```
net.ipv4.ip_forward=1
@@ -57,11 +59,7 @@ Then run:
sysctl -p
```
Make sure you have the `bridge-utils` package installed in the VM.
This should be installed by default on Ubuntu, but you may have to manually
install it on Debian.
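On Debian, that would be:
```
apt install bridge-utils
```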
On Ubuntu, make sure you disable the default LXC bridge, as it will interfere
with our own bridge:
```
systemctl stop lxc-net
@@ -106,41 +104,7 @@ lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
(The hwaddr can be different.) This will ensure that new containers
have this configuration by default.
### Shared bridge
The VM should be attached to a bridge interface with NAT forwarding.
QEMU should create a default interface like this called 'virbr0'.
For this tutorial, I am assuming that the interface subnet is
192.168.122.0/24, and the bridge IP address on the host is 192.168.122.1.
If you decide to use a different subnet, make sure to update the `hosts`
file accordingly. If you need to edit the subnet which QEMU uses,
do this via virsh or virt-manager; do not modify the subnet manually
using iproute2, because libvirt needs to know
what the subnet is to set up dnsmasq and iptables properly.
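For example, with virsh, editing the default network and restarting it
might look like this:
```
virsh net-edit default      # edit the subnet in the network XML
virsh net-destroy default   # stop the network
virsh net-start default     # start it again with the new settings
```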
Your /etc/network/interfaces should look like the following:
```
iface enp1s0 inet manual
auto lxcbr0
iface lxcbr0 inet dhcp
bridge_ports enp1s0
bridge_fd 0
bridge_maxwait 0
```
Replace enp1s0 with the name of the default interface in the VM.
Then, restart the VM.
Now open `/etc/lxc/default.conf` and make sure it looks like the following:
```
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
```
(The hwaddr can be different.) This will ensure that new containers
have this configuration by default.
### Creating the LXC containers
Start a root shell with `sudo -s`.
Install the lxc-utils package if you have not done so already:


@@ -1,2 +1,3 @@
[defaults]
inventory = hosts
retry_files_enabled = False


@@ -124,6 +124,14 @@
- libpam-krb5
- libsasl2-modules-gssapi-mit
- sasl2-bin
- name: create new realm
command:
cmd: krb5_newrealm
# This is the KDC database master key
stdin: |
krb5
krb5
creates: /var/lib/krb5kdc/principal
- name: override systemd services for Kerberos
import_role:
name: ../roles/systemd_workarounds
@@ -147,14 +155,6 @@
dest: /etc/krb5kdc/kadm5.acl
notify:
- restart kdc
- meta: flush_handlers
- name: add default policy
command:

group_vars/all.yml

@@ -0,0 +1,36 @@
---
ansible_python_interpreter: python3
base_domain: csclub.internal
ldap_base: "{{ base_domain.split('.') | map('regex_replace', '^(.*)$', 'dc=\\1') | join(',') }}"
krb_realm: "{{ base_domain.upper() }}"
# the subnet for the containers
ipv4_subnet: 192.168.100.0/24
# the gateway for the containers - this should be the IP
# address of lxcbr1
ipv4_gateway: 192.168.100.1
# The upstream DNS IP address (check your resolv.conf).
# The address below is one of Cloudflare's public DNS resolvers.
# Replace this with your local DNS resolver if desired.
upstream_dns: 1.0.0.1
# The IP addresses for the containers.
# Make sure the IP addresses of the other containers are in the
# ipv4_subnet which you specified above.
dns_ipv4_addr: 192.168.100.4
mail_ipv4_addr: 192.168.100.52
coffee_ipv4_addr: 192.168.100.20
auth1_ipv4_addr: 192.168.100.117
fs00_ipv4_addr: 192.168.100.35
phosphoric_acid_ipv4_addr: 192.168.100.114
cobalamin_ipv4_addr: 192.168.100.18
uw00_ipv4_addr: 192.168.100.234
# The outsider is on a different subnet than the other containers
outsider_ipv4_subnet: 192.168.101.0/24
# This should be the IP address of lxcbr2
outsider_ipv4_gateway: 192.168.101.1
# The IP address of the outsider container
outsider_ipv4_addr: 192.168.101.2


@@ -1,47 +0,0 @@
[containers]
dns ansible_lxc_host=dns
mail ansible_lxc_host=mail
coffee ansible_lxc_host=coffee
auth1 ansible_lxc_host=auth1
fs00 ansible_lxc_host=fs00
phosphoric-acid ansible_lxc_host=phosphoric-acid
cobalamin ansible_lxc_host=cobalamin
outsider ansible_lxc_host=outsider
uw00 ansible_lxc_host=uw00
[containers:vars]
ansible_connection = lxc
ansible_python_interpreter = python3
base_domain = csclub.internal
ldap_base = "{{ base_domain.split('.') | map('regex_replace', '^(.*)$', 'dc=\\1') | join(',') }}"
krb_realm = "{{ base_domain.upper() }}"
# the subnet for the containers
ipv4_subnet = 192.168.100.0/24
# the gateway for the containers - this should be the upstream
# gateway if you are using a shared bridge, or the VM's bridge
# IP address if you are using a standalone bridge.
ipv4_gateway = 192.168.100.1
# the upstream DNS IP address
upstream_dns = 192.168.122.1
# the IP address of the VM - this should be the VM's default outgoing
# IP address if you are using a shared bridge, or the VM's bridge
# address if you are using a standalone bridge.
host_ipv4_addr = 192.168.100.1
# The IP addresses for the VMs. The outsider IP address does not really
# matter, just make sure it is in a different subnet from the others.
# Make sure the IP addresses of the other containers are in the
# ipv4_subnet which you specified above.
outsider_ipv4_addr = 192.168.101.2
dns_ipv4_addr = 192.168.100.4
mail_ipv4_addr = 192.168.100.52
coffee_ipv4_addr = 192.168.100.20
auth1_ipv4_addr = 192.168.100.117
fs00_ipv4_addr = 192.168.100.35
phosphoric_acid_ipv4_addr = 192.168.100.114
cobalamin_ipv4_addr = 192.168.100.18
uw00_ipv4_addr = 192.168.100.234

localhost/main.yml

@@ -0,0 +1,100 @@
---
- hosts: 127.0.0.1
vars:
nfs_modules:
- nfs
- nfsd
- rpcsec_gss_krb5
tasks:
- name: install dependencies
apt:
name: "{{ item }}"
loop:
- lxc
- python3-lxc
- name: enable IPv4 forwarding
replace:
path: /etc/sysctl.conf
regexp: "^#net.ipv4.ip_forward=1$"
replace: "net.ipv4.ip_forward=1"
notify: load sysctl settings
- name: load NFS modules
command: modprobe {{ item }}
loop: "{{ nfs_modules }}"
- name: add NFS modules to /etc/modules
lineinfile:
path: /etc/modules
line: "{{ item }}"
loop: "{{ nfs_modules }}"
- name: create AppArmor profile
copy:
src: templates/lxc-default-with-nfs
dest: /etc/apparmor.d/lxc/lxc-default-with-nfs
notify: restart apparmor
- name: copy /etc/network/interfaces
template:
src: templates/interfaces.j2
dest: /etc/network/interfaces
- name: bring up bridge interfaces
shell: ip link show dev {{ item }} | grep UP || ifup {{ item }}
loop:
- lxcbr1
- lxcbr2
- meta: flush_handlers
- name: create containers
# The --no-validate flag is necessary to work around an attack
# on the SKS keyserver network. See
# https://discuss.linuxcontainers.org/t/3-0-unable-to-fetch-gpg-key-from-keyserver/2015/15
# https://github.com/lxc/lxc/issues/3068
shell: lxc-create -t download -n {{ item }} -- --no-validate -d debian -r buster -a amd64 || true
loop: "{{ groups.containers }}"
- name: install python3 in containers
command:
cmd: chroot /var/lib/lxc/{{ item }}/rootfs sh -c "dpkg -s python3 || { echo nameserver 1.1.1.1 > /etc/resolv.conf && apt update && apt install -y python3; }"
loop: "{{ groups.containers }}"
- name: copy LXC configs
template:
src: templates/lxc_config.j2
dest: /var/lib/lxc/{{ name }}/config
vars:
name: "{{ item }}"
link: "{{ 'lxcbr2' if item == 'outsider' else 'lxcbr1' }}"
loop: "{{ groups.containers }}"
register: results
notify: restart container
- meta: flush_handlers
- name: start each container
shell: lxc-start -n {{ item }} || true
loop: "{{ groups.containers }}"
handlers:
- name: restart apparmor
systemd:
name: apparmor
state: restarted
- name: restart container
shell: lxc-stop -n {{ item.item }}; lxc-start -n {{ item.item }}
loop: "{{ results.results }}"
- name: load sysctl settings
command: sysctl -p
# Everything depends on DNS, so set it up first
- name: set up DNS
import_playbook: ../dns/main.yml
# Next is auth
- name: set up auth1
import_playbook: ../auth1/main.yml
# Next is NFS
- name: set up fs00
import_playbook: ../fs00/main.yml
- name: remount NFS in auth1
hosts: auth1
tasks:
- name: remount /users
shell: umount /users; mount /users
# Run rest of playbooks
- import_playbook: ../coffee/main.yml
- import_playbook: ../mail/main.yml
- import_playbook: ../phosphoric-acid/main.yml
- import_playbook: ../cobalamin/main.yml
- import_playbook: ../outsider/main.yml
- import_playbook: ../uw00/main.yml


@@ -0,0 +1,48 @@
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug {{ ansible_default_ipv4.interface }}
iface {{ ansible_default_ipv4.interface }} inet dhcp
auto lxcbr1
iface lxcbr1 inet static
bridge_ports none
bridge_fd 0
bridge_maxwait 0
address {{ ipv4_gateway }}/24
# Forward connections to lxcbr1 and lxcbr2
up iptables -t nat -C POSTROUTING -s {{ ipv4_subnet }} -d {{ ipv4_subnet }} -j ACCEPT 2>/dev/null || \
iptables -t nat -A POSTROUTING -s {{ ipv4_subnet }} -d {{ ipv4_subnet }} -j ACCEPT
up iptables -t nat -C POSTROUTING -s {{ ipv4_subnet }} -d {{ outsider_ipv4_subnet }} -j ACCEPT 2>/dev/null || \
iptables -t nat -A POSTROUTING -s {{ ipv4_subnet }} -d {{ outsider_ipv4_subnet }} -j ACCEPT
# Masquerade all other connections
up iptables -t nat -C POSTROUTING -s {{ ipv4_subnet }} -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s {{ ipv4_subnet }} -j MASQUERADE
down iptables -t nat -D POSTROUTING -s {{ ipv4_subnet }} -d {{ ipv4_subnet }} -j ACCEPT 2>/dev/null || true
down iptables -t nat -D POSTROUTING -s {{ ipv4_subnet }} -d {{ outsider_ipv4_subnet }} -j ACCEPT 2>/dev/null || true
down iptables -t nat -D POSTROUTING -s {{ ipv4_subnet }} -j MASQUERADE 2>/dev/null || true
auto lxcbr2
iface lxcbr2 inet static
bridge_ports none
bridge_fd 0
bridge_maxwait 0
address {{ outsider_ipv4_gateway }}/24
# Forward connections to lxcbr1 and lxcbr2
up iptables -t nat -C POSTROUTING -s {{ outsider_ipv4_subnet }} -d {{ ipv4_subnet }} -j ACCEPT 2>/dev/null || \
iptables -t nat -A POSTROUTING -s {{ outsider_ipv4_subnet }} -d {{ ipv4_subnet }} -j ACCEPT
up iptables -t nat -C POSTROUTING -s {{ outsider_ipv4_subnet }} -d {{ outsider_ipv4_subnet }} -j ACCEPT 2>/dev/null || \
iptables -t nat -A POSTROUTING -s {{ outsider_ipv4_subnet }} -d {{ outsider_ipv4_subnet }} -j ACCEPT
# Masquerade all other connections
up iptables -t nat -C POSTROUTING -s {{ outsider_ipv4_subnet }} -j MASQUERADE 2>/dev/null || \
iptables -t nat -A POSTROUTING -s {{ outsider_ipv4_subnet }} -j MASQUERADE
down iptables -t nat -D POSTROUTING -s {{ outsider_ipv4_subnet }} -d {{ ipv4_subnet }} -j ACCEPT 2>/dev/null || true
down iptables -t nat -D POSTROUTING -s {{ outsider_ipv4_subnet }} -d {{ outsider_ipv4_subnet }} -j ACCEPT 2>/dev/null || true
down iptables -t nat -D POSTROUTING -s {{ outsider_ipv4_subnet }} -j MASQUERADE 2>/dev/null || true


@@ -0,0 +1,9 @@
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/lxc/container-base>
deny mount fstype=devpts,
mount fstype=cgroup -> /sys/fs/cgroup/**,
mount fstype=cgroup2 -> /sys/fs/cgroup/**,
mount fstype=nfs*,
mount fstype=rpc_pipefs,
}


@@ -0,0 +1,24 @@
# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: -d debian -r buster -a amd64
# For additional config options, please look at lxc.container.conf(5)
# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64
# Container specific configuration
lxc.rootfs.path = dir:/var/lib/lxc/{{ name }}/rootfs
lxc.uts.name = {{ name }}
lxc.apparmor.profile = lxc-container-default-with-nfs
lxc.start.auto = 1
# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = {{ link }}
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx


@@ -93,13 +93,7 @@ The playbook will install Mailman 2 but it will be disabled, since the CSC
has been using Mailman 3 since April 2021.
If you need to use Mailman 2 for whatever reason (e.g. to test a migration),
you will need to edit the Postfix config (specifically, the `alias_maps`
setting in /etc/postfix/main.cf). You will also need to unmask and re-enable
mailman.service. See the Mailman 2 playbook, as well
as the official documentation for Mailman 2 and Postfix, for more details.
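A sketch of those steps (the aliases path comes from the Mailman 2 role
below; `postconf -e` rewrites the setting in /etc/postfix/main.cf):
```
postconf -e 'alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases'
systemctl unmask mailman
systemctl enable --now mailman
systemctl reload postfix
```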


@@ -19,3 +19,11 @@
systemd:
name: spamassassin
state: restarted
- name: restart cron
systemd:
name: cron
state: restarted
- name: restart mailman3-web
systemd:
name: mailman3-web
state: restarted


@@ -1,54 +0,0 @@
---
- hosts: mail
vars:
# use this password for all mailing lists
list_password: mailman
tasks:
- name: install packages for Mailman 2
apt:
name: "{{ item }}"
state: present
loop:
- mailman
- name: add Mailman config
template:
src: mm_cfg.py.j2
dest: /etc/mailman/mm_cfg.py
- name: create Mailman aliases file
command:
chdir: /var/lib/mailman
cmd: bin/genaliases
creates: /var/lib/mailman/data/aliases
- name: create initial list
shell:
chdir: /var/lib/mailman
cmd: "bin/newlist -a mailman root@{{ base_domain }} {{ list_password }} || true"
- name: add Mailman aliases to Postfix config
lineinfile:
path: /etc/postfix/main.cf
regexp: "^alias_maps = .*$"
line: "alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases"
notify: reload Postfix
- name: add Apache config
template:
src: mailman.conf.j2
dest: /etc/apache2/sites-available/mailman.conf
notify: reload Apache
- name: enable Mailman site
command:
cmd: a2ensite mailman.conf
creates: /etc/apache2/sites-enabled/mailman.conf
notify: reload Apache
- name: enable CGI on Apache
command:
cmd: a2enmod cgid
creates: /etc/apache2/mods-enabled/cgid.load
notify: restart Apache
- name: restart Mailman 2
systemd:
name: mailman
state: restarted
ignore_errors: yes
handlers:
- name: _imports
import_tasks: ../common.yml


@@ -0,0 +1,45 @@
- name: install packages for Mailman 2
apt:
name: "{{ item }}"
state: present
loop:
- mailman
- name: add Mailman config
template:
src: mailman2/templates/mm_cfg.py.j2
dest: /etc/mailman/mm_cfg.py
- name: create Mailman aliases file
command:
chdir: /var/lib/mailman
cmd: bin/genaliases
creates: /var/lib/mailman/data/aliases
- name: create initial list
shell:
chdir: /var/lib/mailman
cmd: "bin/newlist -a mailman root@{{ base_domain }} {{ list_password }} || true"
- name: add Mailman aliases to Postfix config
lineinfile:
path: /etc/postfix/main.cf
regexp: "^alias_maps = .*$"
line: "alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases"
notify: reload Postfix
- name: add Apache config
template:
src: mailman2/templates/mailman.conf.j2
dest: /etc/apache2/sites-available/mailman.conf
notify: reload Apache
- name: enable Mailman site
command:
cmd: a2ensite mailman.conf
creates: /etc/apache2/sites-enabled/mailman.conf
notify: reload Apache
- name: enable CGI on Apache
command:
cmd: a2enmod cgid
creates: /etc/apache2/mods-enabled/cgid.load
notify: restart Apache
- name: restart Mailman 2
systemd:
name: mailman
state: restarted
ignore_errors: yes


@@ -0,0 +1,2 @@
# use this password for all mailing lists
list_password: mailman


@@ -1,209 +0,0 @@
- hosts: coffee
tasks:
- name: setup the database on coffee
command:
cmd: mysql
stdin: |
CREATE DATABASE IF NOT EXISTS {{ item }};
CREATE USER IF NOT EXISTS {{ item }} IDENTIFIED BY '{{ item }}';
GRANT ALL PRIVILEGES ON {{ item }}.* TO {{ item }};
loop:
- mailman3
- mailman3web
- hosts: mail
tasks:
- name: install Mailman 3 prerequisites
apt:
name: "{{ item }}"
loop:
- python3-pip
- python3-dev
- python3-xapian
- virtualenv
- uwsgi
- uwsgi-plugin-python3
- default-libmysqlclient-dev
- sassc
- lynx
- git
- memcached
- name: override systemd services
import_role:
name: ../../roles/systemd_workarounds
vars:
services: [ "memcached", "logrotate" ]
- name: upgrade pip
pip:
executable: pip3
name: pip
extra_args: --upgrade
- name: create mailman3 directory
file:
path: /opt/mailman3
state: directory
owner: list
group: list
mode: '2755'
- name: create mailman3-web directory
file:
path: /opt/mailman3/web
state: directory
owner: www-data
group: www-data
- name: install pip packages
become_user: list
pip:
virtualenv: /opt/mailman3
virtualenv_python: python3
virtualenv_site_packages: yes
name: "{{ item }}"
loop:
- mysqlclient
- pylibmc
- git+https://github.com/notanumber/xapian-haystack.git
- mailman
- mailman-web
- mailman-hyperkitty
- name: find the site packages directory in the virtualenv
find:
paths: /opt/mailman3/lib
patterns: "python3*"
file_type: directory
register: find_ret
# This is necessary because python3-xapian was installed globally
- name: make sure that global site packages are inherited
file:
name: "{{ item.path }}/no-global-site-packages.txt"
state: absent
loop: "{{ find_ret.files }}"
- name: create mailman3 folder
file:
path: /etc/mailman3
state: directory
mode: 0755
- name: add Mailman 3 configs
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
group: "{{ item.group }}"
mode: 0640
loop:
- src: mailman.cfg.j2
dest: /etc/mailman3/mailman.cfg
group: list
- src: mailman-hyperkitty.cfg.j2
dest: /etc/mailman3/mailman-hyperkitty.cfg
group: list
- src: settings.py.j2
dest: /etc/mailman3/settings.py
group: www-data
- src: urls.py
dest: /etc/mailman3/urls.py
group: www-data
- src: uwsgi.ini
dest: /etc/mailman3/uwsgi.ini
group: www-data
- name: update cron log level
lineinfile:
path: /etc/default/cron
line: 'EXTRA_OPTS="-L 4"'
notify: restart cron
- name: add new services
copy:
src: "{{ item }}.service"
dest: "/etc/systemd/system/{{ item }}.service"
loop:
- mailman3
- mailman3-web
register: service_files
notify:
- reload systemd
- meta: flush_handlers
- name: stop and mask Mailman 2
systemd:
name: mailman
state: stopped
masked: yes
- name: enable mod_proxy_uwsgi
command:
cmd: a2enmod proxy_uwsgi
creates: /etc/apache2/mods-enabled/proxy_uwsgi.load
notify: restart Apache
- name: update Apache config
template:
src: mailman.conf.j2
dest: /etc/apache2/sites-available/mailman.conf
notify: reload Apache
- name: disable Mailman 2 in Postfix main.cf
lineinfile:
path: /etc/postfix/main.cf
regexp: "^alias_maps = .*$"
line: "alias_maps = hash:/etc/aliases"
notify: reload Postfix
- name: update Postfix config
blockinfile:
path: /etc/postfix/main.cf
block: |
owner_request_special = no
transport_maps = hash:/opt/mailman3/data/postfix_lmtp
local_recipient_maps =
proxy:unix:passwd.byname,
$alias_maps,
hash:/opt/mailman3/data/postfix_lmtp
notify: reload Postfix
- name: disable Mailman 2 cron jobs
replace:
path: /etc/cron.d/mailman
regexp: "^([*\\d@].*)$"
replace: "### \\1"
- name: check if mailman3-web setup was already done
slurp:
src: /opt/mailman3/web/setup-done
register: setup_done
ignore_errors: yes
- name: run one-time mailman3-web setup
become_user: www-data
shell:
executable: /bin/bash
chdir: /opt/mailman3
cmd: |
set -e
source bin/activate
mailman-web migrate
mailman-web collectstatic --no-input
mailman-web compress
echo -n 1 > web/setup-done
when: "'content' not in setup_done or (setup_done.content | b64decode) != '1'"
notify:
- restart mailman3-web
- name: enable and start new services
systemd:
name: "{{ item }}"
enabled: true
state: started
loop:
- mailman3
- mailman3-web
- name: add cron jobs
copy:
src: "{{ item }}.cron"
dest: "/etc/cron.d/{{ item }}"
loop:
- mailman3
- mailman3-web
- meta: flush_handlers
- name: create csc-general list
become_user: list
shell:
cmd: /opt/mailman3/bin/mailman create csc-general@{{ base_domain }} || true
handlers:
- name: _imports
import_tasks: ../common.yml
- name: restart cron
systemd:
name: cron
state: restarted
- name: restart mailman3-web
systemd:
name: mailman3-web
state: restarted


@@ -0,0 +1,10 @@
- name: setup the database on coffee
command:
cmd: mysql
stdin: |
CREATE DATABASE IF NOT EXISTS {{ item }};
CREATE USER IF NOT EXISTS {{ item }} IDENTIFIED BY '{{ item }}';
GRANT ALL PRIVILEGES ON {{ item }}.* TO {{ item }};
loop:
- mailman3
- mailman3web


@@ -0,0 +1,201 @@
- name: install Mailman 3 prerequisites
apt:
name:
- python3-pip
- python3-dev
- python3-xapian
- virtualenv
- uwsgi
- uwsgi-plugin-python3
- default-libmysqlclient-dev
- sassc
- lynx
- git
- memcached
- name: override systemd services
import_role:
name: ../roles/systemd_workarounds
vars:
services: [ "memcached", "logrotate" ]
- name: upgrade pip
pip:
executable: pip3
name: pip
extra_args: --upgrade
- name: create mailman3 directory
file:
path: /opt/mailman3
state: directory
owner: list
group: list
mode: '2755'
- name: create mailman3-web directory
file:
path: /opt/mailman3/web
state: directory
owner: www-data
group: www-data
- name: install pip packages
become_user: list
pip:
virtualenv: /opt/mailman3
virtualenv_python: python3
virtualenv_site_packages: yes
name: "{{ item }}"
loop:
- mysqlclient
- pylibmc
- git+https://github.com/notanumber/xapian-haystack.git
- mailman
- mailman-web
- mailman-hyperkitty
- name: find the site packages directory in the virtualenv
find:
paths: /opt/mailman3/lib
patterns: "python3*"
file_type: directory
register: find_ret
# This is necessary because python3-xapian was installed globally
- name: make sure that global site packages are inherited
file:
name: "{{ item.path }}/no-global-site-packages.txt"
state: absent
loop: "{{ find_ret.files }}"
- name: create mailman3 folder
file:
path: /etc/mailman3
state: directory
mode: 0755
- name: create mailman3 log folder
file:
path: /var/log/mailman3
state: directory
owner: list
group: list
- name: create mailman3-web log folder
file:
path: /var/log/mailman3/web
state: directory
owner: www-data
group: www-data
- name: create mailman3-web.log
file:
path: /var/log/mailman3/web/mailman3-web.log
state: touch
owner: www-data
group: www-data
- name: add Mailman 3 configs
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
group: "{{ item.group }}"
mode: 0640
loop:
- src: mailman3/templates/mailman.cfg.j2
dest: /etc/mailman3/mailman.cfg
group: list
- src: mailman3/templates/mailman-hyperkitty.cfg.j2
dest: /etc/mailman3/mailman-hyperkitty.cfg
group: list
- src: mailman3/templates/settings.py.j2
dest: /etc/mailman3/settings.py
group: www-data
- src: mailman3/templates/urls.py
dest: /etc/mailman3/urls.py
group: www-data
- src: mailman3/templates/uwsgi.ini
dest: /etc/mailman3/uwsgi.ini
group: www-data
- name: update cron log level
lineinfile:
path: /etc/default/cron
line: 'EXTRA_OPTS="-L 4"'
notify: restart cron
- name: add new services
copy:
src: mailman3/templates/{{ item }}.service
dest: "/etc/systemd/system/{{ item }}.service"
loop:
- mailman3
- mailman3-web
register: service_files
notify:
- reload systemd
- meta: flush_handlers
- name: stop and mask Mailman 2
systemd:
name: mailman
state: stopped
masked: yes
- name: enable mod_proxy_uwsgi
command:
cmd: a2enmod proxy_uwsgi
creates: /etc/apache2/mods-enabled/proxy_uwsgi.load
notify: restart Apache
- name: update Apache config
template:
src: mailman3/templates/mailman.conf.j2
dest: /etc/apache2/sites-available/mailman.conf
notify: reload Apache
- name: disable Mailman 2 in Postfix main.cf
lineinfile:
path: /etc/postfix/main.cf
regexp: "^alias_maps = .*$"
line: "alias_maps = hash:/etc/aliases"
notify: reload Postfix
- name: update Postfix config
blockinfile:
path: /etc/postfix/main.cf
block: |
owner_request_special = no
transport_maps = hash:/opt/mailman3/data/postfix_lmtp
local_recipient_maps =
proxy:unix:passwd.byname,
$alias_maps,
hash:/opt/mailman3/data/postfix_lmtp
notify: reload Postfix
- name: disable Mailman 2 cron jobs
replace:
path: /etc/cron.d/mailman
regexp: "^([*\\d@].*)$"
replace: "### \\1"
- name: check if mailman3-web setup was already done
slurp:
src: /opt/mailman3/web/setup-done
register: setup_done
ignore_errors: yes
- name: run one-time mailman3-web setup
become_user: www-data
shell:
executable: /bin/bash
chdir: /opt/mailman3
cmd: |
set -e
source bin/activate
mailman-web migrate
mailman-web collectstatic --no-input
mailman-web compress
echo -n 1 > web/setup-done
when: "'content' not in setup_done or (setup_done.content | b64decode) != '1'"
notify:
- restart mailman3-web
- name: enable and start new services
systemd:
name: "{{ item }}"
enabled: true
state: started
loop:
- mailman3
- mailman3-web
- name: add cron jobs
copy:
src: mailman3/templates/{{ item }}.cron
dest: "/etc/cron.d/{{ item }}"
loop:
- mailman3
- mailman3-web
- meta: flush_handlers
- name: create csc-general list
become_user: list
shell:
cmd: /opt/mailman3/bin/mailman create csc-general@{{ base_domain }} || true


@@ -98,12 +98,28 @@
path: /etc/systemd/system/mailman3.service
register: mailman3_unit
handlers:
- import_tasks: common.yml
- hosts: mail
tasks:
- name: import Mailman 2 role
import_role:
name: mailman2/
when: not mailman3_unit.stat.exists
handlers:
- import_tasks: common.yml
- hosts: coffee
tasks:
- name: create databases on coffee for Mailman 3
import_tasks: mailman3/tasks/database.yml
handlers:
- import_tasks: common.yml
- hosts: mail
tasks:
- name: import Mailman 3 role
import_role:
name: mailman3/
handlers:
- import_tasks: common.yml


@@ -6,7 +6,7 @@
name: ../roles/network_setup
vars:
ipv4_addr: "{{ outsider_ipv4_addr }}"
ipv4_gateway: "{{ host_ipv4_addr }}"
ipv4_gateway: "{{ outsider_ipv4_gateway }}"
- name: add local users
import_role:
name: ../roles/local_users


@@ -3,6 +3,7 @@
name: rpc-gssd
state: restarted
- name: mount all
shell:
# sometimes you gotta do it twice
cmd: mount -a; mount -a
warn: false