add Kerberized NFS

This commit is contained in:
Max Erenberg 2021-06-18 01:09:45 -04:00
parent b3cc8f933f
commit 9d28bb06a2
18 changed files with 312 additions and 17 deletions


@ -67,9 +67,12 @@ user "ctdalek". You should also be able to run sudo, since sudo is configured
via LDAP. Try SSH'ing into some of the other containers using your
Kerberos ticket; you should not be prompted for your password.
### Side note
I've noticed that none of the containers can SSH into auth1 via GSSAPI.
I've also noticed that sudo doesn't work in auth1 via lxc-attach
(it does work with lxc-console, though). I'm not sure whether these issues
are related. Anyway, if you're having the same problem and you figure out
a solution, please document it here.
If you want to see the keytab entries on a particular host:
```
klist -e -k /etc/krb5.keytab
```
## DNS
It is important for each host to have a PTR record; otherwise, SSH GSSAPI
authentication will fail. The most recent version of the DNS playbook
should have PTR records for each host.
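To check that a host's PTR record actually resolves, you can do a reverse
lookup from any container (using auth1's address from the hosts file as an
example):
```
dig -x 192.168.100.117 +short
# should print auth1.csclub.internal.
```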


@ -6,6 +6,9 @@
name: ../roles/network_setup
vars:
ipv4_addr: "{{ auth1_ipv4_addr }}"
- name: mount NFS
import_role:
name: ../roles/nfs_setup
# LDAP
- name: install LDAP packages
apt:

cobalamin/README.md (new file, 11 lines)

@ -0,0 +1,11 @@
## cobalamin
This container's sole purpose is to demonstrate Kerberized NFS,
i.e. mount the /users directory with sec=krb5p.
If you have trouble mounting the NFS share the first time you run the
playbook, try restarting the container.
Observe how you must login as a user with a password to "unlock" that user's
home directory. Even if you are root and you switch to that user using `su`,
you will not be able to access their files since you do not have a Kerberos
ticket for that user.
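A quick way to see this in action (a sketch, using the `regular1` test user
defined in the fs00 playbook):
```
# as root on cobalamin:
su - regular1        # switches user, but grants no Kerberos ticket
ls ~                 # should fail with "Permission denied"
kinit                # enter regular1's password to get a ticket
ls ~                 # should now succeed
klist                # shows the ticket that made this possible
```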

cobalamin/main.yml (new file, 16 lines)

@ -0,0 +1,16 @@
---
- hosts: cobalamin
tasks:
- name: setup networking
import_role:
name: ../roles/network_setup
vars:
ipv4_addr: "{{ cobalamin_ipv4_addr }}"
- name: setup auth
import_role:
name: ../roles/auth_setup
- name: setup NFS
import_role:
name: ../roles/nfs_setup
vars:
sec: krb5p


@ -9,6 +9,9 @@
- name: setup auth
import_role:
name: ../roles/auth_setup
- name: setup NFS
import_role:
name: ../roles/nfs_setup
- name: install MariaDB
apt:
name: default-mysql-server


@ -2,13 +2,32 @@ no-hosts
no-resolv
server={{ upstream_dns }}
interface=eth0
# We need a hosts file to use CNAMEs
addn-hosts=/etc/dnsmasq_hosts
# dnsmasq creates PTR records automatically for entries in the hosts file,
# so we don't include them again (auth1 and mail)
address=/dns.{{ base_domain }}/{{ dns_ipv4_addr }}
ptr-record={{ dns_ipv4_addr.split('.') | reverse | join('.') }}.in-addr.arpa.,"dns.{{ base_domain }}"
address=/mail.{{ base_domain }}/{{ mail_ipv4_addr }}
cname=mailman.{{ base_domain }},mail.{{ base_domain }}
mx-host={{ base_domain }},mail.{{ base_domain }},50
address=/coffee.{{ base_domain }}/{{ coffee_ipv4_addr }}
ptr-record={{ coffee_ipv4_addr.split('.') | reverse | join('.') }}.in-addr.arpa.,"coffee.{{ base_domain }}"
address=/auth1.{{ base_domain }}/{{ auth1_ipv4_addr }}
cname=ldap1.{{ base_domain }},auth1.{{ base_domain }}
cname=kdc1.{{ base_domain }},auth1.{{ base_domain }}
cname=kadmin.{{ base_domain }},auth1.{{ base_domain }}
address=/fs00.{{ base_domain }}/{{ fs00_ipv4_addr }}
ptr-record={{ fs00_ipv4_addr.split('.') | reverse | join('.') }}.in-addr.arpa.,"fs00.{{ base_domain }}"
address=/phosphoric-acid.{{ base_domain }}/{{ phosphoric_acid_ipv4_addr }}
ptr-record={{ phosphoric_acid_ipv4_addr.split('.') | reverse | join('.') }}.in-addr.arpa.,"phosphoric-acid.{{ base_domain }}"
address=/cobalamin.{{ base_domain }}/{{ cobalamin_ipv4_addr }}
ptr-record={{ cobalamin_ipv4_addr.split('.') | reverse | join('.') }}.in-addr.arpa.,"cobalamin.{{ base_domain }}"

fs00/README.md (new file, 70 lines)

@ -0,0 +1,70 @@
## fs00
This container is meant to emulate the NetApp which exports the /users
directory to the other CSC servers via NFS. Unfortunately we can't
run the real NetApp software inside our container because NetApp uses
a proprietary operating system (ONTAP). So we're going to use the
nfs-kernel-server program instead.
Since NFS runs in the kernel, we're going to need some kernel
modules loaded on the LXC host (i.e. the VM). Run the following in
the VM:
```
modprobe nfs
modprobe nfsd
modprobe rpcsec_gss_krb5
```
To make sure these modules automatically get loaded at boot time, add the
following to /etc/modules:
```
nfs
nfsd
rpcsec_gss_krb5
```
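You can verify that the modules are actually loaded with:
```
lsmod | grep -E 'nfsd|rpcsec_gss_krb5'
```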
We're not ready to start the fs00 container yet; we still need to make some
tweaks to AppArmor.
## AppArmor
Unfortunately AppArmor does not allow containers to mount NFS shares by
default, so we will create a new AppArmor profile.
Create a new file /etc/apparmor.d/lxc/lxc-default-with-nfs and paste the
following into it:
```
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  deny mount fstype=devpts,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=cgroup2 -> /sys/fs/cgroup/**,
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}
```
Then run:
```
systemctl reload apparmor
```
(Source: [here](https://unix.stackexchange.com/questions/450308/how-to-allow-specific-proxmox-lxc-containers-to-mount-nfs-shares-on-the-network).)
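To confirm that the new profile is loaded, you can check AppArmor's status
(assuming the apparmor-utils package is installed for `aa-status`):
```
aa-status | grep lxc-container-default-with-nfs
```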
Next, for each container which you have already created (including fs00), add the
following line to its LXC config (should be /var/lib/lxc/container\_name/config):
```
lxc.apparmor.profile = lxc-container-default-with-nfs
```
You will need to restart each container for these changes to take effect.
Also, add this line to /etc/lxc/default.conf so that newly created containers
will have it by default.
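If you already have several containers, a small loop can add the line to all
of their configs at once (a sketch, assuming the default /var/lib/lxc layout):
```
for cfg in /var/lib/lxc/*/config; do
    # append the profile line only if one is not already present
    grep -q '^lxc.apparmor.profile' "$cfg" || \
        echo 'lxc.apparmor.profile = lxc-container-default-with-nfs' >> "$cfg"
done
```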
Now we are ready to start the fs00 container.
## NFS clients
Once you have re-run the playbooks for each of the other containers, run `mount`
to make sure that /users was mounted correctly. Only phosphoric-acid should
have root access in /users.
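On each client, the mounted share should show up in the output of `mount`
with the expected security flavour; for example:
```
mount | grep /users
# expect a line like:
# fs00.csclub.internal:/users on /users type nfs (rw,...,sec=sys,...)
```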
## NFS resources
* https://wiki.debian.org/NFSServerSetup
* https://linux.die.net/man/5/exports
* https://linux-nfs.org/wiki/index.php/Main\_Page
* http://nfs.sourceforge.net/nfs-howto/

fs00/main.yml (new file, 111 lines)

@ -0,0 +1,111 @@
---
- hosts: fs00
# TODO: add more users
vars:
users:
- ctdalek
- regular1
tasks:
- name: setup networking
import_role:
name: ../roles/network_setup
vars:
ipv4_addr: "{{ fs00_ipv4_addr }}"
- name: install NFS packages
apt:
name: "{{ item }}"
loop:
- nfs-kernel-server
- rpcbind
# TODO: put this in an Ansible role
- name: install LDAP packages
apt:
name: "{{ item }}"
loop:
- libnss-ldapd
- ldap-utils
- name: stop and disable nscd
systemd:
name: nscd
state: stopped
enabled: no
- name: copy ldap.conf
template:
src: ../auth1/ldap/ldap.conf.j2
dest: /etc/ldap/ldap.conf
notify:
- restart nslcd
- name: add member->uniqueMember map
lineinfile:
line: map group member uniqueMember
path: /etc/nslcd.conf
notify: restart nslcd
- name: copy nsswitch.conf
copy:
src: ../auth1/ldap/nsswitch.conf
dest: /etc/nsswitch.conf
notify: restart nslcd
- name: create /users directory
file:
path: /users
state: directory
mode: 0755
- name: create skel directory
file:
path: /users/skel
state: directory
mode: 0755
- name: add files to skel directory
copy:
src: "{{ item }}"
dest: /users/skel/
with_fileglob:
- "/etc/skel/.*"
- meta: flush_handlers
- name: create home directories for users
shell:
cmd: |
mkdir -p /users/{{ item }}
cp /users/skel/.* /users/{{ item }}/
chown -R {{ item }}:{{ item }} /users/{{ item }}
warn: false
loop: "{{ users }}"
- name: export /users directory
lineinfile:
path: /etc/exports
line: >-
/users {{ ipv4_subnet }}(sec=sys,rw) phosphoric-acid.{{ base_domain }}(sec=sys,rw,no_root_squash) cobalamin.{{ base_domain }}(sec=krb5p,rw)
notify:
- export all
- restart nfs-server
- name: disable NFSv4
# see https://unix.stackexchange.com/questions/205403/disable-nfsv4-server-on-debian-allow-nfsv3/289324
replace:
path: /etc/default/nfs-kernel-server
regexp: '^RPCNFSDCOUNT=.*$'
replace: 'RPCNFSDCOUNT="8 --no-nfs-version 4"'
notify:
- restart nfs-server
- name: install Kerberos packages
apt:
name: krb5-user
- name: add NFS server principal
command:
cmd: kadmin -p sysadmin/admin
stdin: |
krb5
addprinc -randkey nfs/{{ ansible_fqdn }}
ktadd nfs/{{ ansible_fqdn }}
creates: /etc/krb5.keytab
notify: restart nfs-server
handlers:
- name: export all
command: exportfs -ra
- name: restart nfs-server
systemd:
name: nfs-server
state: restarted
- name: restart nslcd
systemd:
name: nslcd
state: restarted


@ -1,9 +1,12 @@
[containers]
dns ansible_lxc_host=dns
mail ansible_lxc_host=mail
coffee ansible_lxc_host=coffee
auth1 ansible_lxc_host=auth1
outsider ansible_lxc_host=outsider
dns ansible_lxc_host=dns
mail ansible_lxc_host=mail
coffee ansible_lxc_host=coffee
auth1 ansible_lxc_host=auth1
fs00 ansible_lxc_host=fs00
phosphoric-acid ansible_lxc_host=phosphoric-acid
cobalamin ansible_lxc_host=cobalamin
outsider ansible_lxc_host=outsider
[containers:vars]
ansible_connection = lxc
@ -11,7 +14,6 @@ ansible_python_interpreter = python3
base_domain = csclub.internal
ldap_base = "{{ base_domain.split('.') | map('regex_replace', '^(.*)$', 'dc=\\1') | join(',') }}"
krb_realm = "{{ base_domain.upper() }}"
csc_hosts = ["dns", "mail", "coffee", "auth1"]
# the subnet for the containers
ipv4_subnet = 192.168.100.0/24
@ -33,8 +35,11 @@ host_ipv4_addr = 192.168.100.1
# matter, just make sure it is in a different subnet from the others.
# Make sure the IP addresses of the other containers are in the
# ipv4_subnet which you specified above.
outsider_ipv4_addr = 192.168.101.2
dns_ipv4_addr = 192.168.100.4
mail_ipv4_addr = 192.168.100.52
coffee_ipv4_addr = 192.168.100.20
auth1_ipv4_addr = 192.168.100.117
outsider_ipv4_addr = 192.168.101.2
dns_ipv4_addr = 192.168.100.4
mail_ipv4_addr = 192.168.100.52
coffee_ipv4_addr = 192.168.100.20
auth1_ipv4_addr = 192.168.100.117
fs00_ipv4_addr = 192.168.100.35
phosphoric_acid_ipv4_addr = 192.168.100.114
cobalamin_ipv4_addr = 192.168.100.18


@ -9,6 +9,9 @@
- name: setup auth
import_role:
name: ../roles/auth_setup
- name: setup NFS
import_role:
name: ../roles/nfs_setup
- name: install packages for email server
apt:
name: "{{ item }}"


@ -0,0 +1,2 @@
## phosphoric-acid
On phosphoric-acid, root is mapped to root on the NFS server (the "no_root_squash" export option). Therefore, you can create files owned by root, change file ownership, etc.
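A quick way to verify this from phosphoric-acid (the file name here is
arbitrary):
```
touch /users/root-test
ls -l /users/root-test   # should be owned by root:root, not nobody/nogroup
rm /users/root-test
```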

phosphoric-acid/main.yml (new file, 14 lines)

@ -0,0 +1,14 @@
---
- hosts: phosphoric-acid
tasks:
- name: setup networking
import_role:
name: ../roles/network_setup
vars:
ipv4_addr: "{{ phosphoric_acid_ipv4_addr }}"
- name: setup auth
import_role:
name: ../roles/auth_setup
- name: setup NFS
import_role:
name: ../roles/nfs_setup


@ -36,6 +36,7 @@
krb5
addprinc -randkey host/{{ ansible_fqdn }}
ktadd host/{{ ansible_fqdn }}
ktremove host/{{ ansible_fqdn }} old
when: ansible_host != 'auth1'
- name: add ssh config files
copy:


@ -16,3 +16,9 @@
dest: /etc/resolv.conf
when: ansible_host != 'dns'
- meta: flush_handlers
- name: re-run the setup module to gather facts
setup:
- name: assert FQDN is correct
assert:
that:
- ansible_fqdn == ansible_hostname + "." + base_domain


@ -0,0 +1,8 @@
- name: restart rpc-gssd
systemd:
name: rpc-gssd
state: restarted
- name: mount all
command:
cmd: mount -a
warn: false


@ -0,0 +1,16 @@
- name: install nfs-common
apt:
name: nfs-common
- name: create /users directory
file:
path: /users
state: directory
- name: add fstab entry
lineinfile:
path: /etc/fstab
line: >-
fs00.csclub.internal:/users /users nfs bg,vers=3,sec={{ sec }},nosuid,nodev 0 0
notify:
- restart rpc-gssd
- mount all
- meta: flush_handlers


@ -0,0 +1 @@
sec: sys


@ -17,6 +17,9 @@
InaccessibleDirectories=
ReadOnlyDirectories=
ReadWriteDirectories=
InaccessiblePaths=
ReadOnlyPaths=
ReadWritePaths=
dest: "/etc/systemd/system/{{ item }}.service.d/override.conf"
loop: "{{ services }}"
register: service_overrides