# Mirror Env
This repo helps you quickly set up a VM development environment for the CS
Club's mirrors using Ansible.
There are three options for getting the mirror dev VM running:
- qemu
- libvirt
- manual
Follow the relevant instructions below.
## VM Install Option 1 (qemu)
Install the following:
- ansible
- qemu
- genisoimage
- ovmf (find the location of `OVMF_CODE.fd`; it is system dependent)
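As a rough sketch, on a Debian-based host the packages might be installed like this (package names are assumptions and vary by distribution):
```
$ sudo apt install ansible qemu-system-x86 genisoimage ovmf
$ dpkg -L ovmf | grep OVMF_CODE   # locate OVMF_CODE.fd for the config below
```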
Before doing anything else, edit the config files in `group_vars/` to match your
system. For the qemu option specifically, you need to provide the location of
your `OVMF_CODE.fd` file.
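The exact variable name for this path is defined by the files in `group_vars/`; a quick way to find where to set it (the `ovmf` search pattern is only an assumption about how the variable is named):
```
$ grep -rni ovmf group_vars/
```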
To begin the setup process, in this repo's root, run:
```
$ ansible-playbook -K qemu/main.yml
```
Due to the way the Ubuntu autoinstaller is designed, user confirmation is
required to start the autoinstallation process. To view the VM, you also need a
VNC viewer; [TigerVNC](https://github.com/TigerVNC/tigervnc) is a good choice.
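For example, on a Debian-based system it can be installed with (the package name is an assumption; check your distribution's repositories):
```
$ sudo apt install tigervnc-viewer
```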
Simply run
```
$ vncviewer :5900
```
Enter `yes` when prompted with
this line:
```
Continue with autoinstall (yes|no)
```
Once the installation is complete, you can run the VM using:
```
$ ansible-playbook qemu/run.yml
```
The default login credentials are:
```
username: ubuntu
password: ubuntu
```
## VM Install Option 2 (libvirt)
### Install Packages (debian)
**needs update**
```
$ apt install qemu-kvm libvirt-daemon libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager virt-viewer ansible cloud-image-utils
```
### Install Packages (archlinux)
**needs update**
```
$ pacman -S qemu libvirt virt-install virt-viewer ansible
```
### Run Playbook
```
$ ansible-playbook libvirt/main.yml
```
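To confirm the domain was created, you can list it with virsh and open a console with virt-viewer (the domain name is whatever the playbook defines, and you may need the system connection or sudo depending on how libvirt is set up):
```
$ virsh list --all
$ virt-viewer <domain-name>
```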
### Post Install
First, install ansible and sshpass. Run all of the following commands from the
`post-install/` directory.
Also install the extra roles:
```
$ ansible-galaxy install -r requirements.yml
```
Check that Ansible can talk to the VM:
```
$ ansible -m ping all
```
We can now complete the rest of the post-install with
```
$ ansible-playbook -K playbook.yml
```
## VM Install Option 3 (manual)
## System Details
Further system information, for those who are interested.
```
$ lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda              252:0    0   10G  0 disk
├─vda1           252:1    0  500M  0 part  /boot/efi
└─vda2           252:2    0    9G  0 part
  └─md0            9:0    0    9G  0 raid1
    └─vg0-root   253:0    0    8G  0 lvm   /
vdb              252:16   0   10G  0 disk
├─vdb1           252:17   0  500M  0 part
└─vdb2           252:18   0    9G  0 part
  └─md0            9:0    0    9G  0 raid1
    └─vg0-root   253:0    0    8G  0 lvm   /
vdc              252:32   0   10G  0 disk
├─vdc1           252:33   0   10G  0 part
└─vdc9           252:41   0    8M  0 part
vdd              252:48   0   10G  0 disk
├─vdd1           252:49   0   10G  0 part
└─vdd9           252:57   0    8M  0 part
vde              252:64   0   10G  0 disk
├─vde1           252:65   0   10G  0 part
└─vde9           252:73   0    8M  0 part
vdf              252:80   0   10G  0 disk
├─vdf1           252:81   0   10G  0 part
└─vdf9           252:89   0    8M  0 part
```
Drives vda and vdb are for the main filesystem; they use RAID 1. Drives vdc, vdd, vde, and vdf are in a raidz2 zpool.
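Inside the VM, the two arrays can be inspected directly (assuming mdadm, LVM, and the ZFS utilities are installed, as the layout above implies):
```
$ cat /proc/mdstat     # state of the md0 RAID 1 mirror
$ sudo zpool status    # state of the raidz2 pool on vdc..vdf
$ sudo lvs             # the vg0-root logical volume on top of md0
```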