# Mirror Env

This script helps quickly set up a VM development environment for CS Club's mirrors.

## Installation

Install the following dependencies:

- qemu
- genisoimage
- ovmf (find the location of OVMF\_CODE.fd; it is system dependent)

Edit the configuration variables in the script file to reflect your system.

The `mirror-vm` script will automatically install Ubuntu using the [autoinstall](https://ubuntu.com/server/docs/install/autoinstall) feature.

```
$ curl -O https://releases.ubuntu.com/20.04/ubuntu-20.04.2-live-server-amd64.iso
$ ./mirror-vm create
$ ./mirror-vm run-install
```

To view the VM, you also need a VNC viewer. [TigerVNC](https://github.com/TigerVNC/tigervnc) is a good choice. Simply run

```
$ vncviewer :5900
```

Due to the way the Ubuntu autoinstall is designed, user confirmation is required to start the autoinstallation process. Enter `yes` when prompted with this line:

```
Continue with autoinstall (yes|no)
```

After going through the install process, you can run the VM with

```
$ ./mirror-vm run
```

The default login user has

```
username: ubuntu
password: ubuntu
```

## Post-Installation

The Ubuntu autoinstall can only handle a basic installation. We need a more powerful tool to configure the post-install environment, so we will be using Ansible.

First, install ansible and sshpass. Perform all of the following commands in the `post-install/` directory.

Check that Ansible can talk to the VM:

```
$ ansible -m ping all
```

We can now complete the rest of the post-install with

```
$ ansible-playbook -K playbook.yml
```

## System Details

For those that are interested.
Relevant `lsblk` output:

```
NAME           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda            252:0    0   10G  0 disk
├─vda1         252:1    0  500M  0 part  /boot/efi
└─vda2         252:2    0    9G  0 part
  └─md0          9:0    0    9G  0 raid1
    └─vg0-root 253:0    0    8G  0 lvm   /
vdb            252:16   0   10G  0 disk
├─vdb1         252:17   0  500M  0 part
└─vdb2         252:18   0    9G  0 part
  └─md0          9:0    0    9G  0 raid1
    └─vg0-root 253:0    0    8G  0 lvm   /
vdc            252:32   0   10G  0 disk
├─vdc1         252:33   0   10G  0 part
└─vdc9         252:41   0    8M  0 part
vdd            252:48   0   10G  0 disk
├─vdd1         252:49   0   10G  0 part
└─vdd9         252:57   0    8M  0 part
vde            252:64   0   10G  0 disk
├─vde1         252:65   0   10G  0 part
└─vde9         252:73   0    8M  0 part
vdf            252:80   0   10G  0 disk
├─vdf1         252:81   0   10G  0 part
└─vdf9         252:89   0    8M  0 part
```

Drives vda and vdb hold the main filesystem in a raid1 array. Drives vdc, vdd, vde, and vdf are in a raidz2 zpool.
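The usable capacity implied by this layout can be sketched in shell. The sizes below are hard-coded from the `lsblk` output above (assumptions, not live queries): a raid1 mirror provides the capacity of a single member, while raidz2 gives up two members' worth of space to parity.

```shell
# Sizes taken from the lsblk output above (GiB), not queried from a live system.
RAID1_MEMBER=9        # vda2 and vdb2 are 9G each
RAIDZ2_MEMBERS=4      # vdc, vdd, vde, vdf
RAIDZ2_MEMBER=10      # 10G each

# raid1: capacity of one member; raidz2: members minus two parity drives.
RAID1_USABLE=$RAID1_MEMBER
RAIDZ2_USABLE=$(( (RAIDZ2_MEMBERS - 2) * RAIDZ2_MEMBER ))

echo "raid1 usable: ${RAID1_USABLE}G, raidz2 usable: ${RAIDZ2_USABLE}G"
# → raid1 usable: 9G, raidz2 usable: 20G
```

Inside the VM, the live numbers can be confirmed with `cat /proc/mdstat` and `zpool list` (filesystem overhead will make them slightly smaller).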