Mirror Env

This script helps quickly set up a VM development environment for the CS Club's mirrors.

VM Installation Option 1 (qemu-vm.yml)

Note: these instructions need to be rewritten.

Install the following dependencies:

  • qemu
  • genisoimage
  • ovmf (find the location of OVMF_CODE.fd, it is system dependent)
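Since the OVMF firmware path varies by distribution, one way to locate it is to probe the usual install locations. This is a small sketch; the candidate paths below are common defaults (Debian/Ubuntu and Arch), not an exhaustive list:

```shell
#!/bin/sh
# Print the first OVMF_CODE.fd found among the given candidate paths.
find_ovmf() {
    for p in "$@"; do
        if [ -f "$p" ]; then
            echo "$p"
            return 0
        fi
    done
    echo "OVMF_CODE.fd not found; install the ovmf package" >&2
    return 1
}

# Common locations; add your distribution's path if it differs.
find_ovmf \
    /usr/share/OVMF/OVMF_CODE.fd \
    /usr/share/ovmf/x64/OVMF_CODE.fd \
    /usr/share/edk2-ovmf/x64/OVMF_CODE.fd \
    || echo "(set the OVMF path in the script by hand)"
```

Whatever path this prints is what goes into the script's configuration variables.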

Edit the configuration variables in the script file to reflect your system.

The mirror-vm script will automatically install Ubuntu using the autoinstall feature.

$ curl -O https://releases.ubuntu.com/20.04/ubuntu-20.04.2-live-server-amd64.iso
$ ./mirror-vm create
$ ./mirror-vm run-install

To view the VM, you also need a VNC viewer; TigerVNC is a good choice. Simply run

$ vncviewer :5900

Due to the way Ubuntu's autoinstall is designed, user confirmation is required to start the autoinstallation process. Enter yes when prompted with this line:

Continue with autoinstall (yes|no)
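This prompt is the installer's safety check before it wipes the disk: it appears when an autoinstall config is found on the media but `autoinstall` was not passed on the kernel command line. The config itself is a cloud-init user-data file; a minimal sketch is shown below. The hostname and password hash are illustrative placeholders, and the real config presumably lives alongside the mirror-vm script:

```yaml
#cloud-config
# Hypothetical minimal autoinstall user-data; the config that mirror-vm
# actually uses may differ. All values below are placeholders.
autoinstall:
  version: 1
  identity:
    hostname: mirror-dev               # placeholder hostname
    username: ubuntu
    password: "$6$examplesalt$..."     # placeholder SHA-512 password hash
  ssh:
    install-server: true
```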

After going through the install process, you can run the VM with

$ ./mirror-vm run

The default login credentials are

username: ubuntu
password: ubuntu

Run Playbook

$ ansible-playbook qemu-vm.yml

VM Installation Option 2 (kvm-vm.yml)

Install Packages (Debian)

$ apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst virt-viewer ansible

Install Packages (Arch Linux)

$ pacman -S qemu libvirt virt-install virt-viewer ansible

Run Playbook

$ ansible-playbook kvm-vm.yml


Post-Install

The Ubuntu autoinstall can only handle a basic installation. We require a more powerful tool to configure the post-install environment. For this reason, we will be using Ansible.

First, install ansible and sshpass. Run all of the following commands from the post-install/ directory.

Also install the extra roles:

$ ansible-galaxy install -r requirements.yml
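Ansible also needs an inventory describing how to reach the VM. The post-install/ directory presumably ships one already; if you need to write your own, a minimal sketch might look like the following, where the address, port, and credentials are assumptions (matching the default ubuntu/ubuntu login and a hypothetical QEMU user-mode SSH port forward), not values taken from this repository:

```ini
# hosts.ini -- hypothetical inventory; adjust the address and port to
# however your VM is reachable over SSH.
[mirror]
127.0.0.1 ansible_port=2222 ansible_user=ubuntu ansible_password=ubuntu
```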

Check that ansible can talk to the VM:

$ ansible -m ping all

We can now complete the rest of the post-install with

$ ansible-playbook -K playbook.yml

System Details

$ lsblk
NAME           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda            252:0    0   10G  0 disk  
├─vda1         252:1    0  500M  0 part  /boot/efi
└─vda2         252:2    0    9G  0 part  
  └─md0          9:0    0    9G  0 raid1 
    └─vg0-root 253:0    0    8G  0 lvm   /
vdb            252:16   0   10G  0 disk  
├─vdb1         252:17   0  500M  0 part  
└─vdb2         252:18   0    9G  0 part  
  └─md0          9:0    0    9G  0 raid1 
    └─vg0-root 253:0    0    8G  0 lvm   /
vdc            252:32   0   10G  0 disk  
├─vdc1         252:33   0   10G  0 part  
└─vdc9         252:41   0    8M  0 part  
vdd            252:48   0   10G  0 disk  
├─vdd1         252:49   0   10G  0 part  
└─vdd9         252:57   0    8M  0 part  
vde            252:64   0   10G  0 disk  
├─vde1         252:65   0   10G  0 part  
└─vde9         252:73   0    8M  0 part  
vdf            252:80   0   10G  0 disk  
├─vdf1         252:81   0   10G  0 part  
└─vdf9         252:89   0    8M  0 part  

Drives vda and vdb hold the main filesystem: they are mirrored with RAID 1 (md0), with an LVM volume (vg0-root) on top, mounted at /. Drives vdc, vdd, vde, and vdf are in a raidz2 zpool.
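Inside the VM, the health of both arrays can be checked with the standard tools. The sketch below degrades gracefully when run on a host without md arrays or ZFS installed:

```shell
#!/bin/sh
# Report the state of the md RAID 1 mirror (vda/vdb) and the raidz2 zpool,
# if the respective interfaces/tools exist on this machine.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no md arrays visible on this host"
fi

if command -v zpool >/dev/null 2>&1; then
    zpool status
else
    echo "zfs tools not installed on this host"
fi
```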