A simple way to automate RHEL VM creation

I needed a quick way to create some virtual machines running RHEL 9 and 10, without going through manual steps.

Image creation

There are several ways to create a RHEL system image. One can boot an installation ISO and control the entire process by hand. Another option is booting the same ISO with a Kickstart file, but the file needs to be injected into the process or hosted somewhere.

Another way is to use a tool which builds an image for you. Instead of evaluating each of them, I asked my colleagues. One of them pointed me to the Red Hat Insights Image Builder.

Another link I got was the Ansible Image Builder repo by Eric Nothen. A simple solution to my needs, and easy to install:

$ ansible-galaxy collection install enothen.image_builder

API access

The Red Hat Customer Portal allows you to create API tokens which stay active as long as you use them (they expire after 30 days of inactivity).

So I went there, created a token and stored it in Ansible Vault (as is usual for guarding access to secrets).
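
The token can be put into the vault straight from the command line. The variable name below is just an example; check the collection's documentation for the name it actually expects:

$ ansible-vault encrypt_string 'the-api-token-from-the-portal' --name 'offline_token'

The resulting encrypted block then goes into a vars file loaded by the playbook.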

Image definition

For a start I had a simple need — a basic image with some packages added:

packages:
  base:
    - jq
    - sysstat
    - kernel
    - kernel-64k
    - git
    - make

This went into the ‘group_vars/all/main.yaml’ file, and then the rest of the customisations followed:

hrw:
  locale:
    keyboard: pl
    languages:
      - pl_PL.UTF-8
  firewall:
    services:
      enabled:
        - ssh
  services:
    enabled:
      - crond
      - systemd-journald
      - rsyslog
    masked:
      - nfs-server
      - rpcbind
      - autofs
      - bluetooth
      - nftables
  users:
    - name: marcin
      groups:
        - wheel
      password: "{{ 'some testing password' | ansible.builtin.password_hash }}"
      hasPassword: true

And finally an example image:

images:
  - name: rhel-9
    rhel_version: 9
    distribution: rhel-9
    requests:
      architecture: aarch64
    customizations:
      filesystem: "{{ filesystems.base }}"
      packages: "{{ packages.base }}"
      locale: "{{ hrw.locale }}"
      firewall: "{{ hrw.firewall }}"
      services: "{{ hrw.services }}"
      users: "{{ hrw.users }}"

At this stage, running the example/playbooks/create-and-download.yaml playbook from the collection would be enough to create and download the images.
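
Something along these lines (the vault option depends on how the token is stored):

$ ansible-playbook example/playbooks/create-and-download.yaml --ask-vault-pass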

Creating a virtual machine

Getting a working image was one part of the task. The next one was creating a virtual machine from that image.

For that, I used the community.libvirt collection from Ansible Galaxy. Ready to use, no need to reinvent the wheel.
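
Installation is the same one-liner as before:

$ ansible-galaxy collection install community.libvirt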

I created a single tasks file which creates a VM with 8 GB of memory, 8 vCPUs and the disk image fetched in one of the previous steps:

---
- name: Create VM for {{ image.name }}
  community.libvirt.virt:
    command: define
    xml: "{{ lookup('template', 'vm.xml.j2') }}"
  vars:
    vm_setup:
      name: "{{ image.name }}"
      memory_size: 8388608 # 8 GB, expressed in KiB
      core_count: 8
      disk_path: "{{ storage_dir }}/{{ image.name }}.qcow2"
      rhel_version: "{{ image.rhel_version }}"

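One thing to keep in mind: 'command: define' only registers the domain in libvirt, it does not boot it. A follow-up task using the same module (a minimal sketch) can start the machine:

- name: Start VM for {{ image.name }}
  community.libvirt.virt:
    name: "{{ image.name }}"
    state: running
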
The worst part was creating the XML template required by libvirt. I took the easy way: dumped the definition of a RHEL VM I had set up in the past and then changed it to use variables.
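
The dump itself is a one-liner (the domain name is whichever existing machine you copy from, and the target path assumes the usual role layout):

$ virsh dumpxml some-existing-rhel-vm > templates/vm.xml.j2

After swapping the name, memory size, vCPU count, RHEL version and disk path for Jinja2 variables, the template looked like this: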

<domain type='kvm' id='2'>
  <name>{{ vm_setup.name }}</name>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://redhat.com/rhel/{{ vm_setup.rhel_version }}-unknown"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>{{ vm_setup.memory_size }}</memory>
  <currentMemory unit='KiB'>{{ vm_setup.memory_size }}</currentMemory>
  <vcpu placement='static'>{{ vm_setup.core_count }}</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os firmware='efi'>
    <type arch='aarch64' machine='virt'>hvm</type>
    <firmware>
      <feature enabled='no' name='enrolled-keys'/>
      <feature enabled='no' name='secure-boot'/>
    </firmware>
    <loader readonly='yes' type='pflash' format='qcow2'>/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.qcow2</loader>
    <nvram template='/usr/share/edk2/aarch64/vars-template-pflash.qcow2' templateFormat='qcow2' format='qcow2'>/var/lib/libvirt/qemu/nvram/{{ vm_setup.name }}_VARS.qcow2</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <gic version='3'/>
  </features>
  <cpu mode='host-passthrough' check='none'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-aarch64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{{ vm_setup.disk_path }}' index='1'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <alias name='pci.7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0xf'/>
      <alias name='pci.8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x10'/>
      <alias name='pci.9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x11'/>
      <alias name='pci.10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='11' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='11' port='0x12'/>
      <alias name='pci.11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0x13'/>
      <alias name='pci.12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='13' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='13' port='0x14'/>
      <alias name='pci.13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='14' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='14' port='0x15'/>
      <alias name='pci.14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='network'>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/5'/>
      <target type='system-serial' port='0'>
        <model name='pl011'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/5'>
      <source path='/dev/pts/5'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/run/libvirt/qemu/channel/2-{{ vm_setup.name }}/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='keyboard' bus='usb'>
      <address type='usb' bus='0' port='2'/>
    </input>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0'/>
      <alias name='tpm0'/>
    </tpm>
    <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
      <image compression='off'/>
    </graphics>
    <video>
      <model type='virtio' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>

Some parts of the template could be dropped; most of the time I do not need things like a graphics card or USB devices.

The final playbook

The final playbook did two things: it built and downloaded the images, then created a virtual machine for each of them:

---
- name: Build image on Red Hat Image Builder and create VM
  hosts: localhost
  gather_facts: false
  become: false

  vars_files:
    - group_vars/all/main.yaml

  tasks:
    - name: Handle Red Hat Image Builder
      block:
        - name: Check if image already exists on disks
          tags: filecheck
          ansible.builtin.import_role:
            name: enothen.image_builder.check_images_exist

        - name: Get refresh token
          tags: token
          ansible.builtin.import_role:
            name: enothen.image_builder.get_refresh_token

        - name: Request creation of images
          tags: request
          ansible.builtin.import_role:
            name: enothen.image_builder.request_image_creation

        - name: Verify composes finished
          tags: verify
          ansible.builtin.import_role:
            name: enothen.image_builder.verify_compose_finished

        - name: Download images
          tags: download
          ansible.builtin.import_role:
            name: enothen.image_builder.download_images

    - name: Create VM
      ansible.builtin.include_role:
        name: vm
      loop: "{{ images }}"
      loop_control:
        loop_var: image
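
A single run then builds whatever is missing and defines the virtual machines (the playbook file name is whatever you saved it as):

$ ansible-playbook build-and-create.yaml --ask-vault-pass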

Result

As a result I got a set of virtual machines with exactly what I wanted.

Next steps will be handling things like “subscription-manager” and running locally built test kernels. But that’s a topic for another time.

aarch64 ansible development red hat