Created attachment 594502 [details]
New VM custom config dialog

I have the following line in my /etc/libvirt/qemu.conf:

nvram = [ "/usr/share/edk2-ovmf/OVMF_CODE.fd:/usr/share/edk2-ovmf/OVMF_VARS.fd" ]

Both files exist:

anton@PF16W6Y2 ~ $ ls -l /usr/share/edk2-ovmf/
total 7096
-rw-r--r-- 1 root root   19392 Aug 19 20:02 EnrollDefaultKeys.efi
-rw-r--r-- 1 root root 1966080 Aug 19 20:02 OVMF_CODE.fd
-rw-r--r-- 1 root root 1966080 Aug 19 20:02 OVMF_CODE.secboot.fd
-rw-r--r-- 1 root root  131072 Aug 19 20:02 OVMF_VARS.fd
-rw-r--r-- 1 root root  131072 Aug 19 20:02 OVMF_VARS.secboot.fd
-rw-r--r-- 1 root root  940224 Aug 19 20:02 Shell.efi
-rw-r--r-- 1 root root 1849344 Aug 19 20:02 UefiShell.iso

However, when I try to create a new UEFI x86_64 VM, the list of available UEFI images is hardcoded rather than imported from the nvram list. See attachment.
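For reference, each entry in that nvram list is a colon-separated "CODE:VARS" pair, pointing at the firmware code image and the pristine variable-store template respectively. A minimal sketch of how one entry splits (the helper function is only illustrative, not libvirt's actual parser):

```python
# Illustrative only: split a qemu.conf nvram entry into its
# firmware-code and variable-store halves ("CODE:VARS" format).
def split_nvram_entry(entry: str) -> tuple[str, str]:
    code, _, varstore = entry.partition(":")
    return code, varstore

entry = "/usr/share/edk2-ovmf/OVMF_CODE.fd:/usr/share/edk2-ovmf/OVMF_VARS.fd"
code, varstore = split_nvram_entry(entry)
print(code)      # /usr/share/edk2-ovmf/OVMF_CODE.fd
print(varstore)  # /usr/share/edk2-ovmf/OVMF_VARS.fd
```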
app-emulation/qemu-4.1.0 also has a bug with a missing VARS image for UEFI x86_64, making it impossible to create a new x86_64 UEFI VM.
This is in fact a genuine libvirt bug. Also reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1776949 Fixed in v5.10.0-507-g8e1804f9f6.
mhm. Let's backport the fix to our (currently stable) 5.10.0 then.
*** Bug 698878 has been marked as a duplicate of this bug. ***
I have tried 5.8.0, 5.10.0, and 9999, and they all fail to recognise the nvram list. I tried all of these yesterday.
I discovered a fix for this that restores usable functionality in virt-manager, by editing the XML template. virt-manager omits the nvram path entry from the XML template, and adding an nvram entry to qemu.conf is supposedly an obsolete approach? This worked for me, however:

<os>
  <type arch="x86_64" machine="pc-q35-4.2">hvm</type>
  <loader readonly="yes" secure="no" type="pflash">/usr/share/qemu/edk2-x86_64-code.fd</loader>
  <nvram>/usr/share/edk2-ovmf/OVMF_VARS.fd</nvram>
  <boot dev="hd"/>
</os>

One caveat that could be worth investigating: when secure="yes" is enabled, libvirt returns an error stating that SMM is not available and secure boot cannot be used, even with the secboot firmware configured for loader and nvram in the XML template.
Summary to my previous comment. After meddling with virt-manager's XML editor a bit more, I discovered that the fault here may be in how virt-manager configures guest VM XML templates when using qemu 4.2.0 and libvirt 5.10.0.

I reviewed the XML template for an Arch Linux guest VM I had previously installed using secure boot, the q35 chipset, and older versions of qemu/libvirt, and noticed this in the template:

<os>
  <type arch="x86_64" machine="pc-q35-4.0.1">hvm</type>
  <loader readonly="yes" secure="yes" type="pflash">/usr/share/edk2-ovmf/OVMF_CODE.secboot.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/archlinux_VARS.fd</nvram>
  <boot dev="hd"/>
  <bootmenu enable="no"/>
</os>
<features>
  <acpi/>
  <apic/>
  <vmport state="off"/>
  <smm state="on"/>

This VM was installed with a previous version of qemu and libvirt, which at the time had the nvram variable hardcoded in qemu.conf to configure the UEFI firmware file paths, and this guest worked fine after the upgrade to qemu 4.2.0-rX and libvirt 5.10.0.

I was creating a new guest VM for Windows Server 2019 earlier today when I researched this and discovered the workaround I mentioned previously. Looking a little closer at the XML template created by virt-manager using libvirt 5.10.0 / qemu 4.2.0-r1, I noticed that the template was incorrectly formatted and that qemu (or libvirt?) had not copied either OVMF_VARS.fd or OVMF_VARS.secboot.fd to /var/lib/libvirt/qemu/nvram/win2k19_VARS.fd. In addition, <smm state="on"/> was omitted from the XML config for the win2k19 server guest. I fixed this by copying the missing OVMF_VARS.secboot.fd (when using secure boot UEFI) and adding <smm state="on"/> to the XML config.
<os>
  <type arch="x86_64" machine="pc-q35-4.2">hvm</type>
  <loader readonly="yes" secure="yes" type="pflash">/usr/share/edk2-ovmf/OVMF_CODE.secboot.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/win2k19_VARS.fd</nvram>
  <boot dev="hd"/>
  <bootmenu enable="no"/>
</os>
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state="on"/>
    <vapic state="on"/>
    <spinlocks state="on" retries="8191"/>
  </hyperv>
  <vmport state="off"/>
  <smm state="on"/>

Naturally, it appears that swapping out the nvram/OVMF_VARS.fd file broke the guest OS install, which now boots to system repair, but the guest VM started without errors stating that either the nvram store was not available or that SMM was not enabled.

There is a mechanism I'm unaware of that copies the desired nvram file from within /usr/share/edk2-ovmf/ to /var/lib/libvirt/qemu/nvram/guestvm_VARS.fd, and it appears to be nonfunctional when the nvram entry is missing from qemu.conf. When I addressed these inconsistencies manually, the issue was resolved and qemu guest VMs started and functioned without issue.
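The copy step described above can be approximated by hand when libvirt does not perform it. A rough sketch of the idea, assuming the usual layout (the guest name, template choice, and `prepare_varstore` helper are illustrative, not libvirt's actual code):

```python
import shutil
from pathlib import Path

# Rough approximation of the mechanism described above: copy the
# pristine VARS template into the per-guest nvram directory, named
# <guest>_VARS.fd. Paths and names here are illustrative only.
def prepare_varstore(template: str, nvram_dir: str, guest: str) -> Path:
    dest = Path(nvram_dir) / f"{guest}_VARS.fd"
    if not dest.exists():  # never clobber an existing guest's UEFI variables
        shutil.copy2(template, dest)
    return dest

# e.g. prepare_varstore("/usr/share/edk2-ovmf/OVMF_VARS.secboot.fd",
#                       "/var/lib/libvirt/qemu/nvram", "win2k19")
```

The no-clobber check matters: the per-guest VARS file holds the guest's own UEFI variables (boot entries, enrolled keys), so overwriting it resets the guest's firmware state, which is presumably why replacing the file broke the existing install above.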
Since the oldest version in the Portage tree is libvirt-6.0.0, I'm inclined to close this one. However, I think we should confirm that the problem went away. Michael, can you please confirm that upgrading to 6.0.0 or newer fixes the bug for you? Thanks.
I'm seeing something really similar to this on a clean install of qemu and libvirt 6.8.0. Meanwhile, the one I set up a couple of years ago and upgraded seems to be working correctly. But if I copy the old system's nvram entries over to the new one's qemu.conf, libvirt still claims it can't find any UEFI options. I'm currently playing with USE flags to see if there's something I missed.