Making sense of Proxmox bootloaders
The Proxmox installer can be quite mysterious: it tries to support all kinds of systems, be it UEFI 1 or BIOS 2, and lets you choose among several very different filesystems on which the host system will reside. But on one popular setup - a UEFI system without SecureBoot on ZFS - it will, out of the blue, set you up with a different bootloader than in all the other cases - and it is NOT blue, as GRUB 3 would have been. This is, nowadays, completely unnecessary and confusing.
UEFI or BIOS
There are two widely known ways of starting up a system depending on its firmware: the more modern UEFI and the - by now also referred to as "legacy" - BIOS. The important difference is where they look for the initial code to execute from the disk, typically referred to as a bootloader. A BIOS implementation looks for a Master Boot Record (MBR), a special sector of a disk partitioned under the scheme of the same name. Modern UEFI instead looks for an entire designated EFI System Partition (ESP), which in turn depends on a scheme referred to as the GUID Partition Table (GPT).
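The distinction is easy to check from a running Linux system: the kernel exposes the /sys/firmware/efi directory only when it was started via UEFI firmware services. A minimal sketch:

```shell
#!/bin/sh
# Print how the running kernel was started: /sys/firmware/efi exists
# only when the kernel was booted through UEFI firmware services.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot"
else
    echo "BIOS boot (or UEFI in CSM/legacy mode)"
fi
```

Note that a BIOS result here cannot distinguish true legacy firmware from a UEFI board booting through its compatibility mode - the kernel simply was not started via UEFI services either way.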
Legacy CSM mode
It would be natural to expect that a modern UEFI system will only support the newer method - and currently that is often the case - but some are equipped with a so-called Compatibility Support Module (CSM) mode that emulates BIOS behaviour and, to complicate matters further, can then boot from the original MBR scheme as well. Similarly, a BIOS-booting system can also work with the GPT partitioning scheme - in which case yet another special partition must be present: the BIOS boot partition (BBP). Note that there is firmware out there that can be very creative in guessing how to boot up a system, especially if the GPT contains such a BBP.
SecureBoot
UEFI boots can further support SecureBoot - a method to ascertain that the bootloader has NOT been compromised, e.g. by malware, via a rather elaborate chain of steps in which cryptographic signatures are verified at different phases. UEFI first loads its keys, then loads a shim whose signature must be valid; this component then validates all the following code that is yet to be loaded. The shim maintains its own Machine Owner Keys (MOK) that it uses to authenticate the actual bootloader, e.g. GRUB, and then the kernel images. The kernel may use UEFI keys, MOK keys or its own keys to validate modules that get loaded further on. More would be out of scope of this post, but all of the above puts further requirements on e.g. the bootloader setup that need to be accommodated.
The Proxmox way
The official docs on the Proxmox bootloader 4 cover almost everything, but without much reasoning. As the installer also needs to support everything, there are some unexpected surprises if you are coming from, e.g., a regular Debian install.
First, the partitioning is always GPT and the structure always includes the BBP as well as ESP partitions, no matter which bootloader is at play. This is good to know, as one can often guess how a system boots just by looking at its partitioning - but not with Proxmox.
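The two special partitions can still be told apart by their GPT type GUIDs, which tools such as lsblk -o NAME,PARTTYPE or sgdisk print. A small helper mapping the well-known GUIDs (fixed by the UEFI specification and GPT conventions) to their roles - a sketch:

```shell
#!/bin/sh
# Map well-known GPT partition type GUIDs to their boot-related roles.
# These GUIDs are fixed identifiers, the same on every GPT disk.
part_role() {
    case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
        21686148-6449-6e6f-744e-656564454649) echo "BIOS boot partition (BBP)" ;;
        c12a7328-f81f-11d2-ba4b-00a0c93ec93b) echo "EFI system partition (ESP)" ;;
        *) echo "other" ;;
    esac
}

part_role C12A7328-F81F-11D2-BA4B-00A0C93EC93B   # EFI system partition (ESP)
part_role 21686148-6449-6E6F-744E-656564454649   # BIOS boot partition (BBP)
```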
Further, what would typically reside in /boot can actually be on the ESP itself - mounted at /boot/efi, which is always a FAT partition - to better support the non-standard ZFS root. This might be very counter-intuitive to navigate across different installs.
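Whether an ESP is actually mounted there can be read off /proc/mounts. The helper below takes an optional mounts file so it can be tried anywhere; the /boot/efi path is the convention used by Proxmox:

```shell
#!/bin/sh
# Report the filesystem type mounted at a given path, reading /proc/mounts
# by default (or any file in the same format, handy for experimenting).
fstype_at() {
    awk -v mp="$1" '$2 == mp { print $3 }' "${2:-/proc/mounts}"
}

# On a Proxmox systemd-boot install this is expected to print "vfat":
fstype_at /boot/efi
```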
All BIOS-booting systems end up booting with the out-of-the-box "blue menu" of trusty GRUB. What about the rest?
Closer look
You can confirm a BIOS-booting system by querying EFI variables - not present on such a system - with efibootmgr: 5
efibootmgr -v
EFI variables are not supported on this system.
UEFI systems are well supported by GRUB too, so a UEFI system may still use it, but other bootloaders are available. In the mentioned case of a ZFS install on a UEFI system without SecureBoot - and only then - a completely different bootloader will be at play: systemd-boot. 6
Recognisable by its spartan all-black boot menu - which shows virtually no hints on any options, let alone hotkeys - systemd-boot has its EFI boot entry marked discreetly as Linux Boot Manager, which can also be verified from a running system:
efibootmgr -v | grep -e BootCurrent -e systemd -e proxmox
BootCurrent: 0004
Boot0004* Linux Boot Manager HD(2,GPT,198e93df-0b62-4819-868b-424f75fe7ca2,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Meanwhile with GRUB as a bootloader - on a UEFI system - the entry is just marked as proxmox:
BootCurrent: 0004
Boot0004* proxmox HD(2,GPT,51c77ac5-c44a-45e4-b46a-f04187c01893,0x800,0x100000)/File(\EFI\proxmox\shimx64.efi)
If you want to check whether SecureBoot is enabled on such a system, mokutil 7 comes to assist:
mokutil --sb-state
Confirming either:
SecureBoot enabled
or:
SecureBoot disabled
Platform is in Setup Mode
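Under the hood, mokutil just reads an EFI variable, so the state can also be checked directly via efivarfs. A sketch, assuming a UEFI system with efivarfs mounted: the SecureBoot variable file starts with a 4-byte attributes header, and the data byte after it is 1 when enabled.

```shell
#!/bin/sh
# Read the SecureBoot state straight from efivarfs: skip the 4-byte
# attributes header and inspect the single-byte variable value.
var=/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c
if [ -r "$var" ]; then
    if [ "$(od -An -tu1 -j4 -N1 "$var" | tr -d ' ')" = "1" ]; then
        echo "SecureBoot enabled"
    else
        echo "SecureBoot disabled"
    fi
else
    echo "SecureBoot variable not available (BIOS boot or efivarfs not mounted)"
fi
```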
All at your disposal
The above methods are quite reliable - better than attempting to assess what is in place from the available tooling. Proxmox simply equips you with all the tools for all the possible boot methods, which you can check:
apt list --installed grub-pc grub-pc-bin grub-efi-amd64 systemd-boot
grub-efi-amd64/now 2.06-13+pmx2 amd64 [installed,local]
grub-pc-bin/now 2.06-13+pmx2 amd64 [installed,local]
systemd-boot/now 252.31-1~deb12u1 amd64 [installed,local]
While this cannot be used to find out how the system has booted up, it can rule options out: grub-pc-bin is the BIOS bootloader, 8 but with grub-pc 9 NOT installed, there was no way to put a BIOS boot setup into place here - unless it got removed since. This is important to keep in mind when following generic tutorials on handling booting.
With Proxmox, one can simply end up running the wrong bootloader-update commands for the install at hand. The installer itself should be presumed to produce the same type of install as the one it managed to boot itself into, but what happens afterwards can change this.
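One way to tell which of the two bootloaders actually started the current UEFI session: systemd-boot implements the systemd Boot Loader Interface and leaves a LoaderInfo EFI variable behind, which stock GRUB does not. A heuristic sketch based on that assumption:

```shell
#!/bin/sh
# Heuristic: systemd-boot records itself in the LoaderInfo EFI variable
# (Boot Loader Interface); stock GRUB leaves no such variable behind.
efivars=/sys/firmware/efi/efivars
if [ ! -d "$efivars" ]; then
    echo "BIOS boot - no EFI variables at all"
elif [ -e "$efivars/LoaderInfo-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f" ]; then
    echo "UEFI boot via systemd-boot"
else
    echo "UEFI boot, but not via systemd-boot (likely GRUB)"
fi
```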
Why is it this way
The short answer would be: due to historical reasons, as the official docs would attest to. 10 GRUB once had limited support for ZFS, which would eventually cause issues, e.g. after a pool upgrade. So systemd-boot was chosen as a solution; however, it was not good enough for SecureBoot at the time that support arrived in v8.1. Essentially, and for now, GRUB appears to be the more robust bootloader, at least until UKIs take over. 11 While this was all getting a bit complicated, at least there was meant to be a streamlined method to manage it.
Proxmox boot tool
The proxmox-boot-tool (originally pve-efiboot-tool) was apparently meant to assist with some of these woes. It was meant to be opt-in for setups exactly like the ZFS install. Further features are present, such as "synchronising" ESP partitions in mirrored installs or pinning kernels. It abstracts away the mechanics described here, but blurs the understanding of them, especially as it has no dedicated manual page or any further documentation beyond the already referenced generic section on all things bootloading. 4 The tool has a simple help argument which prints a summary of the supported sub-commands:
proxmox-boot-tool help
Kernel pinning options skipped, reformatted for readability:
format <partition> [--force]
format <partition> as EFI system partition. Use --force to format
even if <partition> is currently in use.
init <partition>
initialize EFI system partition at <partition> for automatic
synchronization of Proxmox kernels and their associated initrds.
reinit
reinitialize all configured EFI system partitions
from /etc/kernel/proxmox-boot-uuids.
clean [--dry-run]
remove no longer existing EFI system partition UUIDs
from /etc/kernel/proxmox-boot-uuids. Use --dry-run
to only print outdated entries instead of removing them.
refresh [--hook <name>]
refresh all configured EFI system partitions.
Use --hook to only run the specified hook, omit to run all.
---8<---
status [--quiet]
Print details about the ESPs configuration.
Exits with 0 if any ESP is configured, else with 2.
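As a minimal, hedged example of the intended flow - the partition path below is purely illustrative, and the destructive steps are left commented out; they must only ever be run against a spare ESP on a real Proxmox host:

```shell
#!/bin/sh
# Sketch of the opt-in workflow; guarded so it is a no-op on systems
# without the tool. /dev/sdb2 is a placeholder for a new, unused ESP.
if command -v proxmox-boot-tool >/dev/null 2>&1; then
    proxmox-boot-tool status                 # list ESPs already under management
    # proxmox-boot-tool format /dev/sdb2     # destructive: formats as ESP
    # proxmox-boot-tool init /dev/sdb2       # register + sync kernels onto it
    # proxmox-boot-tool refresh              # re-sync after kernel changes
else
    echo "proxmox-boot-tool not installed - not a Proxmox system"
fi
```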
But make no mistake, this tool is not in use on e.g. a BIOS install or non-ZFS UEFI installs.
Better understanding
If you are looking to thoroughly understand the (not only EFI) boot process, there are certainly resources around, beyond reading through specifications, typically dedicated to each distribution as per its practices. Proxmox adds complexity due to the range of installation options it needs to cover, a uniform partition setup (the same for every install, unnecessarily) and a not-so-well documented deviation in the choice of its default bootloader, which does not serve its original purpose anymore.
If you wonder whether to continue using systemd-boot (which has different configuration locations than GRUB) for that sole ZFS install of yours, while (almost) everyone out there as of today uses GRUB, there is a follow-up guide available on replacing systemd-boot with regular GRUB, which does so manually, to also make it completely transparent how the system works. It also glances at removing the unnecessary BIOS boot partition, which may pose issues on some legacy systems. Taking that path will also let you effectively "opt out" of the proxmox-boot-tool, which was meant to be opt-in originally but is imposed on you in this case.
That said, you can continue using systemd-boot, or even venture to switch to it instead (some prefer its simplicity - though it is only possible for UEFI installs); just keep in mind that most instructions out there assume GRUB is at play, and adjust your steps accordingly.
https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf ↩︎
https://www.scs.stanford.edu/nyu/04fa/lab/specsbbs101.pdf ↩︎
https://manpages.debian.org/bookworm/efibootmgr/efibootmgr.8.en.html ↩︎
https://www.freedesktop.org/wiki/Software/systemd/systemd-boot/ ↩︎
https://manpages.debian.org/bookworm/mokutil/mokutil.1.en.html ↩︎
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool ↩︎
https://github.com/uapi-group/specifications/blob/main/specs/unified_kernel_image.md ↩︎