Restore entire host from backup
Previously, we created a full root filesystem backup of our host. It’s time to build a freshly restored host from it - one that may or may not share the exact same disk capacity, partitioning or even filesystems. This is also a perfect opportunity to change e.g. filesystem properties that cannot be changed so easily after an install.
Full restore principle
We have the most important part of the system - the contents of the root filesystem - in an archive created with the stock tar
1 tool, with preserved permissions and correct symbolic links. There is absolutely NO need to attempt to recreate low-level disk structures according to the original, let alone clone actual blocks of data. If anything, our restored backup should result in a defragmented system.
Important
This guide assumes you have backed up the non-root parts of your system (such as guests) separately and/or that they reside on shared storage anyway, which should be a regular setup for any serious, certainly any production-like, system.
Only two components are missing to get us running:
- a partition to restore it onto; and
- a bootloader that will bootstrap the system.
Note
The origin of the backup in terms of configuration does NOT matter. If we were e.g. changing mountpoints, we might, at worst, need to adjust a configuration file here or there after the restore. The original bootloader is also of little interest to us as we had NOT even backed it up.
UEFI system with ZFS
We will take a UEFI boot with ZFS on root as our example target system; we will, however, make a few changes compared to what a stock PVE install would provide and add a SWAP partition.
A live system to boot into is needed to make this happen. Generally speaking, this could be a regular Debian, 2 but for consistency, we will boot with the not-so-intuitive option of the ISO installer, 3 exactly as before when making the backup - that part is skipped here.
Warning
We are about to destroy ANY AND ALL original data structures on the disk of our choice where we intend to deploy our backup. It is prudent to only have the necessary storage attached so as not to inadvertently perform this on the “wrong” target device. Further, it would be unfortunate to detach the “wrong” devices by mistake to begin with, so always check targets by e.g. UUID, PARTUUID or PARTLABEL with blkid
4 before proceeding.
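For instance, a quick way to cross-check what is attached and how it is identified (the symlink directory is only populated once partitions with a PARTUUID exist):

# everything blkid currently knows about
blkid
# stable identifiers maintained by udev
ls -l /dev/disk/by-partuuid/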
Once booted into the live system, we set up network and SSH access as before - this is more comfortable, but not strictly necessary. As our example backup resides on a remote system, we will need the network for that purpose, but everything, including e.g. pre-prepared scripts, could instead be stored on a locally attached and mounted backup disk.
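Purely as a sketch of that local alternative - assuming a backup disk carrying a hypothetical filesystem label of backup - mounting it could look like this:

# mount the (hypothetically labelled) backup disk read-write under /backup
mkdir -p /backup
mount /dev/disk/by-label/backup /backup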
Disk structures
This is a UEFI system and we will make use of disk /dev/sda as the target in our case.
Caution
You want to adjust this according to your case; sda is typically the sole attached SATA disk on a system. Partitions are then numbered with a suffix, e.g. the first one as sda1. In case of an NVMe disk, it would be a bit different, with nvme0n1 for the entire device and the first partition designated nvme0n1p1 - the first 0 refers to the controller.
Be aware that these names are NOT fixed across reboots, i.e. what was designated as sda
before might appear as sdb
on a live system boot.
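One way to tell the disks apart regardless of such naming is by their model and serial numbers:

# show device names together with size, model and serial
lsblk -o NAME,SIZE,MODEL,SERIAL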
We can check with lsblk
5 what is available at first, but ours is a virtually empty system:
lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop0 squashfs 4.0
loop1 squashfs 4.0
sr0 iso9660 PVE 2024-11-20-21-45-59-00 0 100% /cdrom
sda
Another view of the disk itself with sgdisk: 6
sgdisk -p /dev/sda
Creating new GPT entries in memory.
Disk /dev/sda: 134217728 sectors, 64.0 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 83E0FED4-5213-4FC3-982A-6678E9458E0B
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 134217694
Partitions will be aligned on 2048-sector boundaries
Total free space is 134217661 sectors (64.0 GiB)
Number Start (sector) End (sector) Size Code Name
Note
We will make use of sgdisk as this allows for good reusability and is more error-proof, but if you prefer the interactive way, plain gdisk
7 is at your disposal to achieve the same.
Although our target appears empty, we want to make sure there are no confusing filesystem or partition table structures left behind from before:
Warning
The below is destructive to ALL PARTITIONS on the disk. If you only need to wipe some existing partitions or their content, skip this step and adjust the rest according to your use case.
wipefs -ab /dev/sda
sgdisk -Zo /dev/sda
Creating new GPT entries in memory.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
The wipefs
8 tool helps with destroying anything not known to sgdisk. You can run wipefs /dev/sda* (without the -a option) to actually see what is about to be deleted. Meanwhile, the -b option creates backups of the deleted signatures in the home directory.
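Should one of those backups ever be needed, the wipefs manual demonstrates writing a signature back with dd - the file name and offset below are purely illustrative:

# restore a previously backed-up signature at its original offset
dd if=/root/wipefs-sda2-0x00000438.bak of=/dev/sda2 seek=$((0x00000438)) bs=1 conv=notrunc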
Partitioning
Time to create the partitions. We do NOT need a BIOS boot partition on an EFI system, so we will skip it, but in line with Proxmox designations, we will make partition 2 the EFI partition and partition 3 the ZFS pool partition. We, however, want an extra partition at the end, for SWAP.
sgdisk -n "2:1M:+1G" -t "2:EF00" /dev/sda
sgdisk -n "3:0:-16G" -t "3:BF01" /dev/sda
sgdisk -n "4:0:0" -t "4:8200" /dev/sda
The EFI System Partition is numbered 2, offset 1M from the beginning, sized 1G, and it has to have type EF00. Partition 3 immediately follows it, fills up all the remaining space except for the last 16G and is marked (not entirely correctly, but as per Proxmox nomenclature) as BF01, a Solaris (ZFS) partition type. The final partition 4 is our SWAP, designated as such by type 8200.
Tip
You can list all types with sgdisk -L - these are the short designations; partition types are also marked by a PARTTYPE GUID, which can be seen with e.g. lsblk -o+PARTTYPE - NOT to be confused with PARTUUID. It is also possible to assign partition labels (PARTLABEL) with sgdisk -c, but these are of little functional use unless used for identification via /dev/disk/by-partlabel/, which is less common.
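If you did want such a label, e.g. on our SWAP partition, it would be a one-liner - the name swap here is an arbitrary choice:

# assign PARTLABEL "swap" to partition 4
sgdisk -c "4:swap" /dev/sda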
As for the SWAP partition, this is just an example we are adding here; you may completely ignore it. Further, spinning disk aficionados will point out that best practice is for a SWAP partition to reside at the beginning of the disk due to performance considerations, and they would be correct - though that is of less practical relevance nowadays. We want to stick with Proxmox stock numbering to avoid confusion. That said, partitions do NOT have to be numbered in the order they are laid out. We just want to keep everything easy to orient (not only) ourselves in.
Tip
If you like the idea of adding a regular SWAP partition to your existing ZFS install, you may use this approach to your benefit, but if you are making a new install, you can leave yourself some free space at the end in the advanced options of the installer 9 and simply create that one additional partition later.
We will now create a FAT filesystem on our EFI System Partition and prepare the SWAP space:
mkfs.vfat /dev/sda2
mkswap /dev/sda4
Let’s check, specifically for PARTUUID
and FSTYPE
after our setup:
lsblk -o+PARTUUID,FSTYPE
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS PARTUUID FSTYPE
loop0 7:0 0 103.5M 1 loop squashfs
loop1 7:1 0 508.9M 1 loop squashfs
sr0 11:0 1 1.3G 0 rom /cdrom iso9660
sda 253:0 0 64G 0 disk
|-sda2 253:2 0 1G 0 part c34d1bcd-ecf7-4d8f-9517-88c1fe403cd3 vfat
|-sda3 253:3 0 47G 0 part 330db730-bbd4-4b79-9eee-1e6baccb3fdd zfs_member
`-sda4 253:4 0 16G 0 part 5c1f22ad-ef9a-441b-8efb-5411779a8f4a swap
ZFS pool
And now the interesting part: we will create the ZFS pool and the usual datasets - this is to mimic a standard PVE install, 10 but the most important one is the root one, obviously. You are welcome to tweak the properties as you wish. Note that we are referencing our vdev by the PARTUUID of the zfs_member partition we had just created, as taken from the listing above.
zpool create -f -o cachefile=none -o ashift=12 rpool /dev/disk/by-partuuid/330db730-bbd4-4b79-9eee-1e6baccb3fdd
zfs create -u -p -o mountpoint=/ rpool/ROOT/pve-1
zfs create -o mountpoint=/var/lib/vz rpool/var-lib-vz
zfs create rpool/data
zfs set atime=on relatime=on compression=on checksum=on copies=1 rpool
zfs set acltype=posix rpool/ROOT/pve-1
Most of the above is out of scope for this post, but the best sources of information are to be found within the OpenZFS documentation of the respective commands used: zpool-create
11, zfs-create
12, zfs-set
13 and the ZFS dataset properties manual page. 14
Tip
This might be a good time to consider e.g. atime=off to avoid extra writes on merely reading the files. For the root dataset specifically, setting a refreservation might be prudent as well.
With SSD storage, you might also consider autotrim=on on rpool - this is a pool property. 15
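Should you opt for these tweaks, they would look something like the following - the 10G reservation is just an arbitrary example value:

# disable access time updates pool-wide (inherited by child datasets)
zfs set atime=off rpool
# guarantee space for the root dataset
zfs set refreservation=10G rpool/ROOT/pve-1
# enable automatic TRIM on the pool (SSD storage)
zpool set autotrim=on rpool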
There’s absolutely no output after a successful run of any of the above commands.
The situation can be checked with zpool status: 16
pool: rpool
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
330db730-bbd4-4b79-9eee-1e6baccb3fdd ONLINE 0 0 0
errors: No known data errors
And zfs list: 17
NAME USED AVAIL REFER MOUNTPOINT
rpool 996K 45.1G 96K none
rpool/ROOT 192K 45.1G 96K none
rpool/ROOT/pve-1 96K 45.1G 96K /
rpool/data 96K 45.1G 96K none
rpool/var-lib-vz 96K 45.1G 96K /var/lib/vz
Now let’s have this all mounted under /mnt on the live system - best to test it with an export
18 and subsequent import
19 of the pool:
zpool export rpool
zpool import -R /mnt rpool
Restore the backup
Our remote backup is still where we left it; let’s mount it with sshfs
20 - read-only, to be safe:
apt install -y sshfs
mkdir /backup
sshfs -o ro root@10.10.10.11:/root /backup
And restore it:
tar -C /mnt -xzvf /backup/backup.tar.gz
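Optionally, since nothing has modified the target since the extraction, the archive can be compared against what just landed on the disk - it should come back quiet:

# verify the extracted tree against the archive contents
tar -C /mnt -dzf /backup/backup.tar.gz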
Bootloader
We just need to add the bootloader. As this is a ZFS setup done the Proxmox way, everything necessary gets copied off the ZFS pool into the EFI System Partition itself - so the bootloader can have a go at it there and not worry about the nuances of its particular level of ZFS support.
For the sake of brevity, we will use their own script to do this for us, better known as proxmox-boot-tool
. 21
We need it to think that it is running on the actual system (which is not booted). We already know of chroot,
22 but here we will also need bind mounts 23 so that some special paths are properly accessible from the running (currently live-booted) system:
for i in /dev /proc /run /sys /sys/firmware/efi/efivars ; do mount --bind $i /mnt$i; done
chroot /mnt
Now we can run the tool - it will take care of reading the proper UUID itself; the clean
command then removes the stale entries remembered from the original system - off which this backup came.
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool clean
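While still inside the chroot, the tool can also report back on the ESP it has just initialized:

# show which ESPs proxmox-boot-tool is now keeping in sync
proxmox-boot-tool status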
We can exit the chroot environment and unmount the binds:
exit
for i in /dev /proc /run /sys/firmware/efi/efivars /sys ; do umount /mnt$i; done
Whatever else
We almost forgot that we wanted this new system to come up with a new SWAP. We had it prepared; we only need to get it activated at boot time. It just needs to be referenced in /etc/fstab,
24 but we are out of the chroot already - never mind, we do not need it for appending a line to a single config file; /mnt/etc/
is where the target system’s /etc directory resides now:
cat >> /mnt/etc/fstab <<< "PARTUUID=5c1f22ad-ef9a-441b-8efb-5411779a8f4a none swap sw 0 0"
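A quick look at the last line of the file confirms the entry, in the usual fstab field order of device, mountpoint, type, options, dump and pass:

# the freshly appended swap entry should be the last line
tail -n1 /mnt/etc/fstab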
Note
We use the PARTUUID of the swap partition that we took note of above.
Done
And we are done - export the pool and reboot
or poweroff
as needed: 25
zpool export rpool
poweroff -f
Happy booting into your newly restored system - from a tar
archive, no special tooling needed. Restorable onto any target, of any size, with any bootloader and whichever new partitioning you like.
https://cdimage.debian.org/debian-cd/current-live/amd64/iso-hybrid/ ↩︎
https://www.proxmox.com/en/downloads/proxmox-virtual-environment/iso ↩︎
https://manpages.debian.org/bookworm/util-linux/blkid.8.en.html ↩︎
https://manpages.debian.org/bookworm/util-linux/lsblk.8.en.html ↩︎
https://manpages.debian.org/bookworm/gdisk/sgdisk.8.en.html ↩︎
https://manpages.debian.org/bookworm/gdisk/gdisk.8.en.html ↩︎
https://manpages.debian.org/bookworm/util-linux/wipefs.8.en.html ↩︎
https://github.com/proxmox/pve-installer/blob/e66fb5bbdf8a43d1bb9068f69ec78c3f68bc2017/Proxmox/Install.pm#L206 ↩︎
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-create.8.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/8/zfs-create.8.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/8/zfs-set.8.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops.7.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/7/zpoolprops.7.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-status.8.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/8/zfs-list.8.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-export.8.html ↩︎
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-import.8.html ↩︎
https://manpages.debian.org/bookworm/sshfs/sshfs.1.en.html ↩︎
https://pve.proxmox.com/wiki/Host_Bootloader#sysboot_proxmox_boot_tool ↩︎
https://manpages.debian.org/bookworm/coreutils/chroot.8.en.html ↩︎
https://manpages.debian.org/bookworm/mount/mount.8.en.html#Bind_mount_operation ↩︎
https://manpages.debian.org/bookworm/mount/fstab.5.en.html ↩︎
https://manpages.debian.org/bookworm/runit-init/reboot.8.en.html ↩︎