Migrate bare-metal Ubuntu install to Proxmox

I have a current server with a couple of HDDs on btrfs with LUKS and a pair of SSDs, also btrfs with LUKS.
It's a bare-metal Ubuntu install that I have been running for years now.
The boot drive is also encrypted with LUKS and needs my password at boot. (Thanks to PiKVM that's easy.)

Now I am building a new rig and this time I want to go the Proxmox route.
The old hardware shall be repurposed after the migration.
How would you guys go about the migration?

What storage layout would you recommend for Proxmox?
A Proxmox drive and a separate data drive?

Completely new to Proxmox and would like to hear some experiences from your side.

Approaches depend on what you want to do or gain from using Proxmox, and how much you want to do at once. Here are some ideas to look into.

If you have enough new disk space you could virtualise the entire existing server (you'll have a VNC console provided by Proxmox for entering the LUKS passphrase).

You could rebuild all your services into different VMs.

You could start with the first and gradually move things out from the then-virtualised old system, or build the new platforms now and migrate into them directly from the old system.

I'd use a Proxmox boot drive with additional storage (which may even be another storage server) to host the VMs themselves, though not necessarily the attached bulk storage. The question here is whether you want Proxmox to manage the bulk storage, or one of the VMs to manage its own.

If you plan to run a virtualised file server that is given the existing disks, you should pass through the storage not disk by disk but by passing through an entire HBA (which means, for example, that the virtualised Ubuntu keeps managing its same old disks). Proxmox won't back up those disks, however.
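As a rough sketch of what the HBA route looks like on the Proxmox side (the VM ID and PCI address below are only placeholders, and IOMMU/VT-d has to be enabled in the BIOS and on the kernel command line first):

  # on the proxmox host: find the HBA and the driver currently bound to it
  lspci -nnk | grep -i -A 3 sas

  # hand the whole controller to the VM (VM 100 and 0000:03:00.0 are examples)
  qm set 100 --hostpci0 0000:03:00.0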

The alternative is that every disk is a virtual disk, which Proxmox provides to the VM and which lives on storage Proxmox manages.
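In that model a VM disk is just allocated on whatever storage Proxmox knows about, for example:

  # VM ID 100, the storage name "local-zfs" and the 100 GiB size are placeholders
  qm set 100 --scsi1 local-zfs:100

Disks created that way are covered by Proxmox snapshots and backups, unlike disks sitting behind a passed-through controller.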

There are pros and cons but each could be appropriate depending on the storage you have in new and old.

I'd suggest making an exploratory build of what you want to do before doing anything that removes data from your old server.

My goal would be to move the current Ubuntu server into a VM.
That would also include the disk setup for now:

  • 1x 128 GB boot disk
  • 4x 12 TB HDD, btrfs RAID 1
  • 2x 500 GB SSD, btrfs RAID 1

Build my new rig with:

  • 2x 2 TB drives
  • some spare disks for the Proxmox install

My idea is to pass all disks directly to a VM so I can boot my old Ubuntu server from them.
Then I hope I can attach a volume from the 2x 2 TB Proxmox ZFS data pool to the VM.
That way I can copy the data from my old SSD btrfs RAID to that volume, which I hopefully can mount; roughly as sketched below.
I can then go forward and create a fresh new Ubuntu VM and start migrating services. Basically 99% Docker Compose files and the Samba config.
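Inside the VM I imagine the copy step looking roughly like this (device names and mount points are just placeholders, nothing tested yet):

  # the new volume from the proxmox zfs pool shows up as e.g. /dev/vdb (check with lsblk)
  mkfs.ext4 /dev/vdb
  mkdir -p /mnt/newdata
  mount /dev/vdb /mnt/newdata

  # copy everything off the old SSD btrfs pool, preserving attributes
  rsync -aHAX --info=progress2 /mnt/ssd-pool/ /mnt/newdata/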

Is that a viable approach?
Currently waiting for the last parts, so I haven't played with Proxmox yet.

I already use an 8-port HBA, so passing it through should not be an issue.
However, I would like to understand what the benefit of passing through the HBA vs. passing through a disk is.
Would you elaborate on that a little more?

I am not looking into clustering Proxmox, even though I might end up with a second host.

I would love to use the HDDs for bulk storage, but I do not like ZFS's inflexibility regarding disk sizes when you want to expand.
Therefore the 4x 12 TB shall remain as is for now.
I am thinking of moving from btrfs RAID 1 to mergerfs with SnapRAID, but that can be done later in the VM (rough sketch after this paragraph). I should have enough space and some old 4 TB disks lying around which are still good.
The data on these is not critical at all.
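For reference, the mergerfs + SnapRAID layout I have in mind would look roughly like this (mount points, disk names and the parity disk are all placeholders, nothing final):

  # /etc/fstab: pool the data disks into one mount point
  /mnt/disk* /mnt/storage fuse.mergerfs cache.files=off,category.create=mfs,moveonenospc=true,minfreespace=100G 0 0

  # /etc/snapraid.conf (example layout, adjust paths and disk count)
  parity /mnt/parity1/snapraid.parity
  content /var/snapraid.content
  content /mnt/disk1/.snapraid.content
  data d1 /mnt/disk1
  data d2 /mnt/disk2

plus a periodic snapraid sync/scrub from cron.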

If all 7 disks including boot are hanging from the passed-through HBA, it might not be completely plug and play, but I don't see any showstopper, except that you may have to create a way to reach, or create, an EFI partition/disk that can start the OS boot. I'm not sure that you can tell it to boot from a drive on a passed-through controller (I have never tried).
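If it does come to that, giving the VM OVMF firmware plus a small EFI disk on Proxmox storage is one command:

  # VM ID 100 and the storage name "local-zfs" are placeholders
  qm set 100 --bios ovmf --efidisk0 local-zfs:1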

Worst case, you image the current boot/OS disk to a disk image that Proxmox passes into the VM as a virtio disk to boot from, and maybe have to fix some disk lookups in the configuration afterwards (but I'd imagine it would find them; you likely have UUID definitions, which won't change).
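That worst case would look something like this (device, VM ID and storage name are placeholders; take the image with the old system offline or from a live environment so it's consistent):

  # on the old server: raw image of the 128 GB boot disk (replace sdX)
  dd if=/dev/sdX of=/mnt/scratch/boot.img bs=4M status=progress

  # on the proxmox host: convert it and import it as a disk of VM 100
  qemu-img convert -O qcow2 boot.img boot.qcow2
  qm importdisk 100 boot.qcow2 local-zfs

It lands as an "unused" disk on the VM, which you then attach as scsi/virtio and put first in the boot order.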

I'd probably recommend this anyway (since you don't have to take your other server offline, except while taking the dd image*, until its clone is booting and ready to have the drives & HBA transplanted). Or you could just build the new, 'clean' one, if it's not much configuration.

https://www.truenas.com/community/threads/virtualized-truenas-scale-with-passed-through-physical-disks-no-hba-is-it-possible.101759/

This is one reference to ill results from passing through only disks, and I don't see any reason to discount it. The reason I can foresee for passing through a whole HBA is that the host OS can never query a disk it can't see at all at the same time as a virtualised OS that assumes it has full control of it (on which subject, remember to blacklist the driver module on the Proxmox host, quite possibly mpt3sas; you'll have to run lspci -k to confirm that, as below).
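The blacklisting would be something along these lines on the Proxmox host (the PCI address is a placeholder, and only do it if lspci really shows mpt3sas as the driver in use):

  lspci -nnk -s 03:00.0          # check the "Kernel driver in use" line
  echo "blacklist mpt3sas" > /etc/modprobe.d/blacklist-mpt3sas.conf
  update-initramfs -u -k all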

* refer to qemu-img convert to get a qcow2 out of a dd image