So, everything is going better than I expected, honestly. I REALLY lucked out with the timing on this new server build. I think I went over this in my last post, but for anyone who missed it: right after I created a new VM for network stuff, moved my configuration over to it, and moved the Minecraft server over, the old machine stopped booting.
So I’ve since moved the old zpool to the new server (just enough ports on my board for both pools' drives plus a couple of SSDs :DDDD)
But now I have no fucking clue what to do to get these images over.
Here are the snapshots that I can see
I only really give a shit about vm-120. If I try doing a cp or dd of any of the vm-120 disk parts to the new /dev/zvol/rpool/data, it tells me there's not enough space, even though I should have about 13 TB free.
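For reference, this is roughly what I was attempting (exact paths are from memory, so treat them as approximate):

# roughly what I tried; source and target paths are from memory
dd if=/dev/zvol/pleasework/data/vm-120-disk-0 of=/dev/zvol/rpool/data/vm-120-disk-0 bs=1M status=progress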
Looking at lsblk, I believe I see why: the zvols (at least, what I think are zvols, the devices labeled zdXX) are only a few dozen gigabytes each. Proxmox reports the two zpools, pleasework and rpool, but there are also two storage devices listed in Proxmox that come from rpool.
All of that said, I'm thinking it's a matter of creating a new zvol for the old images on the new pool, but I'm not sure if there's some chain of send | receive commands to do the transfer, and I have no idea what a "stream" actually is.
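From what I've read so far, the "stream" is just the serialized copy of a snapshot that zfs send writes to stdout, so it can be piped straight into zfs receive, which creates the target for you. Something like this is what I'm imagining, though the dataset names are my guesses at what's actually on the old pool:

zfs snapshot pleasework/data/vm-120-disk-0@migrate   # point-in-time copy to send
zfs send pleasework/data/vm-120-disk-0@migrate | zfs receive rpool/data/vm-120-disk-0   # receive creates the new zvol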
The old pool scanned successfully and was upgraded without issue. No errors reported anywhere, so I'm not too worried about anything being busted.
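For completeness, this is more or less the import-and-check sequence I ran on the new box (from memory):

zpool import pleasework    # bring the old pool in on the new server
zpool status pleasework    # confirm the scan came back clean
zpool upgrade pleasework   # enable the newer feature flags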
I haven't messed with zvols, but I thought they can't have child datasets the way a normal ZFS dataset on a pool can.
You aren't, by any chance, trying to send a zvol to be received into an existing zvol, rather than into the pool?
Seems like I might be able to import the disks into a new VM, but honestly I'm not sure if the files I showed before are zvols or the images themselves. AFAIK if they were the images they should have a .raw extension, but I can see part files instead.
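I figure I can check by listing volumes explicitly, something like (pool name as I have it):

zfs list -t volume -r pleasework   # zvols show up here; plain .raw file images wouldn't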
For research, have you checked out the below?
It basically says: make a backup first if you can, then create a new VM with a different name, then copy the vm-120 disks (or vm-101 in that user's case) over to the brand-new VM.
So yeah, like you just suggested, create and migrate looks like the way to go.
For more reference.
So it seems these are volumes (they show up with zfs list -t volume), so it's a matter of duplicating them or making QEMU able to use the volume.
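If the copy ends up on rpool with the right naming, I gather Proxmox can attach it directly, something like the below (the VM ID and the storage name local-zfs are my assumptions):

qm set 120 --scsi0 local-zfs:vm-120-disk-0   # attach the existing volume to VM 120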
If you have an auto-snapshotting app installed, when you do zfs list -t snap -r it gives quite a long list, but ZFS snapshots are practically free, so the more the merrier.
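If the list gets unwieldy you can narrow it down, e.g. (pool and disk names assumed):

zfs list -t snapshot -r pleasework | grep vm-120   # just the snapshots for the disk you care about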
I remember reading something about zvols having, like, a dev mode or a share mode, where one is mountable by the host and the other isn't (guest-only), but I can't find where I read it.
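I think (not certain) the property I'm half-remembering is volmode; you could check it with something like:

zfs get volmode pleasework/data/vm-120-disk-0   # 'full'/'dev' expose a device node on the host; 'none' hides it entirely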
I find the lack of command-line feedback annoying with Linux in general, so I'd suggest that, if you can install it from the repos, you check out a program called pv.
Then if you send ZFS datasets, even over ssh, you can pipe the stream through pv, and it'll give a running count.
If you know how large the dataset is, you can give pv that info and it'll give a progress bar.
so like zfs send oldpool/olddataset@snap | pv | ssh user@host -p 1111 zfs receive newpool/newset
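If you want the proper progress bar, you can ask zfs for the stream size first and hand it to pv, something like (same made-up names as above, and the 2T is just your reported size):

zfs send -nv oldpool/olddataset@snap   # dry run; prints the estimated stream size
zfs send oldpool/olddataset@snap | pv -s 2T | ssh user@host -p 1111 zfs receive newpool/newset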
which doesn’t really help you, sorry.
Good luck with it though, and let us know any outputs, or responses from the Proxmox forum.
Yeah, I saw it's listed at about 2T, and the Linux philosophy of "if it's all working, we don't show anything" doesn't really help when you're already a bit stressed!
Eyyyy, it shows on the new pool. If nothing else, I should be able to finally be done with the rat's nest of PSUs and SATA cables after this and get it attached to the new VM.