More ZFS happy fun times


So, everything is going better than I expected, honestly. I REALLY lucked out with the timing on this new server build. I think I went over this in my last post, but for anyone that missed it: as soon as I had created a new VM for network stuff, moved my configuration over to it, and moved the Minecraft server over, the old machine would not boot.

So I’ve since moved the old zpool to the new server (Just enough ports on my board for both and a couple of SSDs :DDDD)

But now I have no fucking clue what to do to get these images over.
Here are the snapshots that I can see


So in the old pool located at /dev/zvol/pleasework/data I can see these images

I only really give a shit about vm-120. If I try doing a cp or dd of any of the vm-120 disk parts to the new /dev/zvol/rpool/data it tells me that there’s not enough space, even though I should have about 13 TB free.

Looking at lsblk I believe I see why: the zvols (at least, what I think are zvols, the ones labeled zdXX) are only a few dozen gigabytes each, even though proxmox reports the two zpools, pleasework and rpool. But there are two storage devices listed in proxmox that come from rpool.

All of that said, I’m thinking it’s a matter of creating a new zvol on the new pool for the old images, but then I’m not sure what the chain of send | receive commands to transfer them actually looks like, and I have no idea what a stream actually is.
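Something like this is the shape of what I’m imagining, if anyone can sanity-check it (a rough sketch only: the @move snapshot name is made up, and I’m guessing the zvols follow proxmox’s usual vm-120-disk-0 naming under pleasework/data and rpool/data):

  # snapshot the old zvol first; the snapshot is what actually gets sent
  zfs snapshot pleasework/data/vm-120-disk-0@move

  # pipe the stream straight into a brand-new zvol on the new pool
  zfs send pleasework/data/vm-120-disk-0@move | zfs receive rpool/data/vm-120-disk-0

From what I can tell the “stream” is just the serialized snapshot going down the pipe, and zfs receive creates the target zvol itself, so I shouldn’t need to pre-create one.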

The old pool scanned successfully and was upgraded without issue, no errors reported anywhere, so I’m not too worried about anything being busted.


If the VM is configured … Shouldn’t proxmox have the ability to migrate vm storage from one pool to the other?


You’d think…maybe I can just run qm migrate?
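Or maybe qm move_disk is the one I actually want, since qm migrate seems to be for moving a VM between nodes? Something like the below, maybe (the scsi0 slot and the storage ID are guesses, and I assume it needs the VM’s config to still exist on this node):

  # move vm-120's disk onto the storage backed by the new pool
  # (there's also a --delete option to drop the source copy afterwards)
  qm move_disk 120 scsi0 <new-storage-id>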


Are both zpools configured as vm image storage?


For ISO images just the new pool. Both pools are showing as capable of disk images though.


I haven’t messed with zvols, but I thought they can’t have child datasets the way a normal ZFS dataset on a pool can.
You aren’t by any chance trying to send a zvol to be received into an existing zvol, rather than into the pool?
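i.e. a full receive generally wants a target name that doesn’t exist yet under the destination, so it’s worth checking before sending, something like:

  # should report that the dataset does not exist if the target name is free
  zfs list rpool/data/vm-120-disk-0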

Totally possible.

Seems like I might be able to import the disks to a new VM, but honestly I’m not sure if the files I showed before are zvols or the images themselves. Afaik if they were image files they should have a .raw extension, but I can see part files instead.


Then again, these are all the places where my vm-120 images could exist, according to find / | grep vm-120:
[screenshot: find output listing the vm-120 paths]


I guess I could create a VM and migrate the disk to the new VM? Maybe then I could at least see the data in the new VM.


For research, have you checked out the below?
It basically says: make a backup first if you can, then create a new vm with a different name, then copy the vm-120 disk (vm-101 in that user’s case) over the brand-new vm’s disk.

So yeah, like you just suggested, create and migrate looks like the way to go.

maybe try with a less important one first?
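And if the copy ends up being done at the block-device level rather than with zfs send, I’d guess it would look roughly like the below (the new VM ID 121 and the disk names are made up, and the new vm’s disk needs to be at least as big as the old one):

  # old zvol on the pleasework pool in, new zvol created by the fresh VM out
  dd if=/dev/zvol/pleasework/data/vm-120-disk-0 of=/dev/zvol/rpool/data/vm-121-disk-0 bs=1M status=progress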


[screenshot: zfs list -t volume output]
For more reference.
So it seems these are volumes after using -t volume, so it looks like it’s a matter of duplicating them or making qemu able to use the volume.
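For anyone else poking at this, it’s just zfs list with a type filter, e.g.:

  # list only zvols (block volumes), recursing through both pools
  zfs list -t volume -r pleasework rpool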

And no, but that seems like the same issue.


Okay, thanks, and yeah, looks like a similar issue.

If you have an auto-snapshotting app installed, when you do zfs list -t snap -r it gives quite a long list, but ZFS snapshots are practically free, so the more the merrier :slight_smile:
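e.g. (dataset and snapshot names below are only examples):

  # list every snapshot under the old pool
  zfs list -t snapshot -r pleasework

  # and taking another one by hand before the move costs basically nothing
  zfs snapshot pleasework/data/vm-120-disk-0@pre-move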

actually I only have one snapshot of the vm in question ;>_>

It’s okay I created another just now and it seems fine.

I’m trying it, wish me luck.


Good luck, keep us posted.

I remember reading something about zvols being in like a dev mode or a share mode, where one is mountable by the host and the other isn’t, so guest-only, but I can’t find where I read it.

Good luck though

I find the lack of command-line feedback annoying with Linux in general, so I’d suggest, if you can install it from the repos, checking out a program called pv.
Then if you send ZFS datasets, even over ssh, you can pipe them through pv and it’ll give a running count.
If you know how large the dataset is, you can give pv that info and it’ll show a progress bar.

So, like: zfs send oldpool/olddataset@snap | pv | ssh user@host -p 1111 zfs receive newpool/newset
which doesn’t really help you, sorry.
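For a local pool-to-pool move like yours it’d be more like the below, and if you pass -s with the size you know (2T here just as an example) pv can show a percentage and ETA instead of only a byte count (names carried over from the guesses above):

  zfs send pleasework/data/vm-120-disk-0@move | pv -s 2T | zfs receive rpool/data/vm-120-disk-0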

Good luck with it though, and let us know any outputs or responses from the proxmox forum.


It’s a big boy, still transferring. At least if the cpu usage is anything to go by.


Yeah, I saw it’s listed at about 2T, and the Linux philosophy of “if it’s all working, we don’t show anything” doesn’t really help when you’re already a bit stressed!


Yeah, I run that status=progress on all of my dd commands lol.


I can see why they did it that way: when stuff is scripted, the output is easier to parse programmatically if there’s less feedback.

But it would be handy if it was a little more verbose. Then again, I guess that’s why they’re GUI-fying as much as they can?


Eyyyy, it shows on the new pool. If nothing else, I should be able to finally be done with the rat’s nest of PSUs and SATA cables after this and get it attached to the new VM.
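The attach step afterwards should be something like this, if I’m reading the qm docs right (the VM ID, the scsi0 slot, and the local-zfs storage ID are all placeholders for whatever the new VM actually uses):

  # make proxmox rescan its storages and pick up the transferred volume as an unused disk
  qm rescan --vmid 120

  # then attach it
  qm set 120 --scsi0 local-zfs:vm-120-disk-0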
