LVM2 help on Ubuntu server box - no free space on most block devices in pool

Hi there, hope I have the right category selected - new to L1Tech’s Forums.

I did attempt to search a little before posting this, and I hope the wording makes sense - I've been fighting a migraine all day today.

Getting to the point - I have an HP Z600 running Ubuntu 19.10 on bare metal on an SSD (/dev/sdc1), with various JBOD-sized block devices attached in as MacGyver'd a fashion as could be.

To paint the picture of the setup: this tower only has slots for two drives. One slot contains an 8TB Seagate (/dev/sdb1) and the other a 2TB Seagate (/dev/sda1). The SSD is free-floating in the chassis, and the rest of the drives (another 2TB Seagate, a 1TB Toshiba, and a 1TB Western Digital - /dev/sdd1, /dev/sde1, and /dev/sdf1 respectively) are connected by SATA cables, some to the motherboard's on-board SATA controller and one to an old RAID card I threw in a PCI slot, with all the cables running out a back PCI slot into the back of a spare chassis acting as a giant enclosure, also free-floating. Some of these drives have bad sectors, including the 8TB - gotta keep this interesting after all. Everything but the SSD (sdc1) is set up with LVM2: all partitioned with fdisk to the Linux LVM type, all pvcreate'd and added to the volume group, and I have expanded the filesystem across the entire array, so I now have a modest 14TB NAS/fileserver with zero redundancy.
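Roughly, the sequence per added drive was along these lines (sketching from memory, so exact flags may differ, and "lvdata" is just a stand-in for whatever the logical volume is actually named):

sudo fdisk /dev/sdd                              # create one partition, type 8e (Linux LVM)
sudo pvcreate /dev/sdd1                          # initialize the partition as a physical volume
sudo vgextend vgpool /dev/sdd1                   # add it to the existing volume group (vgcreate for the very first drive)
sudo lvextend -l +100%FREE /dev/vgpool/lvdata    # grow the logical volume into the new space
sudo resize2fs /dev/vgpool/lvdata                # grow the ext4 filesystem to match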

I am having a strange issue where the Disks app that's built into Ubuntu shows all block devices, including the array, and reports the array as only 11.5% full, as it should be (sadly lost 7.4TB recently), yet when I run pvs as sudo, it reports that 4 out of 5 drives have zero free space on them, and I don't know why.

Below is the output of sudo pvs:

PV         VG     Fmt  Attr PSize    PFree
/dev/sda1  vgpool lvm2 a--  <1.82t         0
/dev/sdb1  vgpool lvm2 a--  <7.28t         0
/dev/sdd1  vgpool lvm2 a--  <1.82t         0
/dev/sde1  vgpool lvm2 a--  <931.51g       0
/dev/sdf1  vgpool lvm2 a--  <931.51g <36.27g

Just as a side comment - I would ideally like to migrate away from LVM to ZFS, turn the Ubuntu install on the SSD into a VM, and swap everything over to virtualization, preferably without data loss. Either I can manage that task on my own or I'll post about it in a separate topic - I just thought I'd mention it for relevance's sake. But I do want to solve this lack-of-free-space issue first, so I'd appreciate any ideas or suggestions as to what could be going on. All drives were also pre-formatted to ext4 prior to being added to the pool.

Thanks in advance for any and all!

The PFree column in pvs is the space that is not allocated to a logical volume. So if the volume group has that much space allocated to LV(s), then it is showing correctly. To see free space like the Disks app reports, the command is df.
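To see both views side by side, something like this works (the mount point /mnt/pool is just an example - substitute wherever the logical volume is mounted):

sudo pvs            # PFree = extents on each PV not yet allocated to any LV
sudo vgs vgpool     # VFree = unallocated space across the whole volume group
sudo lvs vgpool     # size of each logical volume
df -h /mnt/pool     # filesystem usage, which is what the Disks app shows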


Ah, gotcha - I had to re-read your reply a few hours later, once I was more with it and the coffee had a chance to work, since I first read your response first thing this morning.

So to clarify: PFree refers to the amount of space on each drive not already allocated to a logical volume in the volume group, and since 100% of those first 4 drives has been allocated, the result is 0, as in nothing is left to donate to the pool, which makes perfect sense. I was obviously reading the chart incorrectly - I read it as how much of the pool was split across the drives and what portion of the data was written on each block device, and of course that zero free had me thinking the drives were maxed out when they are not. Thanks a thousand-fold for your assistance, thexder1 - in your debt for the information.

While you or others are reading this, can I ask a question about how LVM works internally: is there any way for me to export the pool without data loss, so I can re-import it after changing out the OS on my SSD? As previously stated, I'd like to swap Ubuntu 19.10 out for Proxmox or XCP-ng or ESXi or something along those lines, have the hypervisor host the pool, and share it via NFS or CIFS or something to a VM running a proper server distro that would handle most reads/writes to the LVM pool. Is what I am proposing doable, or am I likely to lose all of my data again if I even try it? Should I instead shrink my LVM2 group, remove some of the drives from the pool, convert them to ZFS, make a pool on them, and move the data over from one pool to the other? I am under the impression that ZFS can be exported/imported and is better suited for NAS/file-vault duty, although my cousin-in-law thinks I'm wiser to remain with ext4 for higher-performing I/O. So my question, still relating to LVM: can it be exported and re-imported when switching distros, like ZFS can, or am I better off changing file systems first, based on your or anyone else's superior understanding of how LVM works? Is the LVM's table of contents stored on the block devices, or only in the existing OS itself, for example? TIA.

To export the pool you would have to unmount it, then run the following commands:
“vgchange -a n vgpool” - this will deactivate the pool
“vgexport vgpool” - this will export the pool so it can later be imported

Once you have the new OS running you can run “pvscan” to make sure it is aware of the pool, then “vgimport vgpool” to import it.
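Putting that together, the whole round trip would look roughly like this (a sketch only - "lvdata" and /mnt/pool are placeholders for the actual logical volume name and mount point):

# On the old OS
sudo umount /mnt/pool
sudo vgchange -a n vgpool
sudo vgexport vgpool

# On the new OS, with the lvm2 tools installed
sudo pvscan
sudo vgimport vgpool
sudo vgchange -a y vgpool
sudo mount /dev/vgpool/lvdata /mnt/pool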

The export/import functionality in LVM is very different from ZFS. In LVM it is for taking an existing volume group, moving the drives to another system or reinstalling the OS, and then importing the volume group's information so you don't lose your data. In ZFS that is more about making a copy of the pool that can be used for backups; in my experience with ZFS you don't need to do anything special to move the pool to another system when physically moving the drives or reinstalling the OS - once the ZFS module is loaded it scans and finds everything for you.
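For comparison, moving a ZFS pool between systems usually comes down to something like this ("tank" is just an example pool name):

# On the old system
sudo zpool export tank

# On the new system, with ZFS installed
sudo zpool import            # with no arguments, lists importable pools found on attached disks
sudo zpool import tank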

The I/O difference between the file systems will depend quite a bit on what you are using them for. A COW filesystem like ZFS does add quite a bit of overhead, but there are ways to mitigate most of it, and it enables other advantages, so I don't think it is as clear-cut as your cousin-in-law makes it sound.

As for the LVM metadata, that is stored on the drives and in the OS, but LVM does not automatically read and load it, and I think it is also tied to the system in some way. That is why you need to export and import when changing OS or moving the drives to another system. It is possible to recover without that, which usually involves rewriting the metadata (running pvcreate on the drives again), but that has always seemed risky to me, and vgexport/vgimport avoids quite a big headache that LVM can cause.
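If anyone does end up needing that recovery route, the standard procedure from the LVM man pages looks roughly like this (a sketch only - the <PV-UUID> placeholder has to come from the metadata backup under /etc/lvm, and this should not be run unless the metadata really is lost):

sudo vgcfgbackup vgpool                      # LVM also keeps this automatically in /etc/lvm/backup/vgpool
sudo pvcreate --uuid <PV-UUID> --restorefile /etc/lvm/backup/vgpool /dev/sda1
sudo vgcfgrestore vgpool
sudo vgchange -a y vgpool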
