Hi there, hope I have the right category selected - new to the L1Techs forums.
I did attempt to search a little before posting this, and I hope the way it's worded makes sense - I've been fighting a migraine all day.
Getting to the point - I have an HP Z600 running Ubuntu 19.10 bare-metal on an SSD (/dev/sdc1), with various JBOD-sized block devices attached in about as MacGyver'd a fashion as could be.
To paint the picture of the setup: this tower only has slots for two drives. One slot holds an 8TB Seagate (/dev/sdb1) and the other a 2TB Seagate (/dev/sda1), and the SSD is free-floating in the chassis. The rest of the drives (another 2TB Seagate, a 1TB Toshiba, and a 1TB Western Digital - /dev/sdd1, /dev/sde1, and /dev/sdf1 respectively) are connected by SATA cables, most to the motherboard's on-board SATA controller and one to an old RAID card I threw in a PCI slot, with all the cables running out the back through a PCI slot opening into the reversed back of a spare chassis acting as a giant enclosure, everything just free-floating. Some of these drives have bad sectors, including the 8TB - gotta keep this interesting after all. Everything but the SSD (sdc1) is set up with LVM2: each drive partitioned with fdisk to the Linux LVM type, pvcreate'd, added to the volume group, and the logical volume and filesystem expanded across the entire array (a rough sketch of the commands is below). The result is a modest 14TB NAS/fileserver with zero redundancy.
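For reference, this is roughly how the pool was built up - a reconstruction from memory, not an exact history, and the logical volume name "lvpool" is just a placeholder:

# Each drive got a single partition of type "Linux LVM" (8e) via fdisk,
# then the partitions were turned into physical volumes:
sudo pvcreate /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Volume group created, then extended as drives were added over time:
sudo vgcreate vgpool /dev/sda1 /dev/sdb1
sudo vgextend vgpool /dev/sdd1 /dev/sde1 /dev/sdf1

# Logical volume grown across the free extents, then the ext4
# filesystem grown to match:
sudo lvcreate -n lvpool -l 100%FREE vgpool
sudo lvextend -l +100%FREE /dev/vgpool/lvpool
sudo resize2fs /dev/vgpool/lvpool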
I'm having a strange issue where the Disks app that's built into Ubuntu shows all block devices, including the array, and reports the array as only 11.5% full, as it should be (sadly lost 7.4TB of data recently). But when I run pvs as sudo, it reports that 4 out of 5 drives have zero free space on them, and I don't know why.
Below is the output of sudo pvs:
PV VG Fmt Attr PSize PFree
/dev/sda1 vgpool lvm2 a-- <1.82t 0
/dev/sdb1 vgpool lvm2 a-- <7.28t 0
/dev/sdd1 vgpool lvm2 a-- <1.82t 0
/dev/sde1 vgpool lvm2 a-- <931.51g 0
/dev/sdf1 vgpool lvm2 a-- <931.51g <36.27g
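I haven't dug much deeper than pvs yet, but for completeness these are the other standard LVM reporting commands I was planning to compare against (the -o column names are to the best of my understanding):

sudo pvs -o +pv_used          # used vs. free space per physical volume
sudo vgs vgpool               # volume-group-level size/free summary
sudo lvs -o +devices vgpool   # which PVs each logical volume's extents sit on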
Just as a side comment: ideally I would like to migrate away from LVM to ZFS, turn the Ubuntu install on the SSD into a VM, and swap everything over to virtualization, preferably without data loss. Either I can manage that task on my own or I'll post it in a separate topic; I just thought I'd mention it for relevance's sake (a very rough sketch of what I'm imagining is below). I do want to solve this lack-of-free-space issue first, though, so if anyone knows what could be going on, or has any ideas or suggestions, I'm all ears. It should also be noted that all drives were pre-formatted to ext4 before being added to the pool.
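For context, this is the general shape of the eventual ZFS plan - pool and dataset names are placeholders, raidz1 is just one possible layout given the mismatched drive sizes, and I haven't run any of this yet:

sudo apt install zfsutils-linux

# After evacuating the data somewhere safe and tearing down the LVM layout,
# build a pool from the raw disks and a dataset to hold the files:
sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf
sudo zfs create tank/storage

# Then copy the data back in (rsync, or zfs send/recv from a temporary pool).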
Thanks in advance for any and all!