6 x 2 TB drive ZFS pool only 2.5 TB in size?

Hey guys,

So I recently reinstalled my Proxmox server after the 5.1 to 5.2 update royally fucked up ZFS permissions (to the point where I couldn’t even reboot the server from the command line). I didn’t have the time or patience to troubleshoot it, so I decided to just reinstall rather than keep hassling with it.

So now I’m trying to recreate the main raidz pool that will hold all the containers and VMs. I have 6 x 2 TB Hitachi Ultrastar drives set up with drive pass-through on this T610. The OS is installed on 2 mirrored SAS drives. All of the Hitachi drives report healthy SMART data. The drives are sdb, sdc, sde, sdf, sdg, sdh.
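In case it matters, a quick way to sanity-check that each drive actually presents its full 2 TB to the kernel (assuming lsblk is available, which it should be on any Proxmox install):

    # show raw size in bytes and the drive model for each of the six disks
    lsblk -b -o NAME,SIZE,MODEL /dev/sdb /dev/sdc /dev/sde /dev/sdf /dev/sdg /dev/sdh

Each SIZE should come back around 2,000,000,000,000 bytes.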

If I run sudo zpool create -f -m /data data raidz sdb sdc sde sdf sdg sdh, it completes without errors, but zpool list shows the pool is only 2.45 TB in size. Since zpool list reports raw capacity including parity, a raidz vdev of six 2 TB disks should show roughly 10.9 TiB, so something is clearly off. I’ve destroyed and recreated the pool, and even manually deleted all partitions on the drives before remaking it, with the same result.
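If anyone wants to see the same numbers I’m seeing, this is how I’m checking (nothing exotic, just the standard tools):

    # raw capacity per vdev/disk, parity included; raidz of six 2 TB disks should be ~10.9T
    sudo zpool list -v data
    # usable space after parity, as the filesystem sees it
    sudo zfs list data
    # confirm all six disks actually made it into the vdev
    sudo zpool status data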

Any ideas on what I’m doing wrong?

Are you not running the pool on the host?

Yes, I’m configuring the ZFS pool on the host. I just meant that the LSI RAID controller has been flashed to IT mode to pass the disks straight through, instead of its usual fuckery of presenting even individual disks as single-disk RAID arrays, which ZFS hates.
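For anyone wanting to verify that kind of pass-through, a quick check (assuming smartmontools is installed; on a proper IT-mode HBA you should see the real drive identity rather than a controller’s virtual disk):

    # should print the actual Hitachi model and serial, not a RAID logical volume
    sudo smartctl -i /dev/sdb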

Beginning to question why 5.2 was released as Proxmox stable. This is the worst update I’ve dealt with. I tried to restore a 10 GB KVM disk image from backup; it nearly stalled at 27%, and after 15 minutes it was only at 52%, so I killed it. CPU IO delay stayed above 15% during the entire restore. I tried limiting the bandwidth by editing vzdump.conf, but that did nothing. Then I tried restoring to a different volume, and now the whole server is unresponsive via the web GUI, which is the only way I can work with the server while I’m at work.
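For reference, this is the sort of edit I made to /etc/vzdump.conf (the value is in KiB/s; 51200 here is just an example for roughly 50 MB/s). In hindsight that setting may only throttle backups rather than restores, which could be why it had no effect:

    # /etc/vzdump.conf
    # cap backup I/O bandwidth, in KiB/s
    bwlimit: 51200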

Think I might just reinstall 5.1 at this point; I’ve played with this release for less than a day and I’ve already broken it.