Testing directly on the disk, I'm able to achieve reasonable numbers, not far from the spec sheet => 400-650k IOPS.
Testing on a zvol is almost always limited to about 150-170k IOPS, and it doesn't matter what CPU or disks I'm using.
Is there a bottleneck in ZFS … or am I doing something wrong?
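For reference, a random-read test along these lines could be run with fio — an assumption, since the thread doesn't say which tool was used; the device paths and pool/zvol names are placeholders:

```shell
# Hypothetical 4k random-read benchmark -- the thread doesn't state the
# exact invocation, so this is a sketch. Adjust paths to your setup.

# Raw disk (bypasses ZFS entirely):
fio --name=rawdisk --filename=/dev/nvme0n1 --direct=1 --rw=randread \
    --bs=4k --ioengine=libaio --iodepth=32 --numjobs=8 --runtime=60 \
    --group_reporting

# Same test against a zvol:
fio --name=zvol --filename=/dev/zvol/tank/testvol --direct=1 --rw=randread \
    --bs=4k --ioengine=libaio --iodepth=32 --numjobs=8 --runtime=60 \
    --group_reporting
```

Running the identical job against the raw device and the zvol is what makes the two IOPS figures directly comparable.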
Turning off the L2ARC could be a way to get more honest IOPS numbers, but the ARC?! That's an integral part of ZFS - if you screw with it, performance will go down.
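If the goal is to measure the disks rather than the cache, caching can be narrowed per-dataset instead of disabled globally — a sketch using standard ZFS properties, with "tank/testvol" as a placeholder name:

```shell
# Cache only metadata for this zvol, so data reads hit the disks, not the ARC.
zfs set primarycache=metadata tank/testvol

# Likewise for the L2ARC, if one is attached:
zfs set secondarycache=none tank/testvol

# Verify the settings took effect:
zfs get primarycache,secondarycache tank/testvol
```

This keeps the ARC intact for everything else in the pool, which avoids the "screwing with the ARC" problem for other workloads.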
I'm not sure you're going to get ZFS to perform well on a single disk.
You're really intended to have disks for storage and separate disks for the ZIL and L2ARC.
I don’t have any lab kit that I can test this on to compare right now, but my gut is telling me I would see similar results if I used a single disk like that.
@igoodman it's supposed to run KVM, so there's no point in the ARC — and it still doesn't explain this bottleneck.
@cybersplice I've tried with the same disks in a mirror: mdraid scales up to about 1.1M IOPS, ZFS is stuck at ~150k IOPS.
The only performance difference I'm seeing is when testing on desktop hardware with an i7-8700K and a ramdisk — there I can reach about 260k IOPS, but ZFS eats all the CPU.
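One thing worth ruling out before blaming ZFS itself is a zvol block-size mismatch: if the benchmark does 4k I/O against the default volblocksize (8k, or 16k on newer OpenZFS), every write becomes a read-modify-write. A sketch, with placeholder names and sizes:

```shell
# volblocksize can only be set at creation time; match it to the
# benchmark I/O size ("tank/testvol" and the size are placeholders).
zfs create -V 10G -o volblocksize=4k tank/testvol

# For a synthetic test ONLY: take sync-write latency out of the picture.
# Do not leave this set on production VM storage -- it risks data loss
# on power failure.
zfs set sync=disabled tank/testvol
```

If IOPS jump after aligning volblocksize, the ceiling was write amplification rather than a fundamental ZFS limit.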
Unless you want to buy some more nodes and use Ceph…!
It will perform better with caching. You can probably get away without a ZIL, but it will be faster with one. I appreciate that's inconvenient when you've bought fast SSDs for storage.
Like I say, it just isn’t designed to work without it.
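For completeness, adding the dedicated log and cache devices described above to an existing pool looks like this — pool and device names are placeholders:

```shell
# Mirrored SLOG (puts the ZIL on dedicated devices):
zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1

# L2ARC read cache device:
zpool add tank cache /dev/nvme3n1

# Check the new vdev layout:
zpool status tank
```

Note the SLOG only helps synchronous writes (which KVM on zvols typically issues), so it's relevant to the VM use case even if async benchmarks don't show it.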
And yeah you don’t get a lot of freedom with proxmox, I guess it makes sense considering their business model and target market.
I guess you could roll your own if you have nothing better to do…
Alternatively, xcp-ng uses mdraid, I believe, and works pretty nicely. Not as nicely as Proxmox in the UI department, but I'd expect it to do what you want.