Poor speeds with LSI SAS controller

1 GB of RAM per TB of storage is the usual guideline, but it is largely workload dependent. You most likely won't need to exceed that and may be able to get away with less.

Replication in ZFS takes a lot of the headache out of backups, but of course you would need a separate backup system on the receiving end.
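The basic send/receive flow looks something like this; the pool, dataset, and host names here are just placeholders:

    # snapshot the dataset, then push the snapshot to the backup machine
    zfs snapshot tank/data@2025-06-01
    zfs send tank/data@2025-06-01 | ssh backupbox zfs receive -F backup/data

    # later runs only have to send the changes since the previous snapshot
    zfs snapshot tank/data@2025-07-01
    zfs send -i tank/data@2025-06-01 tank/data@2025-07-01 | ssh backupbox zfs receive backup/data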

Are you just running some qemu/kvm machines on stock Debian?

All my VMs are qemu/KVM, yes.
Most guests are Debian; one is Windows 10.
They use their NVMe devices directly via vfio-pci passthrough.
They currently reach the network through a simple virtio bridge device. That will change to SR-IOV eventually; I just haven't gotten around to it, and the simple bridge keeps up with my current demands. All of the heavy networking is usually between the VMs and the host machine.
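Roughly what that looks like on the QEMU command line; the PCI address, bridge name, and sizes below are placeholders rather than my actual values:

    # guest with an NVMe drive passed through via vfio-pci and a bridged virtio NIC
    # assumes the drive is already bound to vfio-pci and qemu-bridge-helper can use br0
    qemu-system-x86_64 -enable-kvm -machine q35 -cpu host -smp 4 -m 8G \
        -device vfio-pci,host=0000:03:00.0 \
        -netdev bridge,id=net0,br=br0 \
        -device virtio-net-pci,netdev=net0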


I’d look into Proxmox. It is Debian-based, can use ZFS out of the box, and is qemu/kvm under the hood.

ZFS is happy with 8 GB for most things. There are some formulas people throw around, but those are pretty much FreeNAS FUD that isn't true for basic file storage. An enterprise-level application with a ton of users and vast amounts of tiny files is another discussion entirely.
Some people use 4 GB or even 2 GB and seem fine, apart from the occasional report where things go south.
ZFS will basically try to use 8 GB, and if you have more available, it'll use around half your RAM. What it's doing is using the excess RAM as an ARC cache, which is pretty nice. You can set it lower if this is an issue. For the most part it should free up RAM when other things need it, though I've seen mixed reports on this.
For your target TB size, I'd personally consider 16 GB plenty for ZFS's ARC cache to be most useful.
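If you do want to cap it, the knob on Linux/OpenZFS is the zfs_arc_max module parameter; a 16 GiB cap is shown here just as an example:

    # see how big the ARC currently is and what it's capped at
    awk '$1 == "size" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats

    # cap the ARC at 16 GiB across reboots (on Debian, rebuild the initramfs afterwards)
    echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
    update-initramfs -u

    # or change it on the fly
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max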

DO
NOT
EVER
ENABLE
DEDUPLICATION

Compression yes, but never ever deduplication. It's one of those things that sounds interesting/good, but it will fuck you up because it's meant for a special enterprise workload that you don't have. Dedup is when ZFS genuinely NEEDS RAM, and it will shit the bed if you don't have enough.
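Enabling compression is one command, and dedup is off unless you explicitly turn it on, so it's easy to verify; the pool name here is just an example:

    # lz4 compression on the top-level dataset; child datasets inherit it
    zfs set compression=lz4 tank

    # sanity check that dedup is still off (the default)
    zfs get dedup tank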
