Hi everyone,
I am building a new Proxmox VE server, and I’m looking for some input on the ideal setup for my hardware and use-case. The system will host some Windows guests that will be used for remote workstations.
I have a Supermicro server with 2x Xeon Gold 6346 3.1GHz CPUs and 768GB of ECC RAM.
The server’s got 2x 480GB NVMe SSDs that I would like to use for the host OS, and 10x 2TB NVMe SSDs for data. There’s no RAID controller and the disks are presented directly to the OS (nvme0n1 through nvme11n1, with nvme0n1 and nvme1n1 being the two 480GB SSDs).
I would like redundancy on the two OS drives, of course — ideally set up so that if one fails, the host still boots without any fiddling, and I can swap in a replacement and resilver/rebuild while the system keeps running. The OS doesn’t need to be on ZFS, but if that’s what works best here, I’m happy to use it.
The remaining 10x 2TB NVMe drives should form the “data” pool that will store my VM disks, ISOs, and the like. I am thinking of RAIDZ2 with 1 or 2 hot spares. From my reading, it isn’t clear whether I would benefit from a special vdev or a SLOG (separate ZIL device), since everything will be on fast SSDs anyway.
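To make that concrete, here is roughly the data-pool layout I have in mind — just a sketch, not something I’ve run yet. The pool name “tank” and the 8-wide RAIDZ2 + 2 spares split are only my current guess, and the device names are the ones from my system (in practice I’d probably use /dev/disk/by-id paths instead):

```shell
# Sketch: 8x 2TB in RAIDZ2, plus 2 hot spares
# (nvme2n1..nvme11n1 are the ten 2TB data drives)
zpool create tank raidz2 \
    /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 \
    /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1 \
    spare /dev/nvme10n1 /dev/nvme11n1
```

If a special or SLOG vdev does make sense, I assume one or both of the spare slots would go to that instead.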
I understand (I think) that hot spares are used by any(?) pool/vdev that experiences a device failure. So if the two 480GB OS drives form one vdev, and the 10x 2TB drives are split into 1 or 2 spares, possibly some special/SLOG devices (if appropriate), and the rest as data, would a spare 2TB device kick in if one of the host/OS drives failed?
Are there any Proxmox and/or ZFS experts here that could recommend a good drive layout?
I really appreciate any input you can offer.