New Proxmox Install with Filesystem Set to ZFS RAID 1 on 2x NVMe PCIe SSDs: Optimization Questions

(Apologies if I’ve not tagged this quite right. I did a search first and these tags seemed to be the ones best for Proxmox-related questions.)

Hello,

I hope everyone is doing well. I’m about to install Proxmox for the first time, and have some questions about optimizing the OS’ filesystem during installation.

I’m installing on 2x Sabrent Rocket 4.0 NVMe drives (PCIe 4.0, 500 GB each) in a PCIe 4.0 motherboard (Asrock Rack x570d4u-2L2T), in ZFS RAID 1 mode. I’m most concerned about redundancy and prolonging the life of the SSDs. I don’t want to eat through the write endurance too fast.

I’ll be running a TrueNAS VM for my mass storage, so the only filesystem I need to set up at installation is for the RAID 1 OS install.

I noticed there are some options available for configuration in the installer when selecting a ZFS RAID 1 filesystem, and I’m curious whether I should just leave them at their defaults, or whether I need to change them since I’m installing onto an SSD RAID array.

In particular, I have these options available:

  1. Swap size;
  2. Max root;
  3. Min free; and
  4. Max VZ

Is there a best practice for setting these when running off an SSD RAID 1? I don’t want to blow through the write endurance with default settings optimized for spinning HDDs, for instance.

To that end, are there any other settings I need to tweak for an SSD RAID install? At one point I saw a suggestion I did not completely understand about altering the journaling settings, but now I can’t find it.

Thanks for any advice. :slight_smile:

cc: @motorsense

Consider overprovisioning your drives; it’s the best way to make them more durable. You can also check the SMART values for the disks to see how much data has already been written.
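If it helps, here’s a minimal sketch of pulling those numbers out of smartmontools’ JSON output. It assumes smartmontools 7+ is installed, that you run it as root, and that the two drives show up as /dev/nvme0 and /dev/nvme1 (adjust the device names for your system); plain `smartctl -a /dev/nvme0` shows the same figures if you’d rather read them by eye:

```python
#!/usr/bin/env python3
# Sketch only: read "data units written" and "percentage used" from NVMe SMART
# via smartctl's JSON output. Assumes smartmontools >= 7 and root privileges.
import json
import subprocess

for dev in ("/dev/nvme0", "/dev/nvme1"):  # adjust device names for your system
    out = subprocess.run(["smartctl", "-j", "-a", dev],
                         capture_output=True, text=True).stdout
    health = json.loads(out)["nvme_smart_health_information_log"]
    # Per the NVMe spec, one "data unit" is 1000 * 512-byte sectors = 512,000 bytes.
    written_tb = health["data_units_written"] * 512_000 / 1e12
    print(f"{dev}: ~{written_tb:.2f} TB written, "
          f"{health['percentage_used']}% of rated endurance used")
```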


Oh, thanks! I’d totally forgotten about over-provisioning. :slight_smile:

Is 10 percent a good over-provisioning value for these drives?
Tom’s Hardware has a good review with all the specs.

NAND: Kioxia 96L TLC
Endurance (TBW): 850 TB
Controller/Interface: Phison E16/PCIe 4.0 x4
Capacity: 500 GB

Those are good values for 500 GB disks. The best benchmark is just running them and checking the data written during normal use for a few days; then divide the 850 TB by the daily write rate you see in SMART and you get a rough estimate of how long the drives will last.
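To put rough numbers on that, here’s a minimal back-of-the-envelope sketch; the observation figures below are placeholders, not measurements, so substitute the TB written you actually see over a few days of normal use:

```python
# Back-of-the-envelope endurance estimate. The observation figures are
# placeholders: substitute the TB written you actually observe via SMART.
RATED_TBW_TB = 850            # rated endurance of the 500 GB Rocket 4.0, in TB
tb_written_in_period = 0.12   # hypothetical: TB written over the observation window
days_in_period = 7            # hypothetical: length of the observation window

daily_rate_tb = tb_written_in_period / days_in_period
years_until_rated_tbw = RATED_TBW_TB / daily_rate_tb / 365

print(f"~{daily_rate_tb:.3f} TB/day -> rated TBW reached in ~{years_until_rated_tbw:.0f} years")
```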

I’ve got 2x NVMe running as special vdevs, but I’ll replace one drive in half a year or so to make it the boot drive for an upcoming new machine and put a new one in its place. That way I end up with two drives with different amounts of data written, and the likelihood of losing two identical drives in quick succession due to identical TBW (which is what you get in a RAID 1/mirror) should be greatly reduced.

My drives have far less endurance while also being 1 TB, but according to SMART they’re still going to last a couple of years before they hit the manufacturer’s total TBW. And I have other SSDs that are well beyond that value and still healthy and running.

I personally wouldn’t be concerned about those drives unless you write more than 1-1.5 TB a day on average (even at 1.5 TB/day it would take roughly a year and a half to burn through the 850 TB rating); that’s heavy-duty work for 500 GB drives, and they’re not SLOGs.


Thanks for taking the time to write that out; it’s most reassuring.

I’m running into two problems:

  1. Most install tutorials I find that go into detail about setting up the boot disk(s) assume it’s being installed on mechanical HDDs. I’m still trying to figure out what settings I need to change in Proxmox to avoid unnecessary wear on the SSD boot drives. The ones I put in the OP are just the options available during install; I have no idea yet what I should do once I’m actually booted into Proxmox.

  2. All the caching instructions and similar tuning guides for ZFS mass storage generally assume the pool is built from spinning HDDs augmented by an SSD cache.
    2.1. I chose to invest in speed, silence, and lower power usage/heat, so aside from my NVMe boot drives I’ve got 16x enterprise SSDs.
    2.2. I don’t believe I need an SSD cache on top of SSD mass storage that would run just as fast as the cache, but I can’t find a comprehensive guide to setting up the various ZFS features on a purely SSD array.