I’ve slowly been moving from an Ubuntu 20.04 home server towards a more resilient option.
Old setup:
NVMe SSD for boot and OS
2 TB SATA SSD for fast storage
4× 2.5-inch HDDs for mergerfs + SnapRAID storage
I have set up a new Proxmox server with pretty killer hardware:
Mobo with IPMI and 32 GB ECC RAM
2× 480 GB Synology enterprise SATA drives for boot, currently set up as a ZFS RAID 1 boot mirror
2× Samsung PM9A3 8 TB U.2 SSDs as general-purpose drives, set up as BTRFS RAID 1
I want to move the mergerfs setup over and use it only for big files.
I set up an initial Fedora Server VM (ext4) and configured it entirely with Ansible. I experimented with Paperless and Immich on Podman, so those containers are present, but they are not used in production just yet. They are basically idling.
One of my PM9A3s is second-hand and the other is new (bought when flash prices crashed). The new one had zero writes.
After about 30 days of idling, the drive has around 800 GB of writes and 6 GB of reads. That looks absurd to me; I have read about write amplification, but from idling!? I have not even set up Prometheus and Home Assistant yet, which I expect will be the big writers.
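For reference, this is roughly how I pull those numbers. A minimal sketch assuming smartmontools 7.x JSON output for an NVMe device; the device path and uptime are placeholders for my setup:

```python
#!/usr/bin/env python3
"""Rough check of host writes/reads from smartctl's JSON output (NVMe)."""
import json
import subprocess

DEVICE = "/dev/nvme0n1"   # placeholder: one of the PM9A3 U.2 drives
DAYS_IDLE = 30            # roughly how long the box has been idling

# smartctl >= 7.0 can emit JSON; the field names below are from the NVMe health log.
result = subprocess.run(
    ["smartctl", "--json", "-a", DEVICE],
    capture_output=True, text=True, check=True,
)
health = json.loads(result.stdout)["nvme_smart_health_information_log"]

# NVMe reports "data units" of 1000 * 512 bytes = 512,000 bytes each.
written_gb = health["data_units_written"] * 512_000 / 1e9
read_gb = health["data_units_read"] * 512_000 / 1e9

print(f"host writes: {written_gb:,.0f} GB  (~{written_gb / DAYS_IDLE:,.1f} GB/day)")
print(f"host reads:  {read_gb:,.0f} GB")
```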
Theoretically, even with 1 TB of writes per day the drives should still last something like 40 years. But it’s making me second-guess my choice of Proxmox.
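The back-of-the-envelope math behind that, assuming roughly 1 DWPD over a 5-year warranty (the datasheet TBW is the number that actually matters):

```python
# Endurance estimate; the 1 DWPD / 5-year rating is an assumption, not a spec lookup.
capacity_tb = 8.0           # nominal capacity as listed above
dwpd = 1.0                  # assumed drive-writes-per-day rating
warranty_years = 5

rated_tbw = capacity_tb * dwpd * 365 * warranty_years   # ~14,600 TB written
daily_writes_tb = 1.0                                   # pessimistic 1 TB/day

lifetime_years = rated_tbw / (daily_writes_tb * 365)
print(f"rated endurance ~{rated_tbw:,.0f} TBW -> ~{lifetime_years:.0f} years at 1 TB/day")
```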
Currently I run all my applications on Ubuntu in Docker.
I want to replicate most of that in a VM on Proxmox, though probably with Podman.
I also want to move Pi-hole and Home Assistant to separate guests (an LXC and a VM, respectively), and be able to set up a playground environment on the fly.
Is this wear and tear just normal? Should I simply not worry about it? Or should I move to Fedora Server to run the containers natively, and use Cockpit to set up Home Assistant and playgrounds?
That is not killer hardware. These are rookie numbers!
Just a small heads-up: Proxmox (maybe every OS booting from ZFS?) has not really implemented an automatic way to boot from one device if the other fails. I think it has something to do with GRUB. Not really worth mirroring, in my opinion.
BTRFS and Proxmox? Risky choice.
That does not seem normal, especially since this is not your Proxmox boot drive writing logs or anything.
> Proxmox (maybe every OS booting from ZFS?) has not really implemented an automatic way to boot from one device if the other fails.
I thought that was fixed? I just pulled one of the drives and it booted fine. I then pulled the other drive and plugged the first one back in. No problem booting from the second either.
I think BTRFS still cannot boot from a degraded mirror, or maybe I’m misunderstanding you.
I agree that I should probably move away from BTRFS, but I’m sure that ZFS is going to have the same write amplification problem.
I’m leaning towards the “unsupported” (I mean, it’s Debian underneath) mdadm RAID 1 with dm-integrity, so I can just throw LVM-thin on top. I might even run BTRFS again inside the VM.
That should solve the amplification problems, since there would be no nested copy-on-write filesystems. But then I’m back at my original conundrum of whether Proxmox is the right choice.
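Roughly the stack I have in mind, as a sketch. The device paths and names are placeholders, the pvesm registration is written from memory, and the whole thing wipes the disks, so by default it only prints the commands:

```python
#!/usr/bin/env python3
"""Sketch of the planned stack: dm-integrity -> mdadm RAID 1 -> LVM-thin."""
import subprocess

DRY_RUN = True                                   # flip only after checking device paths
DISKS = ["/dev/nvme0n1", "/dev/nvme1n1"]         # placeholders for the two U.2 drives


def run(cmd):
    """Print the command; execute it only when DRY_RUN is off."""
    print("+", " ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)


# 1. dm-integrity under each disk (note: the default journal mode adds its own
#    write overhead; see integritysetup(8) before settling on options).
members = []
for i, disk in enumerate(DISKS):
    name = f"int{i}"
    run(["integritysetup", "format", "--batch-mode", disk])
    run(["integritysetup", "open", disk, name])
    members.append(f"/dev/mapper/{name}")

# 2. Plain mdadm mirror on top of the two integrity devices.
run(["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2", *members])

# 3. LVM thin pool for VM disks, then register it as Proxmox storage
#    (pvesm arguments from memory -- verify against the storage docs first).
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "vmdata", "/dev/md0"])
run(["lvcreate", "--type", "thin-pool", "-l", "90%FREE", "-n", "thinpool", "vmdata"])
run(["pvesm", "add", "lvmthin", "vmthin", "--vgname", "vmdata", "--thinpool", "thinpool"])
```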
Proxmox still has a good interface for messing around with throwaway VMs and LXCs, and backups are very simple.
That would certainly help read speeds thanks to the ARC. But even 64 GB is pretty low for “killer hardware”.
Good to know, thanks for testing that. I thought that ZFS or TrueNAS was not ready for that.
It is also not worth mirroring ZFS for TrueNAS or OPNsense in my opinion, since you can just restore from an XML file. Too bad that Proxmox does not offer that yet.
I would move away from BTRFS, because it is not officially supported and you will have a hard time asking for help. I don’t know why you are seeing such bad write amplification. It would seem plausible for a RAIDZ, but for a mirror? Do you have a very high blocksize?
My opinion: KISS. Keep it simple, stupid.
There is enough stuff that can go sideways, no need to make life more complicated.