So I recently reinstalled Proxmox onto my server with native Docker and ZFS running, and it is working great. I'm not using any VMs atm because Docker is pretty much doing everything I need, but I will want to add a VM rendering machine for Blender with a 6800XT passthrough… I never got the card working in Blender on Linux. I also tend to run some game servers through Pterodactyl as the need arises, and for friends.
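For when I get to the passthrough, my notes on the usual Proxmox VFIO prep look roughly like this; it's untested on this box, and the PCI IDs below are just what lspci -nn usually reports for a 6800 XT, so check your own:

# /etc/default/grub -- enable IOMMU passthrough mode (intel_iommu=on on the Xeons)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# bind the GPU and its HDMI-audio function to vfio-pci instead of amdgpu
$ echo "options vfio-pci ids=1002:73bf,1002:ab28" > /etc/modprobe.d/vfio.conf
$ update-grub && update-initramfs -u -k all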
So the thoughts:
I have a dual Xeon E5-2670 (2.6 GHz) with 64 GB of DDR3 ECC memory, and it currently sits at about 2-3% usage. This is fine for pretty much everything except game servers and Windows. For things like Space Engineers, thread count matters little; it's all about single-core speed. It also has an Nvidia GTX 1070 that certain Docker containers use for acceleration.
I have a Ryzen 5 PRO 4650G and was thinking of replacing the server's motherboard with this and 128 GB of higher-speed memory. I have found ECC a massive pain to get hold of; I tried it before with this chip in a board that supported ECC and had problems. I couldn't get hold of the EXACT modules the board manufacturer specified, so I gave up and went with standard DDR4. BTW, I'm in Thailand, so the second-hand market for server gear is not really there, and the cost of shipping plus import duties is horrific.
So what are your thoughts on this? I'd be going from 32 threads to 12, but at almost double the clock speed, with faster (and more) memory on a much newer architecture. Considering the usage and load of the server, I would think it would only affect the game servers, and make everything else snappier. I have heard conflicting things about non-ECC memory and ZFS and am still undecided for home use, but not being able to get ECC makes it less likely that I will use it.
Migrating from a dual Xeon to a Ryzen 5 PRO 4650G may feel like a step down, but you may find, as you expect, that it is actually a step forward. Depending on the mobo you choose/have, you may also see decent power-consumption savings.
I have been running home servers for more than 20 years without ECC memory. Despite all the chatter on this forum, I cannot recall any actual issues related to it.
To the horror of many, I even run a bleeding-edge distro (Fedora) on it. It typically runs for months at a time, rebooting only to pick up newer kernels.
To me a home server is valuable, but not critical infrastructure: inconvenient if/when it fails, but no biggie if it's down for a couple of days. Assuming you have a similar expectation for your setup, you should be fine.
You probably aren't using all 32 threads anyway, and 12 snappier threads (6 cores) with the faster RAM will definitely be felt. ECC is not mandatory for ZFS, and plenty of people have run it without ECC and without issues. I have an ARM SBC with 4 GB of non-ECC RAM on which I'm running ZFS with two pools (10 TB of spinning rust and 2 TB of flash). Runs like a charm. They aren't exactly snappy, but they get the job done for one user.
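Whether or not you end up with ECC, regular scrubs are cheap insurance, since they force ZFS to re-read and verify every checksum. A minimal sketch with my pool names (swap in yours; Debian-based installs often ship a monthly scrub cron already):

# root crontab: scrub each pool early Sunday morning
0 3 * * 0 /sbin/zpool scrub rust
0 4 * * 0 /sbin/zpool scrub flash

$ zpool status -x    # prints "all pools are healthy" unless a scrub found trouble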
Off-topic: pool numbers
$ dd status=progress bs=1G count=2 if=/dev/urandom of=/mnt/remote/rust/test.img
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 9 s, 236 MB/s
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 46.8865 s, 45.8 MB/s
$ dd status=progress bs=1G count=2 if=/dev/urandom of=/mnt/remote/flash/test.img
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 9 s, 234 MB/s
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 46.817 s, 45.9 MB/s
$ dd status=progress bs=1G count=2 if=/dev/urandom of=/rust/test.img
0 bytes (0 B, 0 B) transferred 2.410s, 0 B/s
0+2 records in
0+1 records out
100663296 bytes transferred in 2.644935 secs (38058895 bytes/sec)
$ dd status=progress bs=1G count=2 if=/dev/urandom of=/flash/test.img
0 bytes (0 B, 0 B) transferred 2.162s, 0 B/s
0+2 records in
0+1 records out
100663296 bytes transferred in 2.340275 secs (43013446 bytes/sec)
Writes range from 38 MB/s to 46 MB/s; the top two runs are over NFS, the bottom two are local. For reads, I get 59 MB/s on rust and 60 MB/s on flash over NFS, and 262 MB/s on rust and 294 MB/s on flash locally (yes, the writes are trash, but I don't write that often).
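One caveat on those numbers: /dev/urandom tops out at a couple of hundred MB/s per core, so it can cap dd results on faster pools (my writes sit well below that, so it doesn't change them here). If you want to sanity-check a pool without that skew, something like fio generates its own data; the size and paths are just what I'd try:

$ fio --name=seqwrite --directory=/rust --rw=write --bs=1M --size=2G --ioengine=psync --end_fsync=1
$ fio --name=seqread --directory=/rust --rw=read --bs=1M --size=2G --ioengine=psync

The --end_fsync=1 makes fio flush before reporting, so write caching doesn't inflate the result.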
I'd say just go for the Ryzen build. I also agree you should see lower power consumption. I don't get how people can run servers that idle at 150-200 W and not pay through the nose on electricity bills. That's why I moved to ARM SBCs.
Currently connected to the UPS are two switches, a router, an OrangePi 4 LTS, and the server with about six spinning-rust disks. It does idle at about 200 W, so lowering that would be great. It's not really an issue during the day, as I have solar, but every little bit helps.
Basically, what sold me on not HAVING to use ECC was a video of one of the ZFS developers talking about how it was not necessary.
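The way I read it: the checksums still catch anything that rots on disk, ECC or not (a bit flipped in RAM before the checksum is computed is the remaining gap), and the error counters make problems visible. Here's how I plan to keep an eye on it, assuming a pool named tank:

$ zpool status tank    # the CKSUM column counts checksum errors per device
$ zpool clear tank     # resets the counters once any errors are investigated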