Hello!
I’ve been wanting to replace my old Dell R720XD server and my old X99-based server with a single, much quieter, much faster system.
I know Ryzen can sort of do ECC, but support is hit-or-miss from board to board and there’s a lot to think about. Then I found EPYC 4004 and the various boards from ASRock, Gigabyte, and Supermicro, and it seemed like a no-brainer.
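If you go this route, it’s worth confirming ECC is actually active and not just tolerated. A minimal check on Linux, assuming rasdaemon is installed (the exact EDAC driver name varies by platform):

```bash
# Check that the kernel loaded an EDAC driver for the memory controller
dmesg | grep -i edac

# With rasdaemon installed, confirm memory controllers are being monitored
ras-mc-ctl --status
ras-mc-ctl --mainboard
```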
The R720XD housed almost everything, but I wanted something a bit faster, so I had turned my old X99 FTW-K board from EVGA into a server that housed some projects. The Dell also housed my roughly 45TB NAS, so I needed to buy new drives and figure out how to plug in the 12 SATA drives and the 2 Intel datacenter SSDs that host Proxmox.
I settled on the Supermicro H13SAE-MF with an LSI 9300-16i and 96GB of Kingston ECC memory by way of 2x 48GB sticks. I plugged in all 18 total drives (whew) and got to migrating. I tossed out a few of them along the way, so my pool is now 4x 12TB and 4x 8TB.
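A mixed-size pool like this usually ends up as one vdev per drive size. Here’s a rough sketch of what that can look like in ZFS; the raidz1 level and the disk IDs below are just illustrative, not my actual commands:

```bash
# Hypothetical layout: one vdev per drive size.
# raidz1 and the by-id names are placeholders, not the real topology.
zpool create tank \
  raidz1 /dev/disk/by-id/ata-12TB-A /dev/disk/by-id/ata-12TB-B \
         /dev/disk/by-id/ata-12TB-C /dev/disk/by-id/ata-12TB-D \
  raidz1 /dev/disk/by-id/ata-8TB-A /dev/disk/by-id/ata-8TB-B \
         /dev/disk/by-id/ata-8TB-C /dev/disk/by-id/ata-8TB-D

zpool status tank   # verify both vdevs came up healthy
```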
The reason I chose this board over others is the full-length x16 PCIe slots and the x4 slot. The x4 slot has the 10Gb ConnectX-3 card, while the bottom x16 slot has the HBA. The separation is really just about heat.
The case is a Rosewill RSV-L4500U.
This also houses a project I’ve been working on for a little while, CompareBench, and I’ve used that to send up a quick Cinebench score here: https://comparebench.com/benchdetail/101
Running Proxmox, these are the VMs it’s hosting (quick CLI sketch after the list):
- Wireguard
- Plex
- Rclone (for offsite backup)
- Backend Python API for CompareBench
- Separate DB VM for that project
- Sentry instance for bug tracking
- Windows VM for testing
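For anyone who hasn’t touched the Proxmox CLI, spinning up one of these is quick with qm. This is just a sketch; the VMID, name, ISO, and resource sizes are placeholders, not my actual config:

```bash
# Hypothetical example: VMID, name, ISO, and sizes are placeholders.
qm create 200 \
  --name comparebench-api \
  --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:64 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --ostype l26 \
  --boot 'order=scsi0;ide2'
qm start 200
```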
Before anyone asks, because it seems a lot of people I’ve shared this with don’t quite understand lol: the devices in their specific slots are intentional. The HBA is only x8, so regardless of whether there’s something in the top x16 slot, it will only ever run at x8 anyway; there’s no need for it in the top slot. The LSI 9300 generates a lot of heat, so I want it in the lower slot so it doesn’t negatively affect the SSD that’s in there.
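If you want to sanity-check the negotiated link width yourself, lspci will show it. The device address below is a placeholder; find yours with `lspci | grep -i lsi`:

```bash
# Show the negotiated PCIe link speed/width for the HBA.
# 03:00.0 is a placeholder address, not necessarily where yours lands.
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
```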