Hi everyone,
I currently run a rack of servers in our datacenter in Europe, and I have a bunch of fairly old (2010-era) servers that I'd like to migrate to newer hardware.
I have a load of around 2000 W for the following servers:
- 3 x 2U Supermicro, Opteron 6272, 8x 3.5" SAS drives, 48-64 GB of RAM
- 1 x 2U Supermicro, E5-26xx v2, 8x 3.5" SAS drives, 128 GB of RAM
- 6 x HP DL360/DL380 G7 (and Dell equivalents), Xeon E/X56xx, 2.5" SAS drives, 64-144 GB of RAM
I run a Proxmox cluster on these, hosting VMs for our infrastructure and our customers (we are a local ISP). The load sits at 5-10% CPU (across 188 CPUs) and 40% RAM (of 940 GB), with about 8-10 TB of active storage. Most of the servers currently have RAID controllers, with a mix of RAID 10 and RAID 5.
My plan is to replace all that with 3 Supermicro (or similar) servers with dual "smallish" EPYC CPUs for a start (7313 or 7413), 6-8 NVMe U.2 drives, and 256 GB of RAM per server. I'd still be running Proxmox (to keep the migration simple), moving to Ceph to benefit from hardware failover.
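To sanity-check that plan against the numbers above, here's my rough back-of-the-envelope; the drive size, replication factor and fill target are just assumptions at this stage, not decisions:

```python
# Quick sanity check of the proposed 3-node layout against today's usage.
# Assumed: 3.84 TB U.2 drives, Ceph 3-way replication, ~80% max fill.

nodes = 3
drives_per_node = 8          # upper end of the 6-8 range
drive_tb = 3.84              # assumed drive size
replicas = 3                 # usual Ceph "size" for a 3-node cluster
fill_target = 0.8            # stay well under the nearfull/full ratios

raw_tb = nodes * drives_per_node * drive_tb
usable_tb = raw_tb / replicas * fill_target
print(f"Raw: {raw_tb:.0f} TB, usable at {fill_target:.0%} fill: {usable_tb:.0f} TB")
# ~25 TB usable vs ~8-10 TB active today -> comfortable headroom

# CPU / RAM vs current consumption (5-10% of 188 CPUs, 40% of 940 GB)
cores_new = nodes * 2 * 16   # dual EPYC 7313, 16 cores each
ram_new_gb = nodes * 256
cores_used = 0.10 * 188      # worst case of the 5-10% load
ram_used_gb = 0.40 * 940
print(f"New cores: {cores_new} vs ~{cores_used:.0f} in use")
print(f"New RAM: {ram_new_gb} GB vs ~{ram_used_gb:.0f} GB in use "
      f"({(nodes - 1) * 256} GB left with one node down)")
```

Even with one node down, 512 GB of RAM and 64 cores should still cover our current footprint, which is the main reason I'm comfortable with only 3 nodes.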
I'd save ~1400 W, which would reduce my yearly power cost a bit, and it would also leave a decent upgrade margin for the future; I'm hoping for a 10-year run time for those hosts, like our current infrastructure managed.
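Back of the envelope on the power side (the price per kWh below is an assumed placeholder, not our actual contract rate):

```python
# Rough yearly saving from dropping ~1400 W of continuous draw.
# The EUR/kWh figure is a placeholder; plug in the real datacenter rate.

watts_saved = 1400
hours_per_year = 24 * 365
price_per_kwh = 0.20          # EUR, assumed

kwh_saved = watts_saved * hours_per_year / 1000
print(f"{kwh_saved:.0f} kWh/year, roughly {kwh_saved * price_per_kwh:.0f} EUR/year")
# ~12,264 kWh -> ~2,450 EUR/year at 0.20 EUR/kWh, before any cooling/PUE effect
```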
I know we can't rely on a magic crystal ball to tell us what the future holds, but I'd be happy to hear other views on what could be done, maybe a bit less costly, as those EPYC servers are still quite expensive, even if it's a long-term investment.
Also note that:
- we have some other NFS/ZFS storage for backups and slower archives, which isn't in that list,
- we won't move to VMware,
- we have maybe one or two Windows VMs, all the rest is Linux,
- we'd like to avoid RAID controllers as much as we can,
- IOPS aren't that high, but our machines are starting to show their age.
Any ideas, rotten tomatoes, thoughts, questions?
Hope this isn't too terrible for a first post.
Cheers,
PorCus