Moving a full rack of ~2010 gear to newer hardware

Hi everyone,
I currently run a rack of servers in our datacenter in Europe, and I have a bunch of fairly old (circa 2010) servers that I’d like to migrate to newer hardware.
I have a load of around 2000 W for the following servers:

  • 3 x 2U Supermicro, Opteron 6272, 8x 3.5" SAS drives, between 48 and 64 GB of RAM
  • 1 x 2U Supermicro, E5-26xx v2, 8x 3.5" SAS drives, 128 GB of RAM
  • 6 x HP DL3[6,8]0 G7 / Dell, with [E,X]56[2,8,9]0 CPUs, 2.5" SAS drives, 64 to 144 GB of RAM

I run a Proxmox cluster on these, hosting VMs for our infrastructure and our customers (we are a local ISP). Load is 5-10% CPU (across 188 CPUs) and 40% RAM (of 940 GB), with about 8-10 TB of active storage. Most of the servers currently have RAID controllers, with a mix of RAID 10 and RAID 5.

My plan was to replace all that with 3 Supermicro (or similar) servers with dual "smallish" EPYC CPUs for a start (7313, 7413), 6-8 NVMe/U.2 drives, and 256 GB of RAM per server. I’d still be running Proxmox (to keep the migration simple), moving to Ceph to benefit from hardware failover.
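For reference, here is the back-of-the-envelope sizing behind that plan. The per-drive capacity, drive count and fill target below are just assumptions to illustrate; nothing is final:

```python
# Back-of-the-envelope sizing for the 3-node plan.
# Assumptions (not final): 3.84 TB U.2 drives, 6 per node, Ceph 3-way
# replication, and filling the pool to ~80% at most.
nodes = 3
drives_per_node = 6          # could be 8
drive_tb = 3.84              # assumed per-drive capacity
replicas = 3                 # Ceph size=3
fill_target = 0.8            # leave headroom for rebalancing/recovery

raw_tb = nodes * drives_per_node * drive_tb
usable_tb = raw_tb / replicas * fill_target

# Current consumption, from the figures above (old cluster).
used_ram_gb = 0.40 * 940     # ~376 GB of RAM actually in use today
new_ram_gb = nodes * 256     # 768 GB total, 512 GB with one node down

print(f"raw {raw_tb:.1f} TB, usable ~{usable_tb:.1f} TB (need 8-10 TB active)")
print(f"RAM in use ~{used_ram_gb:.0f} GB vs {new_ram_gb} GB new "
      f"({new_ram_gb - 256} GB with one node down)")
```

Even with the smaller drive count it lands comfortably above the 8-10 TB we actually use, and RAM still fits with one node down.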

I’d save ~1400 W, which would reduce my yearly power cost a bit, and it would also keep a decent upgrade margin for the future; I’m hoping for a 10-year run time for those hosts, like our current infrastructure had.
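Rough math on the savings (the electricity rate below is just an assumed number for illustration, not our actual tariff, and it ignores cooling overhead):

```python
# Rough yearly savings from dropping ~1400 W of continuous load.
saved_w = 1400
hours_per_year = 24 * 365
rate_chf_per_kwh = 0.20      # assumed price, plug in the real tariff

saved_kwh = saved_w / 1000 * hours_per_year   # ~12,264 kWh/year
saved_chf = saved_kwh * rate_chf_per_kwh      # ~2,450 CHF/year at that rate
print(f"~{saved_kwh:,.0f} kWh/year, ~{saved_chf:,.0f} CHF/year")
```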
I know we can’t bet on a magic crystal ball to know what the future holds, but I’d be happy to hear other views on what could be done, maybe a bit less costly, as those EPYC servers are still quite expensive, even if it’s a long-term investment.

Also note that:

  • we have some other NFS/ZFS storage for backups and slower archives, which isn’t in that list,
  • we won’t move to VMware,
  • we have maybe one or two Windows VMs; all the rest is Linux,
  • we’d like to avoid RAID controllers as much as we can,
  • IOPS aren’t that high, but our machines are starting to show their age.

Any ideas, rotten tomatoes, thoughts, questions? :wink:

Hope this isn’t too terrible a first post :wink:

Cheers,

PorCus

4 Likes

Are you hunting for suggestions on individual components? So you want a fast controller to run the drives as JBOD?

I’d be more interested in other server options that I haven’t thought of. Like, would used DL380 G10s be something to consider as a temporary measure? Or maybe Supermicros with E5-26xx v4?
I’m also thinking about redundancy: with the Ceph option, I could lose a server and still have my services running perfectly.
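My understanding of the failure math, assuming the usual replicated pool settings (size=3, min_size=2, failure domain = host; I believe those are the defaults but haven’t confirmed):

```python
# Sketch: how many host failures a 3-node replicated Ceph pool tolerates.
# Assumes size=3 / min_size=2 and one copy per host (failure domain = host).
hosts = 3
size = 3        # replicas per object
min_size = 2    # pool keeps serving I/O while at least this many replicas are up

for failed in range(hosts + 1):
    alive = hosts - failed
    replicas_left = min(alive, size)
    io = "I/O continues" if replicas_left >= min_size else "pool pauses"
    heal = "can restore full redundancy" if alive >= size else "runs degraded until the host is back"
    print(f"{failed} host(s) down: {replicas_left}/{size} replicas up, {io}, {heal}")
```

So one server can die and the VMs keep running, but with only 3 nodes the cluster stays degraded until that host comes back, since there’s nowhere else to put the third copy.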

If smart hands are involved, I would take that into account for the migration. I would also think about the inter-node network setup along the lines of a Redundant Array of Independent Nodes (RAIN).

2 Likes

2016-JAN-26 -- Ceph Tech Talks: High-Performance Production Databases on Ceph - YouTube, for the Ceph and redundancy topic.

3 Likes

Assuming you want to someday buy used higher-end EPYC CPUs dirt cheap to get twice the CPU cores, increase longevity, etc., make sure the motherboard is actually capable of handling the higher power with its VRM and BIOS. Some of the lower-end server boards cut corners, not expecting you to need the power to run dual 64-core chips or whatever. You can always upgrade the PSU if needed, but getting a new board might be a lot more challenging years down the road when this is a dead platform.

2 Likes

Some suggestions for consideration:

AMD EPYC 3000 series SoC. A good entry into the EPYC platform, relatively cheap**, and reusable as a bare-bones fallback or simply a backup machine once you’ve migrated to EPYC in full later.

Threadripper Pro. Essentially “EPYC Light”. Fewer cores, less RAM (4 TB vs 256 GB, IIRC), but also less $$$. May or may not suit your use case.

** Gigabyte and ASRock Rack have suitable boards. The Gigabyte MJ11-EC0 has the 3151 SoC (8 core/16 thread), at approx. 450 euro (including shipping from Poland). Add 128 GB RAM (4x 32 GB kit, 550 euro), a 1 TB NVMe drive for the OS and cache (100 euro), 4x 8 TB HDDs (200-250 euro each) and a suitable rack enclosure with PSU (300 euro-ish), for a total of under 2.5k euro.
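For the curious, the sum works out roughly like this, taking the HDDs at the middle of that range (all figures ballpark, as noted):

```python
# Ballpark total for the EPYC 3000 build above (HDDs taken mid-range at 225 euro).
parts = {
    "Gigabyte MJ11-EC0 board + SoC": 450,
    "128 GB RAM (4x 32 GB)":         550,
    "1 TB NVMe (OS + cache)":        100,
    "4x 8 TB HDD (225 each)":        4 * 225,
    "Rack enclosure + PSU":          300,
}
total = sum(parts.values())
print(f"total ~{total} euro")   # ~2300 euro, under the 2.5k estimate
```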

HTH!

2 Likes

For a while, AMD and HPE were pricing their latest EPYC servers very aggressively. I mean, like, you could get a server with RAM, CPU, PSU, and a 3-year on-site warranty for less than it would cost to buy the parts and DIY something.

I’m not sure that is still the case, but it wouldn’t hurt to look. I don’t know who the HPE partners are in Europe, but check out the DL325 G10+ v2: a great 1U single-socket server; I support six of these in my enterprise. There’s also the DL385 G10+ v2: a bigger 2U dual-socket server with lots of expandability.

1 Like

When you hear that name in your rack, it’s time
Got a price range?

1 Like

I was quite fond of the DL3[6,8]0 up to G8; however, the restrictions that came afterward on which drives you can use are a disqualifying factor. Also, I’ll need bypass/IT mode on the controller (I think it’s supported; we chose ZFS/Ceph). In general, I’m not sure the flexibility of the HP G6/G8 era is still there with newer generations.

However, @Dutch_Master’s suggestion is quite interesting; I’d have more flexibility starting with a TR Pro based machine, which could be an intermediate step, though power usage looks way higher than EPYC.

I counted around 12-15k (CHF, would be roughly the same in $) per machine with the specs in my initial post, times 3, which is a bit steep atm. That’s why I’m exploring alternatives.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.