Ivy Bridge to Naples upgrade

Howdy all. I’m currently looking to upgrade my current vSphere host from a pair of Ivy Bridge CPUs to a single Epyc CPU. I’m looking for thoughts and considerations on this hardware.

My reasons for upgrading are: hardware refresh, moving from 2 CPUs to 1, better bifurcation/passthrough support (VERY hit/miss on the X9 series), and because I want to.

Existing host:

  • 2x E5-2697v2 (24 core/48 threads total)
  • Supermicro X9DRH-iF motherboard
  • 16x 16GB DDR3 ECC (256GB total)
  • Various PCIe NVMe storage
  • Intel X520 2x SFP+ NIC
  • LSI HBA connected to NetApp shelf
  • Rosewill RSV4000 case

This server hosts a number of things for me:

  • Domain services (AD/DHCP/DNS/Certs/etc)
  • Veeam backup + monitoring
  • ZoneMinder
  • Plex ecosystem
  • PRTG environment monitoring
  • a few game servers
  • TrueNAS (using NetApp shelf above)
  • Graylog
  • vCenter appliance
  • nginx reverse proxy
  • Wireguard
  • Influx/Grafana/Telegraf
  • SQL/MySQL
  • Jira
  • Various test servers

“New” hardware (this is where I’m open to suggestions and critiques):

  • Epyc Naples 7551P (32c/64t) (can be found on eBay for ~$300; Milan/Rome are still super pricey)
  • 128GB DDR4 (probably, to start, depending on price)
  • Motherboard: undecided, looking for input. ASRock and Supermicro seem to be the “go-to”
  • Keeping X520 (2 ports, 1 passthrough to TrueNAS)
  • Same PCIe storage
  • Additional NVMe storage for passing dedicated devices through to VMs.
  • Same Rosewill case (I would love to get a nice 2U or 3U case, but those can get loud; open to suggestions if someone knows of one that would be a good fit)
  • A barebones kit would be cool, but probably a bit too expensive?

I know Epyc is overkill for me, but isn’t that the point? :slight_smile:

I’d love to know what y’all think. Thanks!

Consider Threadripper (Pro, if you must). Essentially EPYC for workstations. Certainly has some downsides, but the upside is that TR chips are cheaper than EPYC. But maybe not anymore (I haven’t looked recently). YMMV!

I’d expect the single-thread performance on that Xeon to be better than that EPYC, so bear that in mind if you run anything which is sensitive to per-thread performance. Of the Naples family, only the EPYC 7371 has noticeably better single-thread perf than Ivy Bridge.

For boards, the Asrock Rack EPYCD8 seems to get good reviews (has 2x M.2 onboard which might be useful), and supports upgrading to Rome (but only running at PCIe 3.0 - similar to most boards designed for Naples).

Beware that many boards have what appear to be SFF-8643 connectors for SAS or U.2 NVMe drives, but are actually SATA only, or have restrictions. E.g., the EPYCD8 I believe only supports SATA from those connectors, but the Supermicro H11SSL-NC supports 2x U.2 (PCIe 3.0 x4) from those connectors (not the -C or -i variants, though) - check the topology diagram in the manuals before you buy :slight_smile:

PCIe bifurcation on EPYC (well, all recent AMDs) is great - no hassle at all, and generally every device (even from bifurcated slots) will be in its own IOMMU group, which makes for easy VFIO setup.
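
If you ever want to sanity-check the grouping yourself from a Linux live boot (nothing vSphere-specific, just a hypothetical quick script reading sysfs on a box with the IOMMU enabled), something like this will dump which devices share a group:

```python
#!/usr/bin/env python3
# Quick sketch: list every IOMMU group and the PCI devices in it.
# Assumes a Linux environment with the IOMMU enabled in BIOS/kernel.
# For clean passthrough you want the device you care about to be
# alone in its group (or only sharing with its own functions).
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```

Anything sharing a group with the device you want to pass through has to be passed through (or stubbed) along with it, so one-device-per-group output is exactly what you want to see.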

@xzpfzxds
Thanks! I had considered the single-thread performance, but I don’t think that should impact me much. Nothing I’m running requires high single-thread performance; I think I benefit more from cores.

@Dutch_Master
Thanks! Naples chips are actually not badly priced right now. A 7551P on eBay is around $300.

I recently managed to acquire a 7401P for about €130, so $300 seems a bit excessive for 8 more cores.

Considering Naples was the competitor to Broadwell, I don’t think it will be worse in single-threaded loads than the previous rig.
One awesome thing with Naples is that you should be able to alter the P-states and “tweak/OC” the CPU to better suit your needs.
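
For reference, here is a read-only sketch of the mechanism that ZenStates-style tools poke at (assumes a bare-metal Linux boot with the msr module loaded and root; the bit layout is the commonly cited Zen 1 encoding, so verify it against AMD’s PPR for your part before trusting the numbers, and especially before writing anything back):

```python
#!/usr/bin/env python3
# Zen exposes its P-state definitions in MSRs 0xC0010064-0xC001006B.
# Commonly cited Zen 1 layout (double-check against AMD's PPR):
#   bit 63 = P-state enable, FID bits 7:0, DID bits 13:8, VID bits 21:14
# Needs root and `modprobe msr`.
import struct

def rdmsr(cpu: int, reg: int) -> int:
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

for i, reg in enumerate(range(0xC0010064, 0xC001006C)):
    val = rdmsr(0, reg)
    if not val & (1 << 63):        # P-state not enabled
        continue
    fid = val & 0xFF               # frequency ID
    did = (val >> 8) & 0x3F        # divisor ID
    vid = (val >> 14) & 0xFF       # voltage ID
    if did == 0:
        continue
    freq_mhz = 25 * fid / (12.5 * did) * 100
    vcore = 1.55 - 0.00625 * vid
    print(f"P{i}: ~{freq_mhz:.0f} MHz @ ~{vcore:.3f} V")
```

Tweaking is just rewriting those same registers with different FID/DID/VID values, which is why it works even on “locked-down” server boards.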

On the choice of board:
Supermicro’s H11 series is honestly pretty unpolished, or even bad.
I have no idea why I like them and have so many of them.
I have no experience with the H12 series; I hope they’re better, but they’re also more locked down.

The ASRock Rack ROMED8 is a magnificent board that I have no actual complaints about so far.
Naples is only supported up to BIOS 1.30, and no ES CPUs are supported.
And the mini-SAS ports do indeed seem to only support SATA.

EDIT: the H11SSL is OK for the price, and even better for less.

It was certainly the competitor in all factors except single-thread perf :slight_smile:

Check benchmarks which do not parallelize easily:

  • openbenchmarking.org - ffmpeg: same as a Sandy Bridge i5-2500K (3.7 GHz)
  • openbenchmarking.org - rust-prime: slower than 2x Ivy Bridge Intel Xeon E5-2680 v2 (3.6 GHz)
  • openbenchmarking.org - crafty: slower than 2x Sandy Bridge Intel Xeon E5-2660 (3 GHz)
  • openbenchmarking.org - scimark2: slower than 2x Ivy Bridge Intel Xeon E5-2690 v2 (3.6 GHz)
  • openbenchmarking.org - compress-gzip: slower than 2x Ivy Bridge Intel Xeon E5-2680 v2 (3.6 GHz)

Huh, guess I swallowed the “1700X = Broadwell 6800” propaganda.

Though I guess 3.8 GHz is the difference.

The 7551P might not be the best choice; it hardly seems like an upgrade.

I had a look at scimark2 and honestly, nothing I see there makes any sense to me.

First-gen Zen just has low clock speeds; clock for clock it’s faster.
It will produce less heat and use less power, but probably not by enough to justify the upgrade.

Yeah, right.
Well, you can set clocks on Naples, so you can equalize that.

Hmm, but it’s better than Rome, right?
Well, another benchmark that I can’t make sense of.

I just switched from Threadripper back to Intel, and couldn’t be happier. I went from a TR 2950X to a dual Xeon E5-2683 v4.

The 2683 clocks a lot slower than the Threadripper, like half, but my VMs all run a lot better on the Xeon, including a little light gaming. Increasing memory on the Xeon is a lot easier too: my pricey X399 board doesn’t support “cheap” buffered ECC, but the Supermicro server board certainly does, so I went from 128 GB of DDR4 to 512 GB of DDR4.

Well, these comments have made me second-guess my plans, which I guess is good? I really love the appeal of moving to Epyc, but it sounds like that high a cost may not be justified quite yet.

Maybe I’ll ride the Ivy Bridge for a bit longer. I don’t know much about the Xeon Scalable series other than that they’re still expensive.

The age difference is pretty big though.

As it turns out, they’re almost equal on single-thread performance, it looks like, from the very quick search I did in the car on the way to work.

But then the EPYC has faster RAM, more cache, and more PCIe breakout; in general it’s a newer platform that does one of the things OP wants: consolidating a dual-socket setup into a single, more power-efficient system.

There is another option to consider: ASRock Rack has server boards based on the AM4 socket, which allows Ryzen CPUs to be used in a server environment. Unfortunately, your existing setup doesn’t allow for an entry-level Epyc 3000 series SoC board like the Gigabyte MJ11-EC0, as it only has one PCIe slot. The 3151 SoC on that board would have suited your needs perfectly (8c/16t) and is relatively cheap (about US$500), requiring just RAM (and storage) to complete a basic system.

Well, it “consolidates” in the sense that two CPU packages become one, but from a NUMA point of view, that 2-node machine is now a 4-node machine, with each node having 2 memory channels.

That change in NUMA topology brings with it many considerations for performance. NPS1 on Naples is tricky to use well even when the ACPI SRAT table is accurate, so it’s usually best to use NPS4 there. Rome/Milan handle NPS1 well by virtue of the IOD consolidating all memory/PCIe/xGMI links on the same fabric - at least well enough to “fake” it being a single node.
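
If you want to see that difference concretely, booting Linux on either box and dumping the node layout makes it obvious (a minimal sketch reading sysfs, assuming a Linux environment rather than ESXi, where you’d look at the NUMA stats in esxtop instead):

```python
#!/usr/bin/env python3
# Print each NUMA node's CPU list and memory size from sysfs.
# A dual Ivy Bridge box shows 2 nodes; a Naples socket shows up as
# 4 nodes (one per die) unless the BIOS interleaves memory across dies.
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"),
               key=lambda p: int(p.name[4:]))
for node in nodes:
    cpulist = (node / "cpulist").read_text().strip()
    mem_kb = next(int(line.split()[-2])
                  for line in (node / "meminfo").read_text().splitlines()
                  if "MemTotal" in line)
    print(f"{node.name}: CPUs {cpulist}, {mem_kb // 1024} MiB")
```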
