Importance of pcie4 in a hypervisor machine?

Enterprise question here, not home-lab.

Looking at replacing Hyper-V cluster with new gear this year, four machines total.

HPE is having a fire sale on Gen10 EPYC servers, with the PCIe 3.0 motherboards… it is very tempting!

The servers will have SAS SSDs in RAID-1 to boot from, but no other local storage; all storage is provided by the SAN. So no local high-speed flash storage (no need for PCIe 4.0 NVMe support).

The servers do strictly basic server virtualization, no heavy number crunching, no rendering, so they don’t need “compute accelerators or GPUs”.

I’m having a hard time seeing any need for PCIe 4.0, at essentially double the cost.
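For a rough sense of what's being paid for, here's a back-of-envelope sketch of the theoretical link bandwidth (my own arithmetic, assuming 128b/130b encoding for both generations and an x16 slot, not vendor-quoted figures):

```python
# Back-of-envelope theoretical one-direction PCIe bandwidth.
# Assumption: 128b/130b line encoding (used by both PCIe 3.0 and 4.0), x16 slot.
def pcie_bandwidth_gbps(transfer_rate_gts, lanes=16):
    """Theoretical one-direction bandwidth in GB/s for a given per-lane rate."""
    encoding_efficiency = 128 / 130          # 128b/130b line code overhead
    bits_per_second = transfer_rate_gts * encoding_efficiency * lanes
    return bits_per_second / 8               # bits -> bytes

gen3_x16 = pcie_bandwidth_gbps(8)    # PCIe 3.0: 8 GT/s per lane
gen4_x16 = pcie_bandwidth_gbps(16)   # PCIe 4.0: 16 GT/s per lane
print(f"PCIe 3.0 x16: {gen3_x16:.1f} GB/s")   # roughly 15.8 GB/s
print(f"PCIe 4.0 x16: {gen4_x16:.1f} GB/s")   # roughly 31.5 GB/s
```

Gen 4 doubles the ceiling, but when the heaviest I/O is a SAN link (even 32G FC or 25GbE is a few GB/s at most), a single Gen 3 x16 slot already has plenty of headroom.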

The only other difference is the RAM clock: the G10 is clocked at 2933 MT/s, the G10+ at 3200 MT/s.

Am I missing something?

Right now the servers are DL380 G9 with dual 6c/12t Xeons, 256GB ram. I’m looking at going to the DL325 G10 with a single EPYC 7402P (24c/48t) and 512GB ram.

I have a Tyan S8030 board, which has PCIe 4.0; however, I have no PCIe 4.0 devices yet. Eventually I’ll be getting a nice graphics card to put in it, but beyond that there’s not much out there yet that gains enough from PCIe 4.0 to justify the extra cost of such devices. For NVMe, I’d rather have the cheaper drives so I can get more of them in order to mirror them.

Sure, PCIe 4.0 NVMe drives are technically faster in benchmarks, but you’d never actually feel the difference over PCIe 3.0 in normal use. Intel has announced new PCIe 4.0 Optane drives where PCIe 4.0 would make a difference, but only if you have that special type of workload.

Make double sure it actually supports your chip, though; there’s generally NO backwards or forwards compatibility between EPYC generations.

1st and 2nd generation EPYC processors require separate BIOS images to be created and maintained. If two BIOS images were developed they could technically fit within a larger BIOS chip with some sort of switch logic to detect which branch to load during POST. However, given that the S8030 was just recently launched and the 1st generation AMD EPYC processors are quite old now and no longer the focus of most customers, we decided to not develop a BIOS branch capable of running the 1st generation EPYC 7001 parts.
Philip Maher
Tyan Product Planning and Marketing

One note: I believe on EPYC processors there’s something special/funky that goes on when you bump from 2933 MT/s to 3200 MT/s, which could potentially impact performance negatively. I’ll see if I can dig up the sheet I found but haven’t had the time to digest yet.

You’re not missing out. I don’t know @Log’s situation before his purchase (and looking over the manual of the S8030, it is impressive). Going from an E5 v3 to an EPYC seems massive.
Epyc Pros:

  • Sapphire Rapids is only being sampled today… it doesn’t get released until 2022

  • Better Power Consumption

  • More “Secure”

  • Expandable and Plenty of Bandwidth!

Plopping down $8,000 (4 units) in 1U is a no-brainer when a later upgrade to a Zen 4 or Zen 5 based system, or an Intel one further down the road, with minimal effort is not a deal breaker. If the units are meant as a stopgap with significant and measurable gains in your workflow, it’s a small price to pay until true NEXT GEN arrives, and you can plot your course thereafter. Manufacturers and design teams aren’t prioritizing PCIe 4.0 anyway; they’re actively working on PCIe 5.0, its stability, and its rollout.

hmm… disconcerting…

Good luck.

Found it. Lots of good stuff here; I highly recommend everyone with a 7002 EPYC go through most of these:

Page 11

For throughput sensitive applications, to obtain higher IO throughput, Maximum Memory Bus Frequency can be set to the maximum allowed (3200 MT/s) provided your Memory DIMM hardware supports it. In this case, the Infinity Fabric Clock on these platforms will not be optimally synchronized with a Memory Bus Frequency of 3200 MT/s. This can cause a slight increase in memory access latency.

For latency sensitive applications, memory access latency can be reduced by setting the Maximum Memory Bus Frequency to 2933 MT/s or 2667 MT/s, in order to synchronize with the Infinity Fabric Clock. The appropriate Memory Bus Frequency for synchronized mode will depend on the AMD EPYC 7002 product family.

Basically, because the Infinity Fabric clock can only go so high, the memory clock has to be desynced from it at higher speeds, which somewhat increases latency; that only matters for latency-sensitive workloads.
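As a toy illustration of the coupled/decoupled boundary (the ~1467 MHz FCLK ceiling is my assumption based on the commonly cited EPYC 7002 limit; check the AMD tuning guide for your exact SKU):

```python
# Toy check of whether the DDR4 memory clock can run 1:1 ("coupled") with
# the Infinity Fabric clock. FCLK_MAX_MHZ is an assumed ceiling, not a
# value from any datasheet; verify against AMD's docs for your platform.
FCLK_MAX_MHZ = 1467

def fabric_coupled(mem_speed_mts):
    """DDR4 memory clock is half the transfer rate; coupled mode needs MEMCLK <= FCLK cap."""
    memclk_mhz = mem_speed_mts / 2
    return memclk_mhz <= FCLK_MAX_MHZ

for speed in (2667, 2933, 3200):
    mode = "coupled (lower latency)" if fabric_coupled(speed) else "decoupled (slight latency hit)"
    print(f"{speed} MT/s -> MEMCLK {speed / 2:.0f} MHz: {mode}")
```

Under that assumption, 2933 MT/s (MEMCLK 1466.5 MHz) still fits under the cap and stays synchronized, while 3200 MT/s (MEMCLK 1600 MHz) forces the fabric to decouple, which is where the latency penalty the guide mentions comes from.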

I do not think you are missing out on anything. In my opinion the difference in the speed of the memory is insignificant in a virtualized environment where you already have virtualization overhead.
