Chipset slots and Gen4 NVMe, am I really bottlenecked?

Hi,
I have a Gigabyte X570 Aorus Master (rev 1.1) board. I've currently got one Gen4 drive installed in the primary M.2 slot and one Gen3 drive installed in the second slot. I was going to replace the Gen3 drive with a new Gen4, but am I going to be bandwidth limited? What about latency?

This machine does everything at the moment, gaming and heavy VM use.

Assuming that I’m running the 2.5 gig NIC and at least one 10 gig USB device, am I pushing the limits of that chipset link?

The features table shows 10 Gb/s USB is available directly from the CPU. I don't know how you find out which ports those are, but that would move some bandwidth off the chipset link.

What peripheral do you have that uses 10 gigabits of USB bandwidth?

You have two PCIe x16 slots that are direct to the CPU and share bandwidth. If you feed your GPU with an x8 link, you could put your 2nd NVMe in the other CPU x16 slot instead of the chipset-linked slot.

Aside from synthetic benchmarks, I don't think you would see any bottleneck in real-world usage if you keep your existing configuration.
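
For a rough sense of scale, here's a back-of-the-envelope sketch (Python) of the chipset uplink budget. The device list and per-device throughput figures are assumptions based on typical spec-sheet maximums, not measurements:

```python
# Back-of-the-envelope budget for the X570 chipset uplink (PCIe 4.0 x4).
# All figures are theoretical maximums in GB/s and are illustrative only.

GT_PER_LANE_GEN4 = 16          # GT/s per lane, PCIe 4.0
ENCODING = 128 / 130           # 128b/130b line encoding
uplink_gbs = GT_PER_LANE_GEN4 * 4 * ENCODING / 8   # ~7.88 GB/s

# Assumed worst case: everything hanging off the chipset maxed out at once.
devices_gbs = {
    "Gen4 NVMe in chipset M.2 slot": 7.0,   # fast consumer Gen4 drive, sequential
    "2.5 GbE NIC": 2.5 / 8,                 # ~0.31 GB/s
    "10 Gb/s USB enclosure": 10 / 8,        # ~1.25 GB/s before protocol overhead
    "SATA, other USB, audio, etc.": 0.6,    # rough allowance
}

total = sum(devices_gbs.values())
print(f"chipset uplink : {uplink_gbs:.2f} GB/s")
print(f"worst-case load: {total:.2f} GB/s")
# The uplink only becomes the limit if the drive, NIC and USB enclosure
# all peak at the same moment, which rarely happens outside benchmarks.
```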

2 Likes

If you are doing large sequential transfers, then sure, PCIe 3.0 is a bottleneck. But most people don't do enough of those to care about the difference, and if they did, they'd get an enterprise SSD for consistent performance rather than burst speed.

The closer you get to small I/O, reading/writing random 4K blocks, the harder a time any drive will have, and even NVMe drives won't get anywhere near the bandwidth limits of PCIe 3.0.
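
To put rough numbers on that, here's a small sketch converting random 4K I/O rates into bandwidth; the IOPS figures are ballpark assumptions for a consumer NVMe drive, not benchmark results:

```python
# Rough conversion of random 4K I/O rates into bandwidth, to show how far
# small-block workloads sit below the PCIe 3.0 x4 ceiling (~3.9 GB/s).
# IOPS figures are assumed ballpark values, not measurements.

BLOCK = 4096  # bytes per I/O

workloads = {
    "QD1 random 4K (typical desktop/VM feel)": 15_000,
    "QD32 random 4K (heavily threaded)":       500_000,
}

pcie3_x4_gbs = 8 * 4 * (128 / 130) / 8   # ~3.94 GB/s

for name, iops in workloads.items():
    gbs = iops * BLOCK / 1e9
    print(f"{name}: ~{gbs:.2f} GB/s of a {pcie3_x4_gbs:.2f} GB/s link")
```

At QD1, which is what most desktop and VM workloads feel like, you're using a couple of percent of the link at best.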

Consider the benchmarks here: PCIe 4.0 vs. PCIe 3.0 SSDs Benchmarked | TechSpot
These aren't great benchmarks, since they ignore steady state and are really just measuring the RAM/SLC caches on the drives, but you aren't going to hit steady-state performance issues with your workload anyway, beyond the very rare large sequential transfers that happen to be bigger than the drive's cache.

You would be hard pressed to notice a difference even from a SATA SSD.

2 Likes

Think I found it, there are 4 ports next to the 2.5 GbE port on the back of the machine.

[image: rear I/O panel]

Okay, okay, I don't really need my USB drives going that fast, but the USB NVMe enclosures are great.

I don't have good benchmarks, but there definitely was a difference in I/O in a Hyper-V virtual machine between a SATA SSD, a Gen3 drive, and a Gen4 drive. To be fair, the biggest difference came from switching between the Samsung NVMe driver (on my previous machine) and the Microsoft driver. Haven't tested on Windows 11 yet.

1 Like

Huh? The diagram in my first post shows one x16 and one x8. I think one slot is blocked by the massive GPU, anyway.

I've noticed Hyper-V VMs booting noticeably faster after upgrading from a Gen3 WD Black NVMe to a Samsung 980 Pro NVMe on my X570 board, but, I mean, it made everything faster. I didn't think I would notice, but I definitely did.

2 Likes

Physical slots, not electrical.

1 Like

Right, maybe a PCIe adapter with drives in that x8 slot?

Found a block diagram for that model. It doesn't specify a revision, but I seriously doubt the design changed drastically between 1.0 and 1.1. The slot labeled "M2B" doesn't share lanes with any other devices, so it should run at full speed even under load.

It's sharing the bandwidth of a PCIe 4.0 x4 link from the chipset to the CPU with all of the external I/O peripherals. M2A is direct to the CPU, along with two of the three PCIe slots.

Found this image on the LTT forum:

[image: X570 block diagram]
source: AM4/X570 PCIe Lane Count - CPUs, Motherboards, and Memory - Linus Tech Tips

Edit: I was thinking there were more than 4 lanes from the CPU to the chipset. Disregard.
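
To summarise the topology as I read the diagram (a rough sketch; the port groupings are approximate and the exact assignments may differ between board revisions):

```python
# Rough summary of the X570 Aorus Master topology as described above.
# Groupings are assumptions based on this thread; check the board's own
# block diagram for the specifics.

topology = {
    "CPU (direct, PCIe 4.0)": [
        "x16/x8 GPU slot",
        "x8 slot (shares lanes with the GPU slot)",
        "M2A M.2 slot (x4)",
        "some rear 10 Gb/s USB ports",
    ],
    "X570 chipset (behind a PCIe 4.0 x4 uplink)": [
        "M2B M.2 slot (x4)",
        "third PCIe slot",
        "2.5 GbE NIC",
        "remaining USB ports",
        "SATA ports",
    ],
}

for host, devices in topology.items():
    print(host)
    for dev in devices:
        print(f"  - {dev}")
```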

That is really annoying; here's hoping I can get some more lanes with PCIe Gen5.

As speeds increase, I expect fewer lanes to be available. It gets harder and more expensive to build the circuit board as the signalling frequency increases.

AMD initially advertised their AM4 platform as supporting up to 40 lanes, in competition with Intel's Sandy Bridge and Ivy Bridge, but then they chickened out.

Intel reduced the number of lanes on newer platforms too.

I miss the good old days of reasonably priced HEDT systems.

1 Like

Since the bandwidth doubles with every PCIe generation, manufacturers could keep the number of lanes the same, or lower it slightly and use fewer lanes per interface. When a GPU runs fine on PCIe 4.0 x8, it could run the same on PCIe 5.0 x4! Many add-on cards, like controller cards, would probably only need a PCIe 5.0 x1 link.
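
A quick sketch of that scaling, keeping only the 128b/130b line-encoding overhead and ignoring the rest of the protocol overhead:

```python
# Approximate usable bandwidth per PCIe link. PCIe 3.0/4.0/5.0 all use
# 128b/130b line encoding; other protocol overhead is ignored here.

GEN_GT_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}   # GT/s per lane
ENC = 128 / 130

for gen, gt in GEN_GT_PER_LANE.items():
    for lanes in (16, 8, 4, 1):
        gbs = gt * lanes * ENC / 8
        print(f"PCIe {gen} x{lanes:<2}: ~{gbs:5.1f} GB/s")
# PCIe 4.0 x8 and PCIe 5.0 x4 both land around ~15.8 GB/s, which is why a GPU
# that is happy at 4.0 x8 should in principle be just as happy at 5.0 x4.
```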

@voltagex In case you really need the bandwidth to more interfaces, you could look at Intel's Alder Lake generation of CPUs. They have a wider connection to the chipset, which in turn means you can utilise more devices connected through the chipset simultaneously. AMD's next-generation platform might also have more bandwidth to the chipset, but on AM4 you are limited to PCIe 4.0 x4, unfortunately.
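
For comparison of the uplinks themselves, a rough sketch assuming Alder Lake's DMI 4.0 x8 link (on Z690-class boards; lower-tier chipsets use a narrower link) is PCIe 4.0-equivalent per lane:

```python
# Rough chipset-uplink comparison: X570's PCIe 4.0 x4 vs Alder Lake's DMI 4.0 x8.
# DMI 4.0 is assumed here to carry PCIe 4.0-equivalent bandwidth per lane.

ENC = 128 / 130
gen4_lane_gbs = 16 * ENC / 8     # ~1.97 GB/s per lane

x570_uplink = gen4_lane_gbs * 4  # ~7.9 GB/s
adl_uplink = gen4_lane_gbs * 8   # ~15.8 GB/s on Z690-class boards

print(f"X570 chipset uplink       : ~{x570_uplink:.1f} GB/s")
print(f"Alder Lake (Z690) uplink  : ~{adl_uplink:.1f} GB/s")
```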
