Should I be concerned about PCIe lanes?

I’m building a new home server and I’m a bit concerned about PCIe lanes. I’m targeting 10GbE and I’m not really sure how that works: from what I know, the chipset is connected to the CPU over an x4 link, and that x4 link would be maxed out any time a file transfer was happening over the network at those speeds.

My setup was going to be something like this:
6x 14TB drives, ZFS RAID-Z2

I’m not really sure how PCIe lanes work, but from what I’ve read SATA connections use chipset lanes. That means there would be one hell of a bottleneck on the chipset when doing a file transfer, since the x4 link would be saturated by the drives sending data to the CPU and then by the same data going back out over that x4 link to the network.

It seems you could get around this by setting your PCIe slots to run in x8/x8 and hooking up an HBA so that the drives don’t go through the chipset. Is that correct? Is there anything I’m misunderstanding? I’m a little concerned going forward, since on most motherboards, once those 16 lanes are used up you start having to sacrifice things, e.g. your third PCIe slot gets disabled if you plug in a second NVMe drive, which would once again go through the chipset, so we run back into the file-transfer issue saturating the chipset link that I talked about earlier.


Four lanes of PCIe 4.0 is nearly 8 gigabytes per second. 10GbE is measured in gigabits, so roughly 1.25 gigabytes per second at most.

You don’t have to worry.
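As a quick back-of-the-envelope check (a sketch in Python, assuming the usual ~1.97 GB/s of usable bandwidth per PCIe 4.0 lane):

```python
# Rough sanity check: x4 PCIe 4.0 chipset link vs a 10GbE NIC.
# Assumes ~1.97 GB/s of usable bandwidth per PCIe 4.0 lane.

PCIE4_PER_LANE = 1.97   # GB/s per PCIe 4.0 lane
CHIPSET_LANES = 4       # typical CPU <-> chipset link width

chipset_link = PCIE4_PER_LANE * CHIPSET_LANES   # ~7.9 GB/s
nic_10gbe = 10 / 8                              # 10 Gbit/s -> 1.25 GB/s

# Even if the same data crosses the link twice (drives -> CPU, then
# CPU -> NIC back through the chipset), there is plenty of headroom.
print(f"chipset x4 link: {chipset_link:.2f} GB/s")
print(f"10GbE:           {nic_10gbe:.2f} GB/s")
print(f"headroom with double-counted traffic: {chipset_link / (2 * nic_10gbe):.1f}x")
```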


Thanks, I knew I was overlooking something. When I checked the bandwidth of an x4 link I read 8Gb instead of 8GB. There’s a lot more room in that x4 link than I was thinking, and that puts all of my issues to rest.


Your concern is not unfounded.

Generally, current chipsets offer tons of connectivity in terms of NICs, USB ports, SATA ports, m.2 ports, extra PCIe lanes, etc. All of this is provided with flexibility in mind, but it cannot all run at full speed concurrently, because the link between chipset and CPU has limited bandwidth.

Most people will never notice, but home labbers will run into this and similar limitations quickly.

Another thing to look out for is whether the BIOS/chipset will allow assigning different interrupts to individual SATA ports, allowing you to maximize utilization in a home lab scenario.


If you want to saturate that 10GbE connection, you will need a cache: either a smaller one of 60 GB or so in RAM, or a larger TLC m.2 drive of 1-2 TB. Make sure the m.2 drive is in the same place as the 10GbE!
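To put rough numbers on those cache sizes (a sketch using only the figures above; real ZFS caching behaves differently, this just shows how long each cache could absorb a full-rate burst):

```python
# How long could each suggested cache absorb a full-rate 10GbE burst?
# Sizes are the ones suggested above; 1.25 GB/s is 10GbE line rate.

RATE_10GBE = 1.25   # GB/s

caches_gb = {
    "60 GB RAM cache": 60,
    "2 TB TLC m.2 cache": 2000,
}

for name, size_gb in caches_gb.items():
    seconds = size_gb / RATE_10GBE
    print(f"{name}: ~{seconds:.0f} s (~{seconds / 60:.1f} min) at full 10GbE rate")
```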

I can also recommend you put those SATA drives on one of these babies; in fact, buy two in case you expand your 6-drive array:

Also interesting is this, which will allow you to do an OS install and a cache NVMe and leave the other m.2 slots free for storage:

Now all we need is a motherboard that allows you to bifurcate an x16 into x4/x4 with two m.2 slots directly connected to the chipset. Oh, and has ECC support. Unfortunately AMD AM5 supports the former but not the latter (though I hear a fix is on the way - if you can wait a few months maybe it will be resolved?), and Intel 13th gen supports the latter on their W680 boards but not the former!

I guess it’s time to see what EPYC or Xeon has to offer…


Oh, that Synology network card is pretty interesting, kind of steep for what it is though at $250.

This is the current build; as you can see from the prices, most of this stuff I already own.

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   39 bits physical, 48 bits virtual
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           60
Model name:                      Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz
Stepping:                        3
CPU MHz:                         3484.922
CPU max MHz:                     3600.0000
CPU min MHz:                     800.0000
BogoMIPS:                        6385.01
Virtualization:                  VT-x
L1d cache:                       128 KiB
L1i cache:                       128 KiB
L2 cache:                        1 MiB
L3 cache:                        8 MiB

And this is the server I’ll be upgrading.


The X570 Taichi no longer supports ECC memory; you would need something like the ASRock Rack X570D4U-2L2T to enable that.

Are you sure?

I was reading that here, but I don’t see any reason I wouldn’t be able to downgrade the BIOS, test the memory, and then upgrade the BIOS after that. It seems ECC is still supported; it’s just that the memory error injection feature is missing.

Also, the X570D4U-2L2T has AGESA PI version 1.2.0.7, which is newer than the version where that feature was said to have been removed.

Yes, you can run ECC memory in a non-ECC system. You will just pay more for a feature you can no longer use.

Also, downgrading the BIOS means your CPU might not be supported, since the Taichi was released for the 3000 series.

Yes, but reading that same thread:

However, the representative reassured ECC is setup to auto detect and applies regardless of settings available in the BIOS, AGESA or not.

This says that ECC does still work; it’s just that there are no longer BIOS options for it, so you really only need to downgrade the BIOS for memory testing.
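If you want to verify that from the OS side, one option is the Linux EDAC counters (a sketch; it assumes an EDAC driver exists and loads for this platform’s memory controller, which is not guaranteed):

```python
# Rough check for active ECC error reporting via Linux EDAC sysfs.
# Needs an EDAC driver loaded for the memory controller; an empty result
# does not by itself prove ECC is disabled.

from pathlib import Path

edac = Path("/sys/devices/system/edac/mc")
controllers = sorted(edac.glob("mc*")) if edac.exists() else []

if not controllers:
    print("No EDAC memory controllers found (driver not loaded, or no ECC).")
for mc in controllers:
    ce = (mc / "ce_count").read_text().strip()   # corrected errors
    ue = (mc / "ue_count").read_text().strip()   # uncorrected errors
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```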

Well, feel free to take the chance - worst case, ECC will not work but the RAM sticks will, and ECC is nice to have but not mandatory. 🙂

Yeah but since I’m planning to use ZFS I’d really prefer to have it. That and paying twice as much for a board with less expandability is pretty shitty.

You need to be aware of the PCIe lanes used. The 5900 has only 24 PCIe lanes, so you need to do a bit of homework. Generally, a graphics card wants x16, a Mellanox card wants x8, an M.2 slot wants x4, and the I/O chipset wants a few (x3 or x4?). That adds up to more than 24.
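Tallying that up (a sketch; the widths are just the “wants” listed above, not what the board actually negotiates):

```python
# Naive lane budget using the "wants" from above (not negotiated widths).

requested = {
    "graphics card": 16,
    "Mellanox NIC": 8,
    "M.2 slot": 4,
    "chipset link": 4,
}

TOTAL_LANES = 24
used = sum(requested.values())
status = "over budget" if used > TOTAL_LANES else "fits"
print(f"requested {used} of {TOTAL_LANES} lanes: {status} by {abs(TOTAL_LANES - used)}")
```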

I suspect you are OK. At the very least, the Mellanox card should be fine with x1 or x2 PCIe 3.0 (and coming direct from the CPU, not through the I/O chipset). Your graphics card is likely fine with x8 (depends on your usage, and again, direct from the CPU). You might have to do some manual configuration of PCIe slot lane assignments in the BIOS.

This is off the top of my head; you’ll want to read the motherboard manual to check specifics.

As far as chipset SATA and chipset PCIe lanes go, you are right to be concerned, but with spinning drives you are OK. On other rigs with SATA SSDs, I have seen the chipset bottleneck at 2-3 GB/s (4-6 drives), but spinning drives need less. I have a very similar 5-spinning-drive / X570 / Zen 3 setup; with striped LVM volumes I’m seeing near 1 GB/s, so only about half a PCIe 4.0 lane’s worth.
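For a sense of scale, here are those throughputs as a fraction of an x4 PCIe 4.0 chipset link (a sketch using the figures quoted above, not measurements of your hardware):

```python
# Those throughputs as a fraction of an x4 PCIe 4.0 chipset link.
# Figures are the ones quoted above, not measurements.

CHIPSET_LINK = 4 * 1.97   # GB/s, x4 PCIe 4.0

observed = {
    "4-6 SATA SSDs": 3.0,              # upper end of the 2-3 GB/s figure
    "5 spinning drives (striped LVM)": 1.0,
    "10GbE at line rate": 1.25,
}

for name, gbps in observed.items():
    print(f"{name}: {gbps:.2f} GB/s = {gbps / CHIPSET_LINK:.0%} of the chipset link")
```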

Also, you have not mentioned an M.2 drive. Guessing that would be a big win on a ZFS setup.

I grabbed a P1600X 58GB a few weeks ago when they were on sale, and I have a 2TB SN850 that I was going to set up as a cache.

PCIe 3.0: 985 MB/s per lane
PCIe 4.0: 1.97 GB/s per lane
PCIe 5.0: 3.94 GB/s per lane
Since you want the computer to be dual-use, only give the GPU an x8 connection; that way you have enough bandwidth to support the rest of your hardware.
An x4 PCIe slot is enough bandwidth to support a dual 10Gbps card.
With your remaining x8 slot you can give your SAS, SATA, or NVMe card enough bandwidth to operate.
Remember you probably still have an M.2 slot that can hold either an SSD or an m.2 10GbE card, e.g.:

or a 6x SATA adapter.
Your bridge chip may also support several m.2 slots that you can use for further expansion beyond your boot SSD.
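A rough check of that x8/x4/x8 split (a sketch; the per-lane figure is PCIe 3.0 from the list above, and the “needed” numbers are line-rate guesses, including an assumed ~250 MB/s per spinning drive):

```python
# Does the proposed slot split leave each card enough bandwidth?
# PCIe 3.0 per-lane figure from above; "needed" values are rough guesses.

PCIE3_PER_LANE = 0.985   # GB/s per PCIe 3.0 lane

slots = {
    # name: (lanes, rough bandwidth needed in GB/s)
    "dual 10GbE NIC in x4": (4, 2 * 1.25),
    "HBA with 6 spinners in x8": (8, 6 * 0.25),   # ~250 MB/s per drive, assumed
}

for name, (lanes, needed) in slots.items():
    available = lanes * PCIE3_PER_LANE
    verdict = "fits" if available >= needed else "tight"
    print(f"{name}: {available:.2f} GB/s available vs ~{needed:.2f} GB/s needed -> {verdict}")
```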

