
Why do these old dual port gig Ethernet PCIe cards use x4 and not x2

I found this post regarding the same subject: level1techs /p/ 125429

I have one of those same HP NC360T cards. It uses PCIe 2.5 GT/s x4. If you calculate that out, it equates to 1 GB/s total usable bandwidth (one way | so full duplex total would be 2 GB/s) - which is way more than needed for dual 1 Gbps Ethernet

I have ordered one of these cheap adapters so I can use my card with a mobo that has USB 2.0 (not good enough for gigabit) and a PCIe Mini Card slot (commonly used for WiFi cards) as its only form of expansion

PCIe Mini Cards provide a single lane of PCIe 2.5 GT/s. If you calculate that out, it equates to 250 MB/s total usable bandwidth (one way | so full duplex total would be 500 MB/s) - which is exactly the same amount of bandwidth as dual gigabit. While I can see the purpose of using two lanes of PCIe 2.5 GT/s to deal with any overhead and having some wiggle room to NOT bottleneck the dual gigabit Ethernet - I cannot see the need for these cards using x4
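To put numbers on the comparison above, here's a quick Python sketch. These are just the raw line-rate figures from the posts (the per-lane rate already accounts for 8b/10b encoding), before any PCIe protocol overhead:

```python
# Raw PCIe 1.x bandwidth per link width vs. what dual gigabit Ethernet
# needs. All figures are line-rate maximums; real throughput is lower
# once PCIe packet/protocol overhead is factored in.

PCIE1_LANE_MBPS = 250   # MB/s per lane, per direction (2.5 GT/s after 8b/10b)
GIGE_PORT_MBPS = 125    # MB/s per port, per direction (1000 Mbps / 8)

def pcie_bandwidth(lanes):
    """Per-direction and full-duplex-total bandwidth of a PCIe 1.x link."""
    one_way = lanes * PCIE1_LANE_MBPS
    return one_way, one_way * 2

for lanes in (1, 2, 4):
    one_way, total = pcie_bandwidth(lanes)
    print(f"x{lanes}: {one_way} MB/s one way, {total} MB/s full-duplex total")

# Dual gigabit with both ports transmitting and receiving at line rate:
need_one_way = 2 * GIGE_PORT_MBPS   # 250 MB/s per direction
need_total = need_one_way * 2       # 500 MB/s total
print(f"dual GigE needs: {need_one_way} MB/s one way, {need_total} MB/s total")
```

So on paper an x1 link's 250 MB/s per direction exactly matches worst-case dual-gigabit demand, which is why the overhead question matters.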

Anyone have experience with using these old cards in limited PCIe configurations? Even if I lose a tiny percentage of bandwidth due to overhead that’s fine. I should still be able to get ~900 Mbps on each port (simultaneously) when used through this PCIe Mini Card adapter, right? I’m building a tiny pfSense machine with this. So I need gigabit speeds on both the WAN and LAN side

Maybe it's for backwards compatibility with older PCI Express slots that have less bandwidth? These cards were designed for servers, which typically don't have the severe bandwidth limitations desktops do.


If you consider that the NIC is based on the original PCIe spec, it makes a lot of sense. Each port needs 250 MB/s for full duplex (125 MB/s in each direction), and since the card has two ports, you need double that. An x2 slot nominally provides 500 MB/s, but that figure is before PCIe protocol overhead (packet headers, acknowledgements, flow control), so an x2 link would be left with essentially zero headroom. (The 1.0a spec's 20% 8b/10b encoding overhead is already baked into the 250 MB/s-per-lane figure.) If you plan on only ever using one port, then yes that adapter will work.
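Here's a rough Python sketch of that headroom argument. The payload and framing sizes below are assumptions for illustration (actual efficiency depends on the chipset's configured max payload size), not datasheet values:

```python
# Rough PCIe protocol-efficiency sketch (assumed numbers): a PCIe 1.x
# TLP carrying a 128-byte payload has on the order of 24 bytes of
# header/sequence/CRC framing, giving roughly 84% usable efficiency.

LANE_MBPS = 250   # MB/s per lane per direction, post-8b/10b
PAYLOAD = 128     # bytes per TLP payload (assumed max payload size)
OVERHEAD = 24     # bytes of TLP/DLLP framing per packet (assumed)

def usable_mbps(lanes):
    """Approximate usable MB/s per direction after protocol framing."""
    efficiency = PAYLOAD / (PAYLOAD + OVERHEAD)
    return lanes * LANE_MBPS * efficiency

for lanes in (1, 2, 4):
    print(f"x{lanes}: ~{usable_mbps(lanes):.0f} MB/s usable per direction")

# Dual GigE needs 250 MB/s per direction at full line rate, so under
# these assumptions x1 (~211 MB/s usable) falls just short in theory,
# while x2 clears it with some headroom and x4 clears it comfortably.
```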


The card itself is original PCIe 1.0a 2.5 GT/s

Both PCIe and 1000BASE-T are full duplex allowing simultaneous transmit and receive using their own dedicated paths

So technically the single lane of PCIe 1.1 provided through a PCIe Mini Card can do 500 MB/s total (transmit and receive added together)

Two lanes of PCIe 2.5 GT/s (PCIe 1.0a/1.1) provide 500 MB/s in each direction simultaneously, for 1 GB/s total (transmit and receive added together)

Mini PCIe connectors are only x1. Also, PCIe link widths are powers of two (2^0=1, 2^1=2, and so on). They’re not as flexible as you might think they are.


Yes. Where in my post(s) are you getting the assumption that I don’t know that? When I say ‘single lane’ I mean the full duplex transmit and receive lanes together. I thought I made that very clear

The PCIe Mini Card can do 500 MB/s total. Two GigE ports need 500 MB/s total (125 MB/s send + 125 MB/s receive + 125 MB/s send + 125 MB/s receive = 500 MB/s)
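Same accounting as a quick sketch - the point being that transmit and receive have their own dedicated paths on both PCIe and 1000BASE-T, so for bottleneck purposes the per-direction numbers are what matter:

```python
# Per-direction accounting for a dual GigE card on a PCIe 1.x x1 link.
# Both PCIe and 1000BASE-T are full duplex, so transmit and receive
# should be checked against capacity separately, not summed.

X1_PER_DIRECTION = 250    # MB/s each way on a PCIe 1.x x1 link
GIGE_PER_DIRECTION = 125  # MB/s each way per gigabit port

# Worst case: both ports sending AND receiving at full line rate.
worst_case = 2 * GIGE_PER_DIRECTION   # 250 MB/s per PCIe direction

# Router case (like the pfSense build): traffic enters one port and
# leaves the other, so each PCIe direction carries roughly one port's
# worth of traffic (RX DMA one way, TX DMA the other).
router_case = 1 * GIGE_PER_DIRECTION  # ~125 MB/s per PCIe direction

print(f"x1 capacity per direction: {X1_PER_DIRECTION} MB/s")
print(f"worst-case demand:         {worst_case} MB/s (exact match, no headroom)")
print(f"routing demand:            {router_case} MB/s (plenty of headroom)")
```

The routing case in particular only loads each PCIe direction to about half of what an x1 link offers.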

When I receive my adapter in the mail I’m going to test it and report back here with some iperf results to conclude this thread

I didn’t mean to assume that you didn’t know, in a bad way.

I think the most obvious reason HP didn’t try to save a dime on those cards by cutting the PCB down to x1 is that the same PCB might’ve been used for other cards. Maybe a dual- or single-port 40Gbit, an SFP version, etc. That’s usually why you see “overkill” things on products.

The HP NC360T is based on an Intel 82571EB chipset. You can look up the datasheet for that controller online and find that it is based on PCIe Rev 1 and uses 2 lanes per Ethernet port, for a total of 4 lanes. This limitation comes from the NIC chipset, not your motherboard, which operates in backwards-compatibility mode for this PCIe device.

This is the conclusion I came to after doing more research on all the different generations of HP’s old enterprise network cards. For example: the HP NC364T is a quad-port variant using the same Intel 82571EB chipset (except it uses two of them). They’re very similar boards both using the same PCIe 1.0a x4 connection

It doesn’t make much sense why Intel designed the chipset with two lanes per GigE port. There’s plenty of bandwidth in a single lane of PCIe 1.0a (250 MB/s each way | 500 MB/s total) to saturate a single GigE port

What’s interesting is that the HP NC364T (quad-port) uses the same PCIe 1.0a x4 connection as the HP NC360T (dual-port). Meaning the quad-port’s dual Intel chipsets are sharing the x4 PCIe lanes (even though the Intel datasheet states x4 PCIe per chipset). So the proof is right there - that there is no need to have multiple PCIe lanes per GigE port. So why did Intel do this - maybe for some reliability factor with redundancy? Does the Intel chipset ‘load-balance’ across the PCIe bus?

Here’s the final update I promised:

As I expected, a single lane of PCIe 1.0a can easily handle a dual-port GigE card. I set up a pfSense box with one of the GigE ports as WAN and the other as LAN with NAT routing. I ran an iperf test through it and was able to see speeds of ~930 Mbps
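For anyone wondering why iperf tops out around 930 Mbps rather than 1000: that's roughly the theoretical ceiling for TCP over gigabit Ethernet once you account for on-the-wire framing. A quick sketch, assuming a standard 1500-byte MTU and the common 12-byte TCP timestamp option:

```python
# Sanity check that ~930 Mbps from iperf is effectively gigabit line
# rate. Header sizes are the usual Ethernet/IPv4/TCP values; the MTU
# and TCP timestamp option are assumptions about a typical setup.

LINE_RATE_MBPS = 1000
MTU = 1500
ETH_OVERHEAD = 14 + 4 + 7 + 1 + 12  # header + FCS + preamble + SFD + inter-frame gap
IP_HDR = 20                          # IPv4 header
TCP_HDR = 20 + 12                    # TCP header + timestamp option

wire_bytes = MTU + ETH_OVERHEAD          # 1538 bytes per frame on the wire
payload_bytes = MTU - IP_HDR - TCP_HDR   # 1448 bytes of TCP payload

goodput = LINE_RATE_MBPS * payload_bytes / wire_bytes
print(f"theoretical max TCP goodput: ~{goodput:.0f} Mbps")  # ~941 Mbps
```

So ~930 Mbps measured through NAT routing is within a percent or two of the best TCP can do on gigabit, meaning the x1 PCIe link wasn't the bottleneck.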
