2x2.5GbE PCIe Cards: Intel needs more PCIe Lanes than Realtek? Why?

Hello,

tl;dr: I’ve read reviews that indicate that the Syba card actually works as advertised, so I’m more than a little confused about why the Intel-based card needs more PCIe lanes. What am I missing?

I’m still a newbie at building modern PCs and servers, so this might be obvious, but I find myself confused about the differences in the apparent bandwidth required/PCIe lanes used for two competing 2x2.5GbE PCIe cards.

I wanted a card using an Intel chipset to make virtualizing OPNsense easier, and ran into this:

QNAP (PCIe 2.0 x2 - Intel Chipset): https://www.bhphotovideo.com/c/product/1596833-REG/qnap_qxg_2g2t_i225_dual_port_2_5gbe_4_speed.html

Syba (PCIe 2.0 x1 - Realtek Chipset): Dual 2.5 Gigabit Ethernet PCI-e x1 Network Card

That Intel-based QNAP card is PCIe 2.0 x2; my guess would be that the Realtek-based one is actually PCIe 3.0 x1. Remember that PCIe bandwidth roughly doubles every generation.
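Back-of-the-envelope, per lane and per direction (a quick Python sketch; the encoding factors are the standard ones, but real usable numbers will be a bit lower after protocol overhead):

```python
# Rough per-lane, per-direction PCIe bandwidth by generation:
# (transfer rate in GT/s, line-encoding efficiency)
GENERATIONS = {
    "1.0": (2.5, 8 / 10),      # 8b/10b encoding
    "2.0": (5.0, 8 / 10),      # 8b/10b encoding
    "3.0": (8.0, 128 / 130),   # 128b/130b encoding
    "4.0": (16.0, 128 / 130),  # 128b/130b encoding
}

for gen, (gt_s, efficiency) in GENERATIONS.items():
    print(f"PCIe {gen} x1: ~{gt_s * efficiency:.2f} Gbps per direction")
```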


Nope, I was wrong. That said, PCIe 2.0 x1 is roughly four gigabits full duplex, while 1000BASE-T and up is half duplex (so only send or receive, not both at the same time). (This is wrong, but doesn’t change the math much; see later in the thread.)

That means that on the rare occasion you go full throttle in the same direction on both ports, PCIe will bottleneck (perhaps to ~3.5 Gbps combined). You can still get the full 2.5 Gbps on both ports if one is downloading and the other uploading. Depending on your use case, you may be fine with that tradeoff.
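To put rough numbers on that (a sketch; the ~12% packet-overhead figure is my assumption for typical TLP framing, not a measured value):

```python
# PCIe 2.0 x1: 5 GT/s with 8b/10b encoding = 4 Gbps raw, per direction.
raw_gbps = 5.0 * 8 / 10
# Assume ~12% of that goes to TLP/DLLP headers and checksums.
usable_gbps = raw_gbps * 0.88   # ~3.5 Gbps per direction
demand_gbps = 2 * 2.5           # both ports pushing the same direction

print(f"usable ~{usable_gbps:.1f} Gbps vs. demand {demand_gbps:.1f} Gbps")
print(f"shortfall when both push one way: {demand_gbps - usable_gbps:.1f} Gbps")
```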

Ed note: urgh, so many mistakes.

Thanks!

That makes much more sense. My motherboard only has three PCIe 4.0 slots: one x16, one x8, and one x1.

I needed a dual 2.5GbE card to use as a WAN/LAN (to switch) connection for a software router for my fiber connection. Right now I’ve got 500 Mbps full duplex, but theoretically, I could upgrade to 2 Gbps full duplex.

Most likely, I’d only ever upgrade the fiber connection to 1 Gbps full duplex. It’s hard to imagine what I could possibly ever do that would need more than that.

Internal traffic routing (non-internet) will be handled by a separate network switch, so I don’t think I’ll have any bandwidth issues?

Nah, normal traffic shouldn’t go beyond the switch, unless you have a lot of inter-VLAN traffic (although I don’t know much about VLANs)


Yeah. I wouldn’t mind experimenting with VLANs, but not much.

I didn’t realize when I bought this logic board how annoying the lack of a full set of PCIe slots would be. Instead, I have a hoard of onboard SATA ports that I’ll never use.

Still, it was otherwise exactly what I wanted, so I can’t complain too much.

I went mATX for my current build so I could downsize. That turned out to be a bad choice - there aren’t many cases that support my cooling and PSU while staying small. And now that I want to get into homelabbing, I’ll be missing the PCIe slots.

@johntdavis I was wrong about GbE and probably about higher speeds too.

The math turns out similar though - PCIe 2.0 x1 will bottleneck, but not by much. And for a firewall/router application with gigabit internet, this will work just fine.


Back in the 10 Mbps/100 Mbps days there was half duplex vs. full duplex, depending on what cables you had and whether you had a hub or a switch.

Gigabit onwards - it’s all full duplex.

PCIe 2.0 gets you 5 GT/s full duplex. There’s a bit of overhead in the protocol (8b/10b line encoding plus headers and checksums), but getting 4 Gbps through shouldn’t be a problem: 4 Gbps into the CPU (2 Gbps from WAN + 2 Gbps from LAN) and simultaneously 4 Gbps out of the CPU (2 Gbps to LAN + 2 Gbps to WAN) should work OK on a single PCIe lane.

Running both ports at 2.5 Gbps would be cutting it too close, and you’d end up having to do some traffic shaping yourself if you wanted to avoid weird packet loss.
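For what it’s worth, the traffic shaping in question is conceptually just rate-limiting so the NICs can’t outrun the lane. A toy token-bucket sketch, purely illustrative and nothing OPNsense-specific:

```python
import time

class TokenBucket:
    """Toy token-bucket shaper: admits traffic at rate_bps on average,
    allowing bursts up to burst_bytes."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8  # refill rate in bytes/second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # caller queues or drops the packet

# e.g. cap a port at 2 Gbps with a 64 KiB burst allowance:
shaper = TokenBucket(rate_bps=2e9, burst_bytes=64 * 1024)
print(shaper.allow(1500))  # True while tokens remain
```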

Thanks for that info!

So, it sounds like I need to manually limit the link speed to 2 Gbps in the OPNsense VM? I’m not sure how that will work with my switch; it negotiates at 100M/1G/2.5G.

Your ISP will do this limiting for you.

Sure, strictly speaking, rates are always computed over some time period, there are small buffers involved everywhere, and if you get deep into it there’s no theoretical mathematical guarantee things will work out… but I don’t think this is something you’ll need to worry about in practice. Maybe in the future, if you want higher speeds, you’ll want to prioritize traffic somehow… or just get more lanes.

Thanks for all the info. I feel better about my purchase now (again). :stuck_out_tongue:

I would love to know why they put in 8xSATA ports when they must have known that almost anyone buying this board with the intent of doing any sort of RAID was going to get at least one HBA.

I think my perfect board would have been this exact one, but with only 3xSATA (one of which is a SATA DOM-capable port), and those other lanes repurposed for PCIe. Maybe AM4 Ryzen just wouldn’t allow that, or maybe they didn’t want to make the board bigger.

Keep in mind that most consumer boards have all but one PCIe slot connected to the chipset, and the chipset’s link to the CPU is bottlenecked at four lanes on modern platforms. So there isn’t much point to having more lanes (that’s where Threadripper comes in).

If the slot is too small you can always use a riser, or just do this one trick (which I don’t recommend).

Is there a particular brand of riser you recommend? There seem to be a lot of … OEMs … out there of questionable quality.

Replying to myself, unfortunately, but I’m left wondering why there isn’t an actual PCIe 4.0 x1 dual 2.5GbE card.

I’d almost consider learning PCB layout/design just to try to build one.

AFAIK, for short distances and PCIe 3.0 bandwidth you can’t really go wrong with risers. The ones designed for bitcoin mining often have an x1 connector on one side and an x16 slot on the other. If you have the slot, there are also M.2 to PCIe adapters.

Replying to myself, unfortunately, but I’m left wondering why there isn’t an actual PCIe 4.0 x1 dual 2.5GbE card.

It might be something dumb, like that Intel card actually being two separate controllers, each with its own lane. That would certainly make things like SR-IOV or passthrough of individual interfaces easier/simpler.

That makes sense. It would take some … interesting logic … to get two 2.5GbE to share a single lane at anything close to full duplex, I suppose.

As to risers, I assume I’d be looking for something like this? 3-Packs GPU Riser PCI-E Express 1x to 16x Mining Riser Card Cable,60cm USB 3.0 Cable,PCI Express X1 to X16 GPU Ethereum ETH Mining Card 4 Solid Capacitors LED Indicator Power by SATA,Molex, 6pin PCIe - Newegg.com
EDIT: Well, not exactly this one. It uses a USB 3 cable to connect the x1 part to the x16 part. Unless that’s 10 Gbps USB, that won’t work.

I’m guessing I wouldn’t need external power, since 75 W should be more than enough to drive a NIC. PCIe 3.0 x1 would be a 985 MB/s (7.88 Gbps) lane, which would cover a single 2.5 Gbps Ethernet port at full duplex (5 Gbps), but not two.

I suppose I could get the dual port 2.5Gbps card, and lock one port down to 1 Gbps, for a total of 7 Gbps used, with 880 Mbps left over for overhead. I could use the 2.5 Gbps port for intra-LAN communications (router to switches), and the 1 Gbps port for the fiber WAN connection.

Doable? If I had a full duplex 1 Gbps fiber connection to the WAN, how much would I be likely to lose to overhead? I’d consider 750 Mbps more than sufficient for my needs.
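For the overhead question, here’s a rough estimate assuming a standard 1500-byte MTU and ordinary TCP/IPv4 framing (the header sizes below are the textbook ones):

```python
# Per-packet cost on the wire for a full-size TCP/IPv4 frame:
MTU = 1500
PREAMBLE, IFG, ETH_HDR, FCS = 8, 12, 14, 4  # Ethernet framing bytes
IP_HDR, TCP_HDR = 20, 32                    # TCP header incl. timestamp option

wire_bytes = PREAMBLE + IFG + ETH_HDR + MTU + FCS   # 1538
payload_bytes = MTU - IP_HDR - TCP_HDR              # 1448

goodput_mbps = 1000 * payload_bytes / wire_bytes
print(f"~{goodput_mbps:.0f} Mbps of TCP payload on a 1 Gbps link")
# -> roughly 941 Mbps, comfortably above the 750 Mbps target
```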

I should note that my primary drive to switch cards is compatibility. The Intel chipset just seems to work better with various software packages.

That’s actually not too hard; you can get PCIe switches to allow two devices on one lane.

I doubt it’s actually USB; I think it’s literally carrying the PCIe lane’s wires over the USB cable.

PCIe is full duplex, so I’d expect two 2.5 Gbps ports to work just fine. One caveat is that traffic can be bursty, especially with TCP offload, which is why they may have gone with more than one lane.

If you’re only ever going to get a 1 Gbps WAN for the foreseeable future, why not get a 2-port or 4-port gigabit card? A single gigabit port is full duplex, so with two you can handle your LAN and WAN fully. If you’re mostly downloading or uploading, even a single port with VLANs would work fine.

You can use LACP with L4 hashing to distribute bandwidth over multiple ports, which will help you get faster performance (with caveats: a single connection is limited to the speed of one port).
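The idea behind the L4 hashing, in sketch form (the CRC32-plus-modulo here is illustrative; real switches use their own hash functions):

```python
from zlib import crc32

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              num_links: int) -> int:
    """Map a flow's 4-tuple onto one LACP member link. Every packet of a
    given flow hashes to the same link, which is why a single TCP
    connection can never exceed one port's speed."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return crc32(key) % num_links

# Two flows between the same hosts can land on different 1 Gbps members:
print(pick_link("10.0.0.5", "10.0.0.9", 50000, 443, num_links=2))
print(pick_link("10.0.0.5", "10.0.0.9", 50001, 443, num_links=2))
```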

In 2-3 years the landscape may be totally different: PCIe 4.0 Ethernet chipsets, 2.5 and 5 Gbps more common, etc.


That makes more sense, though I’m wary since all the ads specifically say “USB 3.0.”

My thought was that I’d have more headroom to avoid bottlenecks on inter-VLAN traffic if I had a 2.5 Gbps full duplex connection between my router and my core switch. I actually have a 4x1Gbps ethernet card I could use instead, if I wanted. I’ve also set up LACP on a QNAP switch before, so that’s not too intimidating.

Honestly, I hadn’t completely finalized my plans for the network topology when I made my original post here. I was mostly curious about what was going on with the difference in PCIe lanes. :slight_smile:

I’d happily upgrade to the 2Gbps fiber plan if/when I needed it, so future proofing is also a factor.

@cowphrase is correct; PCIe quoted speeds are full duplex too. That one 3.0 lane gets you a simultaneous ~8 Gbps down + ~8 Gbps up.
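Redoing the earlier lane budget with per-direction accounting (same encoding math as before):

```python
# PCIe 3.0 x1: 8 GT/s with 128b/130b encoding, per direction.
lane_gbps = 8.0 * 128 / 130   # ~7.88 Gbps each way
demand_gbps = 2 * 2.5         # both 2.5GbE ports saturated in one direction

print(f"per direction: {demand_gbps} Gbps needed, {lane_gbps:.2f} Gbps available")
# Both ports fit in each direction simultaneously, since the
# receive path has its own separate ~7.88 Gbps.
```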

What switch do you have? Can it do at least a bit of L3 forwarding?