What happens when you run an Intel i350 NIC (network card) in PCIe Gen2 x1 mode?

afaik, the popular i350 runs natively at PCIe Gen2 x4…

now, what happens when you try to fully load all 4 ports while running the card in x1 mode? …does this even work, or do the drivers nope out? (on Linux?)

…sure, i can imagine a continued but degraded state …but i’ve never actually tried it

…HOW does it fail? …i’d expect it’d just affect the bandwidth - but i’d like to be sure.

  1. Maybe, depending on how the controller is designed, it might require a minimum number of lanes for all ports to work. The NIC drivers don’t negotiate PCI Express lanes, but they might expect resources to be allocated in a certain way.
  2. You’ll likely “just” see degraded performance and maybe weird timeouts on the driver side as it tries to handle all the traffic being pushed at it
  3. You have to test
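Before testing under load, it’s worth checking what link the card actually negotiated. Linux exposes this in sysfs (`lspci -vv` shows the same thing under `LnkSta`/`LnkCap`). A small sketch - the PCI address `0000:01:00.0` is a placeholder, find yours with `lspci | grep I350`:

```python
from pathlib import Path

def pcie_link_status(pci_addr: str, sysfs_root: str = "/sys/bus/pci/devices") -> dict:
    """Read the negotiated vs. maximum PCIe link speed/width from sysfs."""
    dev = Path(sysfs_root) / pci_addr
    return {
        attr: (dev / attr).read_text().strip()
        for attr in ("current_link_speed", "current_link_width",
                     "max_link_speed", "max_link_width")
    }

# Example with a placeholder address; in an x1 riser you'd expect
# current_link_width to read "1" while max_link_width reads "4":
# pcie_link_status("0000:01:00.0")
```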

My general suggestion would be to get proper hardware instead. If you want something cheap try a RockPro64 SBC or something…

I’m running a few in x1 mode with no issues. I’ve never maxed out all 4 ports though; it will surely just affect max bandwidth.

Easily maxes out 2 ports though without issue


Also just tried a genuine I350-T4 in a PCIe x1 riser and can confirm that it works just fine, too.

PCIe Gen2 x1 signals at 5 GT/s per direction, which after 8b/10b encoding leaves roughly 4 Gbit/s of usable bandwidth each way - so it pretty much fits the needs of 4 x 1 GbE, though with the controller’s PCIe protocol overhead a full bidirectional load on all four ports could be slightly bottlenecked compared to the bandwidth actually used over the ethernet cables.
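To put rough numbers on that, a back-of-the-envelope sketch (the 8b/10b factor is the Gen1/Gen2 line coding; TLP header overhead is ignored here, and in practice depends on payload size):

```python
# Back-of-the-envelope: PCIe Gen2 usable bandwidth vs. 4 x 1 GbE.
# Gen2 signals at 5 GT/s per lane per direction; with 8b/10b line
# coding only 8 of every 10 bits carry data.
GEN2_GT_PER_LANE = 5.0     # GT/s per lane, per direction
ENCODING = 8 / 10          # 8b/10b (Gen1/Gen2 only; Gen3+ uses 128b/130b)

def usable_gbps(lanes: int) -> float:
    """Usable Gbit/s per direction for a Gen2 link, before TLP overhead."""
    return GEN2_GT_PER_LANE * ENCODING * lanes

ethernet_load = 4 * 1.0    # 4 ports x 1 Gbit/s payload, per direction

for lanes in (1, 2, 4):
    print(f"x{lanes}: {usable_gbps(lanes):.0f} Gbit/s "
          f"vs {ethernet_load:.0f} Gbit/s of GbE traffic")
```

So x1 is marginal once TLP headers are counted, while x2 already has headroom and x4 is comfortable.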

Historical note: about ten years ago, Adaptec RAID controllers with a PCIe x8 host interface were dicks about this - if they didn’t get all 8 PCIe lanes during POST, they prevented the system from booting and threw an error message.

But luckily I haven’t seen anything like that from any other manufacturer since then.


Thank you, @FunnyPossum & @aBav.Normie-Pleb! :smiling_face_with_three_hearts:

The thing is, i’ve read a few man pages / docs and watched a few videos. Years ago by now, i ran an i350 on Linux and used the SR-IOV virtual functions (VFs) in my home lab for VMs - iirc, that’s 8 per physical port, 32 VFs total. I’m not quite sure about it, but i think the i350 has an internal 10gig “backplane”-ish thing; this might be what they call VMDq(?).
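For reference, enabling VFs these days goes through the standard `sriov_numvfs` sysfs attribute rather than driver module parameters. A sketch, assuming a hypothetical interface name (`enp1s0f0` is a placeholder; check `sriov_totalvfs` for your card’s actual per-port limit):

```python
from pathlib import Path

def set_sriov_vfs(iface: str, num_vfs: int, sysfs_root: str = "/sys/class/net") -> None:
    """Enable num_vfs SR-IOV virtual functions on one physical port (needs root)."""
    dev = Path(sysfs_root) / iface / "device"
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # The kernel requires resetting to 0 before setting a new non-zero count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

# Example (placeholder interface name):
# set_sriov_vfs("enp1s0f0", 7)
```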

It’s all not presented very conveniently for home labs. I didn’t dive deep into the matter - obviously, if i had, this would be an XP report :wink: .

This SR-IOV 10gig “backplane” (for the VFs) needs to get its port-to-port(?) bandwidth from somewhere - this might be why the card is supposed to run with x4 lanes.

Preliminary conclusion: it doesn’t break, but performance might be degraded.

I feel like it might be worth mentioning that each Gigabit port carries up to 1 Gbit/s of payload per direction (the often-quoted 1.25 Gbit/s is the 8b/10b-coded line rate on the SerDes side), and it’s full duplex. If you communicate at full speed on all 4 ports both ways, that’s 4 Gbit/s of payload per direction - enough to just about saturate a PCIe 2.0 x1 link, which itself offers roughly 4 Gbit/s per direction after encoding; with PCIe protocol overhead on top, x1 becomes a small bottleneck in that case. That headroom is presumably why the card has a PCIe 2.0 x4 interface instead of x1. At least, that’s my understanding.