4x lanes of Gen4 will give you at most ~63 Gbit/s of bandwidth.
There is some overhead, but there is probably enough bandwidth available for two 25 Gbit ports to operate at the same time.
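If it helps, here is the back-of-the-envelope math behind that ~63 Gbit figure. This is just a rough sketch: the 128b/130b encoding factor is the only overhead accounted for, and real throughput will be a bit lower still.

```python
# Back-of-the-envelope PCIe bandwidth math. The 128/130 factor is the Gen3/Gen4
# line encoding; real-world throughput is lower still due to TLP/DMA overhead.

def pcie_usable_gbit(lanes: int, gt_per_s: float, encoding: float = 128 / 130) -> float:
    """Approximate usable bandwidth of a PCIe link in Gbit/s."""
    return lanes * gt_per_s * encoding

gen4_x4 = pcie_usable_gbit(4, 16.0)   # ~63 Gbit/s
nic_ports = 2 * 25                    # line rate of two 25G ports

print(f"Gen4 x4 link:  ~{gen4_x4:.1f} Gbit/s usable")
print(f"2x 25G ports:   {nic_ports} Gbit/s line rate")
print(f"Headroom:      ~{gen4_x4 - nic_ports:.1f} Gbit/s before protocol overhead")
```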
How well it works is going to depend on implementation.
I have had decent luck running more recent Intel NICs below their max PCIe bandwidth. They seem to figure out that they have less bandwidth available to them and distribute it well.
I have had less luck with older adapters. Years ago I bought a set of ~2008-era 10 Gig Intel 82598EB NICs. The cards were 8x Gen2 cards, and they worked splendidly when all 8 lanes were connected at Gen2 speeds. Years later I tried to repurpose them in a 4x slot, and it did not go well. In theory they should have had more than enough bandwidth for a 10 Gig port at ~16 Gbit, but they seemed really confused at not having all 8 lanes available to them.
From memory, one of the directions (downstream or upstream, can’t remember which) worked perfectly at near full speed, whereas the other direction only functioned intermittently and when it did, was very slow.
So I guess my take is, it depends.
I think most modern NICs are like the more recent Intel NICs I have tested, in that they just seem to figure it out and use the bandwidth they have available to them effectively, but I haven’t tested all modern NICs, so there is always the risk that some of them shit a brick like my old 82598EBs did.
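If you want to check what link a card actually negotiated (on Linux), the kernel exposes it in sysfs. Rough sketch below; the device address is just a placeholder, swap in whatever `lspci -D` shows for your NIC:

```python
#!/usr/bin/env python3
"""Compare the PCIe link a device negotiated against what it is capable of,
using the sysfs attributes the Linux kernel exposes for PCIe devices."""

from pathlib import Path

BDF = "0000:01:00.0"  # placeholder address -- substitute your NIC's from `lspci -D`

dev = Path("/sys/bus/pci/devices") / BDF

def attr(name: str) -> str:
    return (dev / name).read_text().strip()

print("negotiated:", attr("current_link_speed"), "x" + attr("current_link_width"))
print("capable of:", attr("max_link_speed"), "x" + attr("max_link_width"))
```

If the negotiated width or speed comes back lower than the card is capable of, that’s when the behavior I described above starts to matter.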
I also noticed you were discussing a chipset slot.
Be aware that all bets are off with chipset slots.
They share heavily congested upstream bandwidth through an integrated PCIe switch of some sort.
My experience with AMD chipsets is that they have generally worked pretty well as long as you don’t overload them. Recent consumer models have 4x Gen4 lanes connecting the chipset to the CPU, and as long as you don’t push too hard (keeping in mind that these lanes are shared, so don’t load the integrated devices on the motherboard too hard either) the experience can be quite decent. There will always be a latency penalty with chipset lanes, but for something like a NIC, the latency over the network will be orders of magnitude greater, so the difference won’t be perceptible.
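A quick tally like the sketch below is usually enough to sanity-check whether a chipset uplink is going to be the bottleneck. The device list and numbers here are made-up examples, not anything from a specific board:

```python
# Toy oversubscription tally for a chipset uplink. The device list and numbers
# are made-up examples -- substitute whatever actually hangs off your chipset.

UPLINK_GBIT = 4 * 16.0 * 128 / 130   # AMD-style Gen4 x4 uplink, ~63 Gbit/s usable

behind_chipset = {
    "25G NIC port 0": 25,
    "25G NIC port 1": 25,
    "2x SATA SSDs":   12,
    "USB 3.x gear":   10,
}

demand = sum(behind_chipset.values())
print(f"Worst-case demand: {demand} Gbit/s vs ~{UPLINK_GBIT:.0f} Gbit/s uplink")
if demand > UPLINK_GBIT:
    print("Oversubscribed -- only a problem if everything peaks at the same time.")
```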
I have had a less positive experience with the last few generations of Intel chipsets when it comes to chipset lanes. Instead of connecting to the CPU using standard PCIe lanes, they use a special-purpose “DMI” link that, while based on PCIe, doesn’t appear to work as well (at least not in my testing).
I don’t fully understand the distinction, but my high-level understanding is that AMD uses standard PCIe (Gen4 x4) to connect the chipset, and has some form of PCIe switch (like a PLX device) integrated into the chipset to connect all of the on-board devices and provide chipset lanes. Intel’s DMI links are essentially PCIe lanes, but Intel has done something proprietary to them to encapsulate and route all sorts of stuff (like legacy interrupts and other things), and I don’t think they use a straight-up PCIe switch equivalent like AMD does. The result can be that the performance is pretty bad in many applications.
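One rough way to see what actually sits between a given device and the root complex (on Linux) is to resolve its sysfs path, which lists every bridge in the chain. It’s only a heuristic: on AMD the chipset shows up as extra switch hops, but Intel’s DMI link does not appear as a separate bridge, so cross-reference against the board’s block diagram. The address below is a placeholder:

```python
#!/usr/bin/env python3
"""Print the chain of PCIe bridges between a device and the root complex by
resolving its sysfs path. Heuristic only: extra hops usually mean the device
sits behind a chipset or PLX-style switch rather than directly on CPU lanes,
but Intel's DMI link does not show up as a separate bridge here."""

import os
import re

BDF = "0000:01:00.0"  # placeholder -- substitute your device's address

path = os.path.realpath(f"/sys/bus/pci/devices/{BDF}")
# Pull the bus/device/function components out of the resolved path.
hops = re.findall(r"\b[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]\b", path)

print("topology:", " -> ".join(hops))
print("bridges above the device:", max(len(hops) - 1, 0))
```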
The chipset-connected PCIe lanes on my Xeon E-2314 (Rocket Lake) on my SuperMicro X12STL-F motherboard (C252 chipset) are absolutely horrible, bordering on unusable.
Now granted, these are 8 lanes of DMI Gen3, vs. I think 8 lanes of DMI Gen4 on the Z790, so the Z790 should probably be slightly better, but still, just be aware that chipset lanes can be rather unimpressive performance-wise.
I don’t know why Intel’s DMI setup has worked worse for me than AMD’s PCIe setup. On paper DMI has quite a decent amount of bandwidth, and on my SuperMicro X12STL-F I am using very little of it (the only on-board device I am using is a single Gigabit Ethernet port), but the difference has been quite huge in my application. I originally tried running a mirrored pair of Optane drives on this board, one on the primary M.2 port that connects to the chipset, and one on a secondary slot that connects through the CPU, and performance was absolutely awful until I switched them both to proper CPU lanes using risers.
Here is the diagram for this board:
So when one of the Optanes was in the M.2 slot off the chipset, and one was in SLOT7 connected to the CPU, the performance was awful. These weren’t even high end Optanes. We are talking 2x Gen3 each. In this configuration I got like a quarter of the expected performance.
But when I moved the Optane drive from the chipset m.2 slot to a second riser in SLOT5, they operated as expected at full speed.
In this test, almost nothing else on the chipset was loaded up except maybe the BMC.
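For what it’s worth, a crude way to reproduce that kind of comparison is a direct sequential read against each drive. This is Linux-only, needs root for raw block devices, and the device path is a placeholder; anything serious should be measured with a proper tool like fio instead.

```python
#!/usr/bin/env python3
"""Crude sequential-read check using dd with O_DIRECT so the page cache does
not flatter the numbers. For real benchmarking, use fio or similar."""

import subprocess
import time

DEV = "/dev/nvme0n1"   # placeholder -- substitute the drive you want to test
COUNT_MIB = 4096       # read 4 GiB

start = time.monotonic()
subprocess.run(
    ["dd", f"if={DEV}", "of=/dev/null", "bs=1M", f"count={COUNT_MIB}", "iflag=direct"],
    check=True,
)
elapsed = time.monotonic() - start
print(f"~{COUNT_MIB / 1024 / elapsed:.2f} GiB/s sequential read from {DEV}")
```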
So just keep in mind that using chipset slots can be unpredictable, and can cause lower performance than expected, especially on Intel implementations over DMI.