Linux, Intel X550-T2 10 GbE, and Jumbo Frames

I’ve upgraded the network connection between my workstation and my FreeBSD NAS. Both sides have an Intel X550-T2 (copper, 10 GbE) NIC, connected directly by a single Cat6a cable (no switch). These cards support 9014-byte jumbo frames. Each side has a statically (obviously) assigned, IPv4-only address.

With Windows on my workstation and FreeBSD 11 on the NAS, 9014-byte frames work perfectly. With Fedora Linux 27 on my workstation (latest release, all updates), 9014-byte jumbo frames cause the link to just keep trying to come up with no success. No pings, nothing. If I switch the MTU to 1500 bytes on the Linux side, no problem. As soon as I pick anything other than 1500 bytes, badness. Put it back to 1500 bytes, success. This is all very repeatable. :frowning:
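For reference, here is roughly how I am switching and testing MTUs on the Linux side; the interface name and addresses are placeholders, not my exact config:

```
# enp3s0 and 192.168.10.2 are placeholders
ip link set dev enp3s0 mtu 9000       # or 9014, or back to 1500
ip addr show dev enp3s0               # confirm the MTU and address actually took

# Verify a full-size frame really crosses the link:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 192.168.10.2
```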

I’ve never seen anything like this.

Anyone have any ideas to troubleshoot?

9014 is a strange number for an MTU. You usually set 9000, which describes the payload; the Ethernet header and/or VLAN tag then get added on top of that, making the on-wire frame 9018 or 9022 (if I remember correctly).

I’m not sure if that’s what’s causing your issue though…
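If I have the labelling right (my assumption about how Intel’s Windows driver counts things, not something I’ve verified), the breakdown looks like this:

```
# 9000  payload (what Linux calls the MTU)
# + 14  Ethernet header (6 B dst MAC + 6 B src MAC + 2 B EtherType) = 9014
#       (likely what the Windows "Jumbo Packet" dropdown is labelling)
# +  4  FCS                      -> 9018 on the wire
# +  4  802.1Q VLAN tag, if any  -> 9022
```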

9014 between Windows and FreeBSD works just fine. As I recall, on Windows, the jumbo frame sizes are in a dropdown.

For fun, I did just now try 9000 for Fedora <-> FreeBSD and it doesn’t work either. Running ifconfig periodically, I see the interface going up and down with the correct IP address and MTU size for a moment, then back to no address. Switch things back to 1500, everything works fine.
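In case it helps anyone following along, I am watching the flapping with roughly this (enp3s0 is a placeholder; ixgbe is the driver the X550 uses):

```
watch -n 1 'ip -s link show dev enp3s0'   # link state, MTU, error counters
dmesg -w | grep -i ixgbe                  # driver messages as the link flaps
```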

What’s odd is that, on some rare occasions, after playing around changing things (interface up/down, changing MTUs, whatever), I’ll get things to work at a higher MTU. But, as soon as I reboot, no luck.

I’ll keep messing around…


:man_shrugging: Try swapping the cable? You’re using Cat6A, right?

Other than that, all I can think of is drivers…

You don’t need jumbo frames. Don’t use them. That is my advice.

Solved.

Upgrading to Fedora 28, which brings a newer kernel, fixed this, so presumably the problem was patched somewhere in a later kernel release.
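For anyone landing here later, this is all I did to check what I was actually running before and after (enp3s0 is a placeholder):

```
uname -r              # running kernel version
ethtool -i enp3s0     # ixgbe driver version and NIC firmware
```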

And, certainly, I don’t need jumbo frames, but if you’re doing large transfers over 10GbE (which I’m doing), jumbo frames will easily give you another 25-50% bandwidth over non-jumbo frames (depending upon application/protocol).
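If anyone wants to reproduce the comparison, a rough sketch with iperf3 (addresses and interface are placeholders; run once at MTU 1500 and once at 9000):

```
# On the NAS:
iperf3 -s

# On the workstation, repeated at each MTU:
ip link set dev enp3s0 mtu 1500       # then again with mtu 9000
iperf3 -c 192.168.10.2 -t 30 -P 4     # 30 s, 4 parallel streams
```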

False…

Significant uptick in NFS performance…

Been suffering through this on CentOS for a while. Going to try the latest 4.x kernel again and see if it’s fixed.

Unfortunately, that user has been banned for 1000 years and cannot offer a rebuttal to your statement. However, could you please provide evidence of NFS performance with and without jumbo frames for comparison? Thanks.

At the moment I can’t use them, as I’ve backed the MTU out to 1500 until I can safely upgrade my NFS server to a 4.x kernel. Last time I tried that upgrade, I ran into issues with one of my drivers (a 16-port HBA).

Short version: large file moves to/from the fast array were ~650-700 MB/s vs ~900-1000 MB/s (without vs with jumbo frames, respectively), if memory serves.

Once I have things working again at 9000, I’ll re-test. I’m doing a bunch of other things right now (syncing/mirroring/reorganizing terabytes of data), so it might take a while.
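When I do re-test, it will be something along these lines (the mount point and sizes are placeholders, not my exact setup):

```
# /mnt/nas is a placeholder for the NFS mount; ~20 GB write, then read it back
dd if=/dev/zero of=/mnt/nas/jumbo-test bs=1M count=20000 oflag=direct status=progress
dd if=/mnt/nas/jumbo-test of=/dev/null bs=1M iflag=direct status=progress
```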

FWIW, there was an RHEL bug filed for this ages ago, but as far as I know the fix was never back-ported to the 3.x kernel line.

@esc BTW where did you buy your NICs? I’m seeing clones/fakes on Amazon unfortunately…