Clean Fedora install not defaulting to gigabit?

How is everyone? I recently started using the iperf3 tool to check my networking equipment’s performance. Much to my dismay, the results were capped at around 85 megabits, even though both systems were clean Fedora installs and both had gigabit NICs.

Upon further investigation I found the network device’s configuration file inside /etc/sysconfig/network-scripts/ and changed this line:

ETHTOOL_OPTS="autoneg off speed 100 duplex half"

to this:

ETHTOOL_OPTS="autoneg off speed 1000 duplex full"

on both systems. After rebooting, performance was all the way up to 850 megabits. Is this normal for most Linux distributions? I find it unusual that I would hit the same problem on two unrelated PCs built nearly a decade apart (an X99 motherboard and a ThinkPad T420).
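For anyone checking their own boxes: ethtool will show what the link actually negotiated, and can flip autoneg back on at runtime. A rough sketch (the interface name here is just an example; find yours with ip link):

# show negotiated speed/duplex and whether autoneg is on
sudo ethtool enp3s0 | grep -E 'Speed|Duplex|Auto-negotiation'

# re-enable autonegotiation without editing the ifcfg file
sudo ethtool -s enp3s0 autoneg on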



I encountered the same problem on Kubuntu some time ago, and it seems to be normal, at least on that distro. But I agree, it makes no sense at all.


Because most people don’t have gigabit network hardware?

My motherboard has a gigabit NIC, but my router/switch is only 10/100.

‘Auto’ exists.

Just did an iperf3 test on my 10 gig fiber network and things seem fine (though I’m wondering if those retries are an issue or something to be expected).

Client is Fedora 30 KDE spin
Server is OMV4 (Debian)

iperf3 -c <server address> -f g
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1012 MBytes  8.49 Gbits/sec  707    778 KBytes       
[  5]   1.00-2.00   sec  1.04 GBytes  8.92 Gbits/sec  644    710 KBytes       
[  5]   2.00-3.00   sec  1.03 GBytes  8.82 Gbits/sec  682    962 KBytes       
[  5]   3.00-4.00   sec  1.06 GBytes  9.09 Gbits/sec  280   1018 KBytes       
[  5]   4.00-5.00   sec  1.04 GBytes  8.97 Gbits/sec  577    508 KBytes       
[  5]   5.00-6.00   sec  1.05 GBytes  9.02 Gbits/sec  132   1004 KBytes       
[  5]   6.00-7.00   sec  1.05 GBytes  9.03 Gbits/sec  126    868 KBytes       
[  5]   7.00-8.00   sec  1.04 GBytes  8.91 Gbits/sec  335    847 KBytes       
[  5]   8.00-9.00   sec  1.05 GBytes  9.00 Gbits/sec  554    875 KBytes       
[  5]   9.00-10.00  sec  1.03 GBytes  8.83 Gbits/sec  545    727 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.4 GBytes  8.91 Gbits/sec  4582             sender
[  5]   0.00-10.00  sec  10.4 GBytes  8.90 Gbits/sec                  receiver
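For reference, the server side is just iperf3 in listen mode:

# on the server (the OMV box here) -- listens on TCP 5201 by default
iperf3 -s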

Here are the contents of the interface config file in /etc/sysconfig/network-scripts/ on the Fedora client, which were the default.

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp66s0f0
UUID=<XXX>
ONBOOT=yes
AUTOCONNECT_PRIORITY=10
ETHTOOL_OPTS="autoneg on"
ZONE=home

No idea what the equivalent on Debian is.
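If I had to guess, it’d be something like this in /etc/network/interfaces (needs the ethtool package installed; interface name copied from the Fedora config above as a placeholder, untested):

auto enp66s0f0
iface enp66s0f0 inet dhcp
    # run ethtool just before the interface comes up
    pre-up /sbin/ethtool -s enp66s0f0 autoneg on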

Happened to me as well with the F31 Plasma spin.

@Riotvan
Interesting. I was using the same spin as well.

@noenken
Did not realize I could just set it to auto-negotiate. I’m surprised the default config on Fedora 30 KDE is different from 31.

I think I had it on an F30 MATE spin as well, but I’m not sure.


I think most people do have gigabit Ethernet on their routers and switches these days. And even if it were a 100 Mbps switch, it would still autonegotiate.

I did have a router that required manual speed configuration. But that was in 2003, for IDSL, which I doubt anyone has even heard of lately.


It’s a bug on Fedora. CentOS works fine, as does Ubuntu.

No kidding. This has been a rough release. I also had to change the kernel args in the grub config (falling back to cgroups v1) to fix Docker:

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
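(And to undo it later, once Docker supports cgroups v2, the same grubby call takes --remove-args:)

sudo grubby --update-kernel=ALL --remove-args="systemd.unified_cgroup_hierarchy=0"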


Yeah, this is a bug in some kernels or something. I tried setting it manually to gig on a release of F30 after noticing it was either negotiating or somehow defaulting to 100 meg.

Caused massive issues. Left it alone and it got fixed in an update and just autonegotiated gig properly. YMMV.

Dude. Most people have had gig for a long time now (including any home user who got a new router in the past decade).

Many people have gig plus wireless APs/routers, and you need a gig switch to plug them into (unless you want to choke them), or they even come with one. Wireless N (which is >100 megabit) has been a thing for over 10 years now.

If you’re still rocking 100 meg, you’re killing your data transfers between machines, and a gig switch is under 50 bucks. I can get a brand-new 5-port unmanaged gig switch for $19 even in Australian pesos, with Australian tax.

Gig Ethernet is still massively limiting if you have SSDs in your machine, but 10 gig is still expensive :-1:

Full autonegotiation is not a required part of the 100 megabit and earlier standards. It’s only mandatory (vs. optional) on gigabit Ethernet and up (supporting it is required to be 1000BASE-T compliant). Many devices do support it at 100 megabit, but not all.

Some Cisco 100 meg and older hardware, for example, is strictly standards-compliant and doesn’t do full autonegotiation, because it wasn’t required as part of the 100 megabit Ethernet standard.
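If you want to see whether the far end is actually participating, ethtool reports what the link partner advertised; roughly (interface name is an example again):

# only populated when autonegotiation actually happened
sudo ethtool enp3s0 | grep -A4 'Link partner'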

That said, this is most likely due to the Fedora/Linux bugs described above :slight_smile:

ref: https://en.wikipedia.org/wiki/Autonegotiation

(yeah, I’ve been bitten by autonegotiation on 100 meg before :smiley: )

Probably for Australia. In the US it’s silly not to have a hard look at it.

Single-port SFP+ PCIe NIC: $30

Finisar Transceiver: $8

OM4 LC cable: cost varies with length.
On Amazon, CableMatters is good for patch cables. FiberCablesDirect has some good options for long runs. OM4 cable “supports 10 Gigabit Ethernet at lengths up to 550 meters and it supports 100 Gigabit Ethernet at lengths up to 150 meters,” meaning you don’t have to run new cable to do an upgrade.

If you don’t want to mess around with direct/round-robin connections and want a switch, MikroTik is the most affordable at around $130. It should be noted it is NOT consumer-friendly, and reading up on how to set it up is mandatory.

If you don’t mind direct/round-robin connections and have a real need for speed, plus motherboards/CPUs that can deal with it, then even QSFP+ 40 Gb PCIe cards and transceivers seem not much more expensive than 10 Gb right now, though I haven’t done my research on parts compatibility.

I’m talking new.

NICs - yeah they’ve come down a lot. But 10 GbE switches are still expensive to buy new for a home user.

Ex-datacentre stuff is all well and good but it is normally noisy AF due to fans, etc.

It’s getting there for sure though. Slowly.

But yeah, point being, gig-E is cheap enough now that it’s a no-brainer. If you’re running 100 megabit, you’re really screwing yourself over in 2019, unless you really don’t push traffic between local hosts.

edit:

Hmm, wow. Didn’t see those before. Cheers for the link, I might need to invest… :smiley:

I haven’t had any heat issues with my SFP+ cards as far as I can tell; normal case airflow seems to be enough. I don’t even put little 40mm fans on them like I do with my LSI HBA cards. I think they only really heat up when you use copper direct-attach cables, which I have no reason to bother with.

Incidentally, the switch I linked is passive with no fans, so be careful where you put it. :wink:

Yeah, it looks perfect for my application though. A couple of hosts connected to a NAS.

The NAS will currently outrun 1 GbE quite easily (ZFS with 2 mirrors), even more so if I add cache to it. It’s only because it’s currently on 1 gig that I’m not using it as a VM backing store…

edit:
lol. Welcome to amazon.com.au tax. Yeah, I can just buy from amazon.com instead, but Aussie pricing is lolz.

Shipping from your original link to Australia is $8. And they wonder why we don’t buy local…

What I’ve also been doing recently is setting the DOCKER_HOST environment variable to ssh://user@SOME_NOT_FEDORA_MACHINE; then, assuming you have your ssh key in authorized_keys on the server, you can remote-control the Docker engine as if it were local. The CLI fully works; it’s just the engine that’s broken.
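A minimal sketch of the setup (hostname is a placeholder, and your key needs to be in ~/.ssh/authorized_keys on that machine):

# point the local docker CLI at a remote engine over ssh
export DOCKER_HOST=ssh://user@SOME_NOT_FEDORA_MACHINE
docker ps    # now runs against the remote engine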

I found out by accident that you can do that when trying to make Docker work in WSL (which kinda sorta works the same way, except it exposes a TCP port on localhost… which you don’t want to do over the network; better to use ssh).

I’d quite like it even if Docker just worked on Fedora 31, because I don’t use up my phone’s bandwidth and notebook’s battery as much when I’m not home. Otherwise I’d probably prefer to run it locally too.

Curious if there are any pitfalls to forcing cgroups v1 on Fedora >= 31. I used it for a while and didn’t notice anything breaking. Currently I’m not using that kernel arg anymore, though.
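(For anyone wanting to check which hierarchy a given boot ended up on:)

# prints cgroup2fs on a v2/unified boot, tmpfs on v1
stat -fc %T /sys/fs/cgroup/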