Docker network speed

Hi guys!

I’ve been measuring the speed between 2 containers with iperf3 (1 container on Unraid, and 1 container on a Synology DS1515+ NAS).
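
For reference, the test itself is nothing fancy, roughly this (the server IP here is just a placeholder for my actual setup):

# on the container acting as server
iperf3 -s

# on the other container, pointing at the server container's IP
iperf3 -c 192.168.1.20 -t 30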

Both the Unraid server and the Synology NAS have 4x 1-gigabit NICs bonded with 802.3ad, giving a 4000 Mbit bond on each machine, connected to a smart switch correctly set up for link aggregation.

Running ethtool on bond0 on both Unraid and the Synology NAS reports 4000 Mbit/s.
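
Side note: besides ethtool, the bonding driver itself also exposes the mode and transmit hash policy, e.g.:

cat /proc/net/bonding/bond0
# look for the "Bonding Mode" line (802.3ad) and the "Transmit Hash Policy" line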

Running ethtool on the Docker hosts against each container’s interface reports 10 000 Mbit/s, so that doesn’t look like it will be a bottleneck.

Looking at “dmesg | grep eth0” inside the container reveals that the NIC is only 1000 Mbit/s.

When I run iperf3 between the containers I only get around 944 Mbit/s.

Is there some way to control what network speed the container is given on its NIC?

Regards

By default, Docker containers use a software bridge between the container and the real network device. Maybe the overhead added by this software layer is degrading the performance?

You can try the macvlan driver so your network sees the container as physical hardware and communicates directly with it.

Okay so I tried your suggestion.

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway 192.168.1.1 -o parent=bond0 br0
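
And then I start a container on it roughly like this (the image and IP are just examples):

docker run --rm -it --network br0 --ip 192.168.1.50 alpine sh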

My bond0:
ethtool bond0
Settings for bond0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 4000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

My newly created macvlan:
ethtool br0
Settings for br0:
Link detected: yes

And inside the container:
dmesg | grep eth0
e1000e 0000:00:19.0 eth0: (PCI Express:2.5GT/s:Width x1) 08:62:66:80:97:aa
e1000e 0000:00:19.0 eth0: Intel® PRO/1000 Network Connection
e1000e 0000:00:19.0 eth0: MAC: 11, PHY: 12, PBA No: FFFFFF-0FF
e1000e 0000:00:19.0 eth0: removed PHC
e1000e 0000:00:19.0 eth0: (PCI Express:2.5GT/s:Width x1) 08:62:66:80:97:aa
e1000e 0000:00:19.0 eth0: Intel® PRO/1000 Network Connection
e1000e 0000:00:19.0 eth0: MAC: 11, PHY: 12, PBA No: FFFFFF-0FF
bond0: Enslaving eth0 as a backup interface with a down link
device eth0 entered promiscuous mode
e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

It still only gives me 1000 Mbit. Strange. Is it correct of me to create the macvlan with “-o parent=bond0”? It seems logical, since that is where the 4000 Mbit is aggregated.
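
For what it’s worth, the parent can be double-checked with:

docker network inspect br0
# the parent interface should appear under "Options"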

I also notice that the NIC in the container is named e1001, which is also the real NIC’s name on the physical host.

Another thought: the gateway is my actual router, and that one is only 1000 Mbit. Could that affect things? The two machines running the containers are connected to the same smart switch with the link aggregation, so I really don’t need to involve my physical router for this, maybe…

This is how link aggregation works. You can’t get 4 Gbps between two hosts; it will always only be as fast as a single link. You can get 4 Gbps total between multiple hosts, but any single host only has the speed of a single link.
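
The behaviour also depends on the bond’s transmit hash policy: with the default layer2 policy the outgoing link is picked from the source and destination MAC addresses, so the same two hosts always land on the same 1 Gbit link. You can check it with something like:

cat /sys/class/net/bond0/bonding/xmit_hash_policy
# usually "layer2 0" by default; layer3+4 hashes on IP addresses and ports
# instead, so multiple parallel flows between the same hosts can spread
# across links, but a single TCP stream still uses one link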

So if I understand you correctly, I’d have to have 4 containers connecting at the same time to test my full 4 Gbps bandwidth? I suppose I get the logic in that.
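
Or, if I’ve understood the hashing part right, maybe several parallel streams from one client could exercise more links, something like:

iperf3 -c 192.168.1.20 -P 4

though I guess whether those actually spread over more than one link depends on the hash policy, not on iperf3.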

However, is there any way for the container to actually get a NIC rated at more than 1 Gbit? I’ve looked for a setting for that. Since it is software based, it seems logical that you should at least be able to toggle between the usual standards: 10/100/1000/10 000.

Yep, pretty much.

If you’re bridging to a physical NIC then you’re limited to whatever that is. For a local container-to-container network I don’t see why you couldn’t have an emulated 10 Gb NIC, assuming that exists, but I don’t really know about that.
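
If you want to convince yourself that the 10 000 Mb/s ethtool reports on the container side isn’t a real cap, run iperf3 between two containers on the same host; there the limit is essentially CPU and memory. A rough sketch, using plain Alpine and installing iperf3 on the fly:

docker network create testnet
docker run -d --name iperf-server --network testnet alpine sh -c "apk add --no-cache iperf3 && iperf3 -s"
docker run --rm --network testnet alpine sh -c "apk add --no-cache iperf3 && iperf3 -c iperf-server"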
