Network Bonding - Ubuntu 20.04 Server

Good day all.

As per the topic title, I am looking for help and advice with regard to network bonding within Ubuntu 20.04 Server.

I am very much a linux n00b so please take it easy on me. :roll_eyes:

Due to having a large household I decided to build a media server, using Ubuntu and Plex Media Server.

My Whitebox Server Specs

  • Supermicro X8DTL
  • 2x Intel Xeon X5570
  • 96GB ECC memory
  • Intel Pro/1000 quad-port NIC
  • RAID controller (can’t remember make/model)
  • NVIDIA Quadro P600
  • 4x 4TB WD Red (in hardware RAID)

This system has been running fine for a number of years, as we only had a few devices on the network streaming at any given time.

There are now multiple devices on the network that can and do end up streaming simultaneously, and the server holds a mix of x264, x265, 1080p and 4K video content.

Within the last few months I have also added Lancache into the mix.

Since then I have noticed Plex pausing and buffering, and I am getting complaints from the wife and kids.

Lancache seems to be running OK, apart from the fact that transfer rates from the server are not stable and are very often way below the expected speeds.

On the Intel quad NIC, I currently only have one active gigabit port.

As such, I am bottlenecked by the single network connection to and from the server.

As I have the Intel Pro/1000 quad NIC and am only using a single port, I want to use all four ports and bond them together to increase the server’s network bandwidth.

I know that the network settings for Ubuntu Server 20.04 are controlled via /etc/netplan/00-installer-config.yaml.

But I do not know how to adjust it for bonding, or which bonding mode would be best suited to my needs.

Here is my current /etc/netplan/00-installer-config.yaml config file

# This is the network config written by 'subiquity'
network:
  ethernets:
    enp3s0f0:
      addresses:
      - 192.168.1.204/24
      gateway4: 192.168.1.254
      nameservers:
        addresses:
        - 192.168.1.254
    enp3s0f1:
      addresses:
      - 192.168.1.205/24
      gateway4: 192.168.1.254
      nameservers:
        addresses:
        - 192.168.1.254
    enp4s0f0:
      addresses:
      - 192.168.1.206/24
      gateway4: 192.168.1.254
      nameservers:
        addresses:
        - 192.168.1.254
    enp4s0f1:
      addresses:
      - 192.168.1.207/24
      gateway4: 192.168.1.254
      nameservers:
        addresses:
        - 192.168.1.254
  version: 2

After searching around the forum and Google, below is my proposed new configuration file.

network:
  bonds:
    bond0:
      interfaces:
      - enp3s0f0
      - enp3s0f1
      - enp4s0f0
      - enp4s0f1
      parameters:
        mode: balance-rr
  ethernets:
    enp3s0f0: {}
    enp3s0f1: {}
    enp4s0f0: {}
    enp4s0f1: {}
  version: 2
  bridges:
    br0:
      addresses:
        - 192.168.1.204/24
        - 192.168.1.205/24
        - 192.168.1.206/24
        - 192.168.1.207/24
      dhcp4: false
      gateway4: 192.168.1.254
      nameservers:
        addresses:
          - 192.168.1.254
        search: []
      interfaces:
        - bond0

I must admit at this point in time I do not fully understand the different bond modes.

Can you please advise?

Thanks for your time.

Best Regards.

I would say mode 4 (802.3ad) would suit your requirements best. But I’m no expert. This mode would aggregate all 4 interfaces so they act as one, increasing your overall throughput (each individual stream is still limited to a single link, but different clients can be spread across different links).

I believe your router/switch would have to support 802.3ad for this to work though.

Your proposed config uses mode 0 (balance-rr), which will round-robin the traffic, i.e. send packets out of each interface in turn. It spreads load across the links, but the packet reordering it causes is not as good for overall throughput on a single stream.
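If you do go 802.3ad, the netplan file would look something like the sketch below. This is only a rough example based on the interface names and the .204 address from your current file (adjust as needed); unless you plan to run VMs on the box you should not need the bridge at all, as the address can sit directly on the bond:

network:
  version: 2
  ethernets:
    enp3s0f0: {}
    enp3s0f1: {}
    enp4s0f0: {}
    enp4s0f1: {}
  bonds:
    bond0:
      interfaces:
        - enp3s0f0
        - enp3s0f1
        - enp4s0f0
        - enp4s0f1
      addresses:
        - 192.168.1.204/24
      gateway4: 192.168.1.254
      nameservers:
        addresses:
          - 192.168.1.254
      parameters:
        mode: 802.3ad
        # Assumed values below - tune them to match your switch's LAG settings
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4

You can test a new config with sudo netplan try, which rolls back automatically if you lose connectivity, and then make it permanent with sudo netplan apply.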


BigBlueHouse: Thank you for your reply.

I completely forgot to list my network switch :disappointed:.
The network switch in question is a Netgear ProSafe Plus JGS524PE.
I know that the JGS524PE supports Link Aggregation Groups (LAG), but I don’t know if it supports 802.3ad; time for me to go and read the spec sheet for it.

Thanks again.


802.3ad is LACP (Link Aggregation Control Protocol). I’m pretty sure your switch supports it.
https://kb.netgear.com/000053559/How-do-I-set-up-an-LACP-LAG-between-a-Smart-Managed-Plus-switch-and-a-QNAP-NAS

You could also just do mode 5 or 6 on your Linux host, which does much the same thing, but the switch does not need to be configured for LAG in any way (the ports just need to be identical, either the same access VLAN or trunk config).
https://www.ibm.com/support/knowledgecenter/en/linuxonibm/com.ibm.linux.z.l0wlcb00/l0wlcb00_bondingmodes.html
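If you do try mode 5 or 6, the only part of the netplan file that really changes is the bond’s parameters. A rough sketch of just the bonds: section, assuming the same interface names as your file (the addresses, gateway and nameservers would go on bond0 as usual):

  bonds:
    bond0:
      interfaces:
        - enp3s0f0
        - enp3s0f1
        - enp4s0f0
        - enp4s0f1
      parameters:
        mode: balance-alb            # mode 6; use balance-tlb for mode 5
        mii-monitor-interval: 100

Mode 6 (balance-alb) balances both transmit and receive traffic, while mode 5 (balance-tlb) only balances transmit.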

xradeon: Thank you for your reply and links.

I have been looking over the datasheets and user guides for the JGS524PE.
It clearly states that it supports Link Aggregation Groups (LAG), but I did find one document (quite old) stating that it does not support 802.3ad LACP (Link Aggregation Control Protocol).
I have submitted a support ticket to Netgear asking if the current firmware supports 802.3ad (LACP).

I will, however, look into mode 5 or 6 as you stated for the time being.

** edit
I’ve just looked over the datasheet again:

After re-reading the datasheet, it seems that:
GS116Ev2, JGS516PE, JGS524Ev2, and JGS524PE support static manual LAGs only.
GS750E supports static manual LAGs and LACP.
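From what I can tell, a static (non-LACP) LAG on the switch side is usually paired with balance-xor (mode 2) on the Linux side, so if Netgear confirm there is no LACP support I will probably try something like this in the bond’s parameters (untested, just my reading of the docs so far):

      parameters:
        mode: balance-xor
        transmit-hash-policy: layer3+4
        mii-monitor-interval: 100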

Thanks again