
NIC Teaming, aggregation, bonding, load balancing, multichannel - please clear up these terms!



Here's what I know:

  1. Samba multichannel
    LEVEL - Samba protocol
    SPEED IMPROVEMENT: all (1 to 1, 1 to many)
    REQUIRES: Samba 3 or 4 on both connected machines, a multicore processor, and NICs which…
    This kind of aggregation sits purely at the protocol (Samba) level. This is what your video was about.

  2. Bonding (Linux) or NIC Teaming (Windows Server)
    LEVEL - OSI layer 3 (?)
    SPEED IMPROVEMENT: 1 (server) to many (clients) only
    REQUIRES: Linux, BSD, or Windows Server and a switch with LACP

  3. Intel/Broadcom NIC Teaming in Load Balancing mode
    LEVEL - NIC driver
    REQUIRES: Intel or Broadcom drivers; it can only link NICs from a single vendor
    But I don't know whether it requires a switch with LACP, and I don't know whether it only load-balances traffic or also improves a 1-to-1 connection. There are no tests of it, just tutorials.

Can you make a video or article about this stuff and explain the requirements? I think many of us (not only server admins with traffic problems) are thinking about improving LAN/Samba/iSCSI bandwidth from our home NASes and SANs without spending money on SFP or 10Gb RJ45 equipment.


I’m sure someone will give you a more detailed response, but here’s a rough idea.

Samba multichannel is still experimental unless they released it very recently in 4.7. Even if it is released soon, I would give it a point release or two to mature before using it.

There are several flavors of bonding. 802.3ad is generally the best-performing option but requires LACP configuration on the switch. It is still more like adding more lanes to a highway than increasing the speed limit: you won't see higher speeds on single SMB transfers, for instance. Bonding can operate at layer 2 and/or layer 3. How to configure it differs depending on your OS and switch.
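
To make the "more lanes, not a higher speed limit" point concrete, here's a rough Python sketch of the idea. The real transmit hash is an XOR over header fields inside the kernel or driver, not Python's hash(), and the interface names and addresses here are made up; the point is just that each flow is hashed once onto a single member link.

    # Toy model of how a bond picks an egress link per flow. This is an
    # illustration of the concept, not the kernel's actual hash.
    LINKS = ["eth0", "eth1", "eth2", "eth3"]   # pretend 4 x 1 GbE bond

    def pick_link(src_ip, dst_ip, src_port, dst_port):
        """Pin a flow to one member link based on a hash of its headers."""
        flow_key = (src_ip, dst_ip, src_port, dst_port)
        # Python's hash() is randomized per run, but it is stable within a
        # run, which mirrors "same flow, always the same link".
        return LINKS[hash(flow_key) % len(LINKS)]

    # One big SMB copy is one flow, so it rides one link no matter how
    # many links are in the bond.
    print(pick_link("10.0.0.2", "10.0.0.10", 51515, 445))

    # Many clients hitting one server get spread across the links.
    for client in range(2, 8):
        print(pick_link(f"10.0.0.{client}", "10.0.0.10", 50000 + client, 445))

So the bond raises aggregate capacity, but any single transfer is still capped at one link's speed.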

NIC Teaming is kind of the new and improved bonding. You have some more flexibility with combining different types of interfaces. I tried it on CentOS a while back and ran into some issues. I stuck with bonding because it works fine. Someone else can probably tell you more about it. I don’t believe it can give you a fundamental performance improvement over an 802.3ad bond, but I could be wrong.


My general advice is to set up an 802.3ad bond on your NAS if your switch supports it, but don't bother on your workstations/desktops. Also, I recommend sticking with NAS protocols (SMB, NFS, etc.) over iSCSI until you have 10GbE in place on a dedicated LAN/VLAN with jumbo frames.

If you do want to go with 10GbE in the future, get used SFP+ NICs on eBay, buy Macroreer transceivers/DACs on Amazon, and look at Ubiquiti switches. You can set it up for under $1k.


To address your title:

Aggregation is generally synonymous with bonding, although it could be used as an abstract term for either bonding or teaming.

Bonding and teaming I addressed above.

Load balancing is one thing that bonding and teaming can do. Failover is another thing they can do.

Multichannel in this context is an SMB capability in Windows. It is being actively developed in Samba 4.


Thanks for the response. One thing bothers me about NIC Teaming on CentOS: which vendor's NICs did you use, and how did you get the Intel/Broadcom proprietary drivers for Linux? Wget from the official site? Or some kind of hacked Windows driver version for Linux? Because in Windows you have to download the NIC vendor's drivers to get the special panel with those functions; they won't run on the generic Microsoft drivers.


In CentOS, you can configure teaming from the installer or later through NetworkManager. I was using onboard Intel NICs. I have no idea to what extent, if any, it's hardware accelerated or whether it requires 3rd party drivers in certain cases. I don't think it does. You can team NICs from different vendors together. It's pretty flexible and seemed to be agnostic to the hardware.

That said, I have installed 3rd party NIC drivers before. You just have to navigate through the vendor’s site until you find the instructions and download link. It was an annoying process for me.

My Windows knowledge is very limited, but I believe that certain features (possibly including teaming) might be limited to Server or Pro licenses. I have heard that SMB multichannel can be picky about hardware and may require specific drivers/NIC config, so maybe teaming/multichannel are used interchangeably in Windows. I'm not sure.


Just to add to this: bonding on layer 2 uses the source and destination MAC addresses to generate a hash that determines which link traffic will go down, so all traffic from the same source to the same destination always goes through the same link (this is why link aggregation is 1 to many only). What this means in practical terms is that layer 2 bonding is failover only, because the source and destination are always the host and the first switch, or first device connected, so the hash is the same for all traffic.

For load balancing, where you can see a performance difference, you want to use layer 3 bonding, which uses the source and destination IP addresses. As these will be unique for each source and destination host pair, you will get a total bandwidth increase proportional to the number of host pairs you have on the network (links permitting).

This usually needs to be configured on both the host and the switch. On Linux the default is to use layer 2, whereas Windows seems to figure it out automatically and will use layer 3 if the network supports it.

Also, on Linux the layer 3 mode is actually layer 3+4, which means it uses a hash of the IP and port, so you can actually get improved performance between two hosts as long as you are using different services (SMB and HTTP, for example).
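
To illustrate the difference between the hash policies, here is a simplified Python sketch. The formulas are loosely modelled on how the layer 2 and layer 3+4 policies are usually described, but they are simplified and not the exact kernel math; the addresses and ports are made up.

    # Simplified comparison of layer 2 vs layer 3+4 hash policies over a
    # two-link bond. Not the exact kernel formulas.
    N_LINKS = 2

    def layer2_hash(src_mac, dst_mac):
        # Only MAC addresses feed the hash. Between a host and its first
        # hop these never change, so every frame picks the same link.
        return (int(src_mac[-2:], 16) ^ int(dst_mac[-2:], 16)) % N_LINKS

    def layer34_hash(src_ip, dst_ip, src_port, dst_port):
        # IPs and ports feed the hash, so two different services between
        # the same pair of hosts can land on different links.
        ip_bits = sum(int(o) for o in src_ip.split(".")) ^ sum(int(o) for o in dst_ip.split("."))
        return (ip_bits ^ src_port ^ dst_port) % N_LINKS

    # Same two hosts, two services (SMB on 445, HTTP on 80):
    print(layer2_hash("52:54:00:aa:bb:01", "52:54:00:aa:bb:02"))   # always the same link
    print(layer34_hash("10.0.0.2", "10.0.0.3", 50001, 445))        # SMB stream -> one link
    print(layer34_hash("10.0.0.2", "10.0.0.3", 50003, 80))         # HTTP stream -> can be the other link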


As far as I know, link aggregation, bonding, and teaming are all interchangeable terms for the same thing. On Linux the bonding module is independent of the NIC drivers and can be configured with any combination of NICs, whereas on Windows it is part of the driver.


Well, this is the thing I want someone to clarify, because I suspect drivers have a big influence on what kind of bonding you can use. Bonding NICs from two different vendors and bonding NICs at the driver level may be different things. But I don't know that for sure; I would gladly see some tests.

Exactly. Load balancing, as the name says, is optimal for one-to-many connections, and I don't think it gives any improvement for one-to-one connections the way multichannel does, for example.

Yeah, LACP functionality, which most decent managed switches have.

Anyway, don't you agree that this is an interesting topic for the L1T guys to make a video about?


Not really. Link aggregation is kind of a group term for a bunch of different standards; every bonding method supports the proper standards, and some have their own non-standard types as well (Linux's round-robin mode, for example). There is no difference in performance or functionality between the Linux bonding module and the Windows Intel drivers; they're all doing the same thing. The main difference is that on Linux you can bond anything, whereas on Windows you're limited to what the drivers let you do, because Windows does not have any built-in bonding functionality.

LACP does not configure which hash method is used; that has to be configured independently of the link aggregation mode. The hash mode is important, as it determines the kind of functionality a teamed connection can have (see the rough sketch after this list). Basically:
Layer 2 (MAC address hashing) = Fail over only
Layer 3 (IP address hashing) = Fail over and load balancing with a maximum bandwidth of one link per host pair
Layer 3 + 4 (IP address and port hashing) = Fail over and load balancing with a maximum bandwidth of one link per stream.
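
To put rough numbers on that, here is a back-of-the-envelope sketch in Python. It assumes an ideal case (a 4 x 1 GbE bond and a hash that spreads flows perfectly evenly), which real traffic rarely matches, and the layer 2 case simply follows the "one MAC pair to the first hop" reasoning above.

    # Ideal-case throughput ceilings for a bond of 1 GbE links; assumes
    # the hash spreads flows perfectly evenly, which real traffic rarely does.
    LINK_SPEED_GBPS = 1
    N_LINKS = 4

    def max_throughput_gbps(hash_mode, n_host_pairs, streams_per_pair=1):
        if hash_mode == "layer2":
            # per the reasoning above: one MAC pair to the first hop = one link
            return LINK_SPEED_GBPS
        if hash_mode == "layer3":
            # at most one link per host pair
            return min(n_host_pairs, N_LINKS) * LINK_SPEED_GBPS
        if hash_mode == "layer3+4":
            # at most one link per stream
            return min(n_host_pairs * streams_per_pair, N_LINKS) * LINK_SPEED_GBPS

    print(max_throughput_gbps("layer2", 4))                        # 1 Gb/s, failover only
    print(max_throughput_gbps("layer3", 4))                        # 4 Gb/s aggregate across 4 clients
    print(max_throughput_gbps("layer3+4", 1, streams_per_pair=2))  # 2 Gb/s between one host pair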

I don't know much about SMB multichannel. I know it works with separate NICs, as in each has its own IP address. I also assume it works with link aggregation. Not sure how well Samba and SMB work together with multichannel.

Sure, there are some good videos on it already, but there is a lot of misinformation and confusion out there. It seems to be a difficult thing for people to get their heads around and a difficult thing to explain properly.


The main problem is that all this information is scattered. I haven't seen any material that gathers it together and compares functionality, requirements, and real-world applications. That kind of video/tutorial/troubleshooting guide would be very educational for anyone interested in networks, whether they're a small-company admin or just someone who wants to optimise their bandwidth.

I've seen a guy whose idea was to keep only a 120 GB SSD for Windows and basic programs, and to put all his stuff (movies, programs like Photoshop, and most importantly games) on an iSCSI device served from a NAS placed somewhere in the attic. Crazy? Well, maybe, but it started a discussion about all the stuff I mentioned in the title.


Well, link aggregation and SMB multichannel are completely different things; they don't really have much in common other than that they both utilise multiple NICs. I don't see much need to compare the two, and I think it just confuses things.

Well, I wouldn't use iSCSI for movies or other kinds of files, as it makes more sense to share them with SMB so they can be accessed by multiple users and devices, but for games and programs it works fine. I do that myself:


Fantastic article! Thanks! I've already linked it to a guy who asked about it.

So you're saying there's no application for the things mentioned in the title in that kind of situation? Also, why did you use Ubuntu with ZFS? Why didn't you go with something like FreeNAS or OpenIndiana + Napp IT, which are a more "natural environment" for ZFS?


I already had the Ubuntu server. ZFS on Linux seems to be just as good as the FreeBSD implementation, certainly for a handful of disks anyway.

Yeah, link aggregation, teaming, bonding, and I'm sure there are other names (sometimes it's incorrectly called trunking): they're all the same thing. There are a bunch of different standards which achieve what you would call link aggregation, and pretty much all switches and software support those standards. So you can configure bonding on Linux with a bunch of random NICs and connect it to a Windows machine with Intel NICs using the Intel driver teaming, and they're going to work just fine together because they're speaking the same language. There's no performance difference between the implementations as far as I know, because all it is is a method of dividing up a bunch of network links between traffic while making sure to keep each stream on the same link.

SMB multichannel allows an SMB server to send data in parallel over multiple network links. It doesn't compete with link aggregation in the sense that they're not mutually exclusive; I would assume that SMB multichannel works with link aggregation, but it doesn't require it.
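
Purely as a conceptual illustration of why multichannel can help even a single client, here is a toy Python sketch: one transfer is split into chunks, and the chunks are handed to workers standing in for separate TCP connections (and therefore, potentially, separate NICs). This is not how smbd or Windows actually implements it; the sizes and names are made up.

    # Toy model of the multichannel idea: one logical transfer, several
    # parallel connections. Conceptual only; not the real SMB3 protocol.
    from concurrent.futures import ThreadPoolExecutor

    N_CONNECTIONS = 2                      # e.g. two NICs, each with its own IP
    CHUNK_SIZE = 4 * 1024 * 1024

    payload = bytes(32 * 1024 * 1024)      # pretend 32 MiB file
    chunks = [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]

    def send(conn_id, chunk):
        # In the real protocol each chunk would go out over a different
        # TCP connection; here we just pretend and report the size.
        return len(chunk)

    with ThreadPoolExecutor(max_workers=N_CONNECTIONS) as pool:
        conn_ids = [i % N_CONNECTIONS for i in range(len(chunks))]
        sent = list(pool.map(send, conn_ids, chunks))

    print(f"sent {sum(sent) // (1024 * 1024)} MiB over {N_CONNECTIONS} connections")

A bond, by contrast, would pin this single transfer to one link; multichannel is what lets one client spread it across several.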


Uhm, but in the previous post I was asking about the application of bonding/teaming/multichannel in the context of your topic. Now, after reading it, I'm curious about the most optimal applications of bonding/teaming technologies for running programs or even games from remote sources: the nature of the I/O operations (one copy/stream or many parallel transfers?), how games and programs perform, which technology is the best bang for the buck, which is most optimal for performance (of course we ignore SFP and 10Gb RJ45 technologies, as they are the obvious choices), and other things around it.


Right. Well, if you're using iSCSI you can use multipath IO; I'm not sure whether that uses link aggregation or just multiple NICs like SMB multichannel, and I have no idea how well it works. For iSCSI where performance is important, 10Gb is really the only way to go, because gigabit Ethernet has very limited IOPS, around 100 or so, whereas 10Gb (if I remember correctly) has around 1 million.

Link aggregation is not something that will improve performance between two hosts, only between a server and multiple hosts. SMB multichannel and iSCSI multipath IO can (potentially) be used to increase performance between two hosts, but they're also designed for servers with multiple clients. The best way to improve network performance between two machines is just to make a point-to-point 10Gb link; it's certainly the easiest way, and it's much cheaper if you don't already have the managed switch you would need for the other technologies.


Thanks VERY much for all the responses. Even though I knew some of these things already, some of them weren't clear to me, so I've learnt A LOT!
Maybe I'll even use some of this knowledge, as I'm currently choosing hardware for a ZFS NAS solution (FreeBSD? Nas4Free?) for the company I work for. We have to replace an old Qnap 439 with something more serious but still cheap (P4308XXMHEN + S5500BC + Xeon E5645, or an HP dl180 G6, or maybe something else?). Anyway, this isn't the place to discuss that.

Thanks again, and I'm crossing my fingers that the guys from L1T put this topic on their plate.