ESXi 6.5 standard switch link aggregation with Netgear Managed switch

I’m new to ESXi.

What is the best/recommended way to do NIC Teaming with standard switch in ESXi with a managed Netgear switch?

I’m aware that LACP is only available on distributed switches, which require a paid licence that I don’t want to purchase, so I’m looking for the best alternative.

Take a look at this KB https://kb.vmware.com/s/article/1004088

Just as a note, if you only have two physical NICs, I would avoid teaming them. You want your management interface on a NIC that is not part of the LACP bundle, to ensure you can still access the host if LACP fails for any reason.

You can also do non-LACP teaming, where you get redundant uplinks without load balancing. That’s useful if you only want redundancy as opposed to a performance increase.

  1. Route based on the originating port ID: Choose an uplink based on the virtual port where the traffic entered the virtual switch.
  2. Route based on an IP hash: Choose an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the hash.
  3. Route based on a source MAC hash: Choose an uplink based on a hash of the source Ethernet MAC address.
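
Very roughly, here’s how each of those three policies picks an uplink. This is just an illustrative Python sketch with made-up uplink names and a simplified hash, not VMware’s actual implementation:

```python
# Illustrative sketch only, not VMware's real code: a toy model of how a
# standard vSwitch picks an uplink under each teaming policy.

UPLINKS = ["vmnic0", "vmnic1"]  # hypothetical two-NIC team


def ip_to_int(ip: str) -> int:
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d


def by_port_id(virtual_port_id: int) -> str:
    # Originating port ID: each VM (virtual port) is pinned to one uplink,
    # so a single VM never gets more than one NIC's worth of bandwidth.
    return UPLINKS[virtual_port_id % len(UPLINKS)]


def by_ip_hash(src_ip: str, dst_ip: str) -> str:
    # IP hash: source AND destination IPs are hashed together, so one VM
    # talking to several destinations can spread across the uplinks.
    return UPLINKS[(ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % len(UPLINKS)]


def by_mac_hash(src_mac: str) -> str:
    # Source MAC hash: only the VM's MAC is hashed, so each VM still sticks
    # to one uplink in practice, much like the port ID policy.
    return UPLINKS[int(src_mac.replace(":", ""), 16) % len(UPLINKS)]


print(by_port_id(7))                        # pinned per virtual port
print(by_ip_hash("10.0.0.5", "10.0.0.20"))  # varies per source/destination pair
print(by_mac_hash("00:50:56:aa:bb:cc"))     # pinned per source MAC
```

The practical upshot: port ID and MAC hash spread your VMs across the NICs, but each individual VM stays on one link. IP hash is the only policy that can spread a single VM’s traffic across links, which is why it needs a matching port channel on the physical switch.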

As I understand it, 1 is load balancing based on the virtual machine sending or receiving the traffic.
2 is the same thing but based on the IP address.
3 is based on the MAC address, which sounds to me a lot like 2, since every MAC address has its own IP address.

What I’m after is performance and redundancy. Is this possible with the options I have?

Which one offers better performance?

I would like load balancing as close to LACP as possible. It seems to me that would be either IP hash or MAC hash.

Also, what kind of configuration do I need to make on the physical switch? Static LAG?

Thanks

I believe you can just add multiple physical ports to the vmnic and it will load balance across your VMs for you automatically. It actually throws you a warning if you’re only using one physical interface.

Nope, just nope.

Physical ports are the same as vmnics, just different context.

If you mean adding physical ports to a port group in a vswitch without further configuration, that will just bridge the NICs together. Connecting them to the same physical switch will not make them load balance automatically; it will create a loop.

Yeah, sorry, mixing up the VMware terminology.

You can add multiple uplinks to a vswitch and it can load balance or fail over or whatever.

Yes, I know that. My question is which type of load balancing is the best for performance?

What is the difference between IP hash and MAC hash?

And does it require configuring static LAG on the managed switch?

Sorry about that. I thought you were having trouble getting to that point.

IP hash = both source and destination IPs are hashed
MAC hash = only the source MAC is hashed
Port ID = per virtual interface

I believe IP hash should be the most granular since it evaluates source and destination.
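
To make “most granular” concrete, here’s a tiny sketch (made-up addresses, simplified XOR-and-modulo hash, not ESXi’s exact algorithm) showing one VM talking to four destinations. The MAC hash lands every flow on the same uplink, while the IP hash can spread them:

```python
# One VM (one source MAC, one source IP) talking to four destinations.
# Hypothetical addresses and a simplified hash, purely to illustrate why
# IP hash is more granular than a source MAC hash.

UPLINKS = ["vmnic0", "vmnic1"]


def ip_to_int(ip: str) -> int:
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d


src_ip, src_mac = "10.0.0.5", "00:50:56:aa:bb:01"

for dst_ip in ["10.0.0.10", "10.0.0.11", "10.0.0.12", "10.0.0.13"]:
    ip_pick = UPLINKS[(ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % len(UPLINKS)]
    mac_pick = UPLINKS[int(src_mac.replace(":", ""), 16) % len(UPLINKS)]
    print(f"{src_ip} -> {dst_ip}: ip-hash={ip_pick}  mac-hash={mac_pick}")
```

Keep in mind that even with IP hash, any single source/destination pair still rides one physical link, so a single flow never goes faster than one NIC.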


edit

If the physical switch is using link aggregation, Route based on IP hash load balancing must be used.

So it looks like you can configure non-LACP (static) aggregation on your switch and configure IP hashing in ESXi. That should give you the best load balancing you can get on a standard switch.

@mzarrugh

Hey, apologies for yesterday. I was multitasking and trying to be helpful, but not reading thoroughly.

Hopefully, I can redeem myself a bit.

I did manage to answer this question:

I used to have an ESXi 6 host with 6 aggregated gigabit ports. From what I remember, I was using IP hashing, and I believe it worked without configuring load balancing on the switch. That said, you wouldn’t necessarily see the full benefit unless you configured load balancing on the switch as well.

I would test it for you, but I have since upgraded to a single 10G connection for my VM traffic. Since you’re asking here, I’m assuming that you can’t test this yourself for fear of losing connectivity to your VMs. Sorry I can’t give you a more solid answer here.

IP hashing on both ESXi and switch should give you the most granular load balancing. I suppose it’s possible for the load balancing to start eating into your CPU if you’ve got a ton of small packets going to a ton of different places, but I can’t think of any other disadvantages really. Maybe someone else can.

It should also be able to handle a link going down, although you should test it to be sure.

Have you looked at oVirt? I am thinking of migrating off of vSphere to oVirt if my testing goes well. Always nice to get away from the proprietary stuff, even if it is built on a Linux distro.


Thanks dude, that’s really helpful.

Actually I’m used to running Proxmox, but running pfSense on Proxmox caused instability in situations where there are about 20 network interfaces. So I’m trying to transition to ESXi, since it’s supposedly more recommended by the pfSense folks.

Also, do you know if ESXi has hardware restrictions on the free licence? I guess it’s 8 cores and 32 GB of RAM, but I’m not sure. And what happens if I have more than these specs? Does it refuse to boot?

I’m not sure, but here’s a link to the cheapest legit vSphere license that they offer. They kind of bury it, but it’s what I have. I’m not sure what all the limitations of the free license are.

https://store.vmware.com/store/vmware/en_US/cat/categoryID.66192900

I believe that if you run it on a system with more than 8 cores, it will run, but only 8 cores will be used by your VMs. Not 100% sure though.

Yeah, oVirt might work, but I believe it relies on the same underlying virtualization technologies as Proxmox, so maybe not.