Homelab Network Upgrades

Hi all,

I am planning some homelab network upgrades. Part of this is to enable future expansion and possibly an ESXi failover cluster at some point.

I’m looking for some general advice as I plan this out, since there are several ways I could go with it.

Currently I have a setup which is as follows:

  • Netgate SG-5100 with a 4-port LACP LAGG to an HP E3800 switch. This provides internet connectivity and IDS/IPS.

  • The E3800 acts as my core/router and as such is the default gateway for all of the VLANs in my house. It has four 10GbE ports.

  • A single ESXi host: Ryzen 7 1700, 64 GB of RAM, 1x 10GbE connection for iSCSI and 2x 1GbE on a DVSwitch for guest traffic. The single biggest problem I have right now is that all DNS in the house lives in this box: two instances of Pi-hole and two domain controllers. If I do not build an HA cluster, I will at least be looking to distribute those services across two boxes.

  • A FreeNAS host: Ryzen 3600, 32 GB of RAM, SSD pools for VMs, HDD pool for Plex. Single 10GbE port.

  • A backup FreeNAS host with 2x 1GbE. It holds a backup of the Plex dataset via ZFS send/receive and acts as a backup target for vSphere via Veeam.

  • My workstation consumes a 10GbE port.

  • Three other layer 2 access/PoE switches, each with a 2x 1GbE LACP trunk back to the core.
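For reference, the Plex backup mentioned above is the standard ZFS snapshot replication pattern. A minimal sketch of what runs between the two FreeNAS boxes (pool, dataset, and hostnames here are placeholders, not my actual names):

```shell
# Snapshot the Plex dataset on the production FreeNAS box
zfs snapshot tank/plex@daily-2021-01-01

# First run: full send of the dataset to the backup host
zfs send tank/plex@daily-2021-01-01 | ssh backup-nas zfs receive -F backup/plex

# Subsequent runs: incremental send of only the delta between snapshots
zfs snapshot tank/plex@daily-2021-01-02
zfs send -i tank/plex@daily-2021-01-01 tank/plex@daily-2021-01-02 \
    | ssh backup-nas zfs receive backup/plex
```

In practice FreeNAS wraps this in a periodic snapshot task plus a replication task in the GUI, but it boils down to the same send/receive pipeline.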

Some of the problems I want to solve are the following:

  • I would like to add bandwidth and redundancy for my ESXi host via additional 10GbE links: 2x iSCSI uplinks, and moving the DVSwitch the guests live on to 10GbE.

  • Similarly, I would like to give my production FreeNAS box a LAGG for redundancy.

  • I am terminating a lot of layer 2 traffic on my core, which is generally discouraged.
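For the 2x iSCSI uplinks, my understanding is that the usual ESXi approach is one vmkernel port per physical NIC, each bound to the software iSCSI adapter so the paths fail over independently. A rough esxcli sketch, assuming the vSwitch/port groups are already set up (the adapter and vmk names below are placeholders for whatever the host actually reports):

```shell
# Find the real names first:
#   esxcli iscsi adapter list
#   esxcli network ip interface list

# Bind two vmkernel ports (one per 10GbE NIC) to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify both paths are bound
esxcli iscsi networkportal list --adapter=vmhba64
```

Worth noting: iSCSI multipathing is handled by MPIO via port binding, not LACP, so those two storage uplinks would sit outside any LAGG; the LAGG question only really applies to the guest-traffic DVSwitch.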

One potential solution is to purchase another HP E3800 (one with PoE, since I have a second 8-port 10/100 PoE switch in that closet, which I currently run in conjunction with an injector to power the AP in that area of the house) and stack the two together.

That would leave me with eight 10GbE ports: four for ESXi, two for FreeNAS, one for my PC, and one left over. I could then span the LAGGs (including the ones to my firewall and the other switches) across two physically different switches, which would give me added redundancy in case of a failure. But if I wanted to add hosts to my ESXi environment and create a cluster, I would not have enough ports to go around and would have to purchase a third switch. The biggest problem with this is that the stacking cables and modules sell for as much as, if not more than, the switches themselves on eBay.

The second idea is to purchase something like a Mikrotik CRS-317 to act as a distribution switch. That would add 16 10GbE SFP+ ports, and I could run a 2-port LACP trunk between it and the core, plus purchase another HP gigabit switch to handle all of my wireless and gigabit layer 2 traffic. This doesn’t have as much redundancy as the first option, but it does remove layer 2 traffic from my core.
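If I go this way, the CRS-317 side of that LACP trunk would be a RouterOS bond along these lines (a sketch only; the SFP+ port names and bridge layout are assumptions about how I would cable it):

```shell
# RouterOS: 802.3ad (LACP) bond over two SFP+ ports for the uplink to the core
/interface bonding add name=bond-uplink mode=802.3ad \
    transmit-hash-policy=layer-2-and-3 slaves=sfp-sfpplus1,sfp-sfpplus2

# Bridge the bond with the remaining SFP+ ports for plain layer 2 switching
/interface bridge add name=bridge1
/interface bridge port add bridge=bridge1 interface=bond-uplink
/interface bridge port add bridge=bridge1 interface=sfp-sfpplus3
```

The E3800 end would just be a matching LACP trunk, same as the existing LAGGs to the access switches.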

I am leaning towards the second option.

Any thoughts or opinions?