Port aggregation is like adding lanes to a highway without raising the speed limit. If you have a server with two 10GbE ports bonded in LACP, it can service two workstations at 10GbE simultaneously, but a single workstation can't leverage the full 20Gb. As mentioned, SMB Multichannel circumvents this by splitting traffic into multiple "lanes", so it's an option if you want to avoid going to 25Gb connections. It was historically a Windows feature, but it was recently officially added to Samba, the Linux/BSD SMB implementation (it had been an experimental feature for a while but is now considered stable).
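For reference, enabling it on the Samba side is basically a one-liner; a minimal sketch, assuming a reasonably recent Samba build where the option is considered stable:

```ini
# /etc/samba/smb.conf -- minimal sketch, not a full config
[global]
    # Advertise multiple NICs/RSS queues to SMB3 clients so they can
    # open several TCP connections to the same share
    server multi channel support = yes
```

Clients negotiate the extra channels automatically, so there's nothing to configure on the workstation side beyond having multiple usable interfaces.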
Sorry, I should have mentioned it earlier. I primarily work with Macs, which don't support it, so I tend to forget that it exists.
One other note: in my experience, the cheaper Netgear 10GbE switches have problems with jumbo frames.
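If you do try jumbo frames on one of those, it's worth verifying the path end to end rather than trusting the spec sheet. A quick sketch on Linux, assuming the NIC is `eth0` and the far end is `10.0.0.2` (both placeholders):

```shell
# Raise the MTU on the local NIC (every device in the path must match)
ip link set dev eth0 mtu 9000

# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids
# fragmentation, so this only succeeds if the whole path passes jumbo frames
ping -c 3 -M do -s 8972 10.0.0.2
```

If the ping fails with "message too long" while a normal ping works, something in between (often the switch) is silently dropping or refusing jumbo frames.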
Yeah. Paying more for less complexity is often underrated around here, with most people building a homelab on a tight budget.
NIC-wise, I haven’t needed to deploy any 25Gb infra myself. In general, Intel NICs are the safest bet though. AFAIK there is no copper solution at 25Gb, so it’s going to be fiber with SFP28 transceivers. CAT8 cable is a thing (40Gb), but I have never seen a switch for it. Probably extremely expensive.
Caveat that SFP28 DACs do exist, but only for short runs AFAIK. I assume the workstations are not immediately adjacent to the rack…
I’d put aside some time and read through this before you get it:
Looks pretty normal though.
Are you wanting to use an automation tool to configure this (or, I’m thinking in your case, probably just set and forget)? Skimming the CLI reference, I don’t see anything about auto-provisioning, netboot, or similar. The one advantage to the MikroTik switch is that RouterOS has an Ansible module and some auto-provisioning capability.
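For comparison, the RouterOS side of that looks roughly like this with the `community.routeros` Ansible collection; a minimal sketch, not tested against your hardware, and the inventory group, bridge name, and MTU value are just examples:

```yaml
# playbook.yml -- push a command to a MikroTik switch over SSH
- hosts: mikrotik_switches
  gather_facts: false
  connection: ansible.netcommon.network_cli
  vars:
    ansible_network_os: community.routeros.routeros
  tasks:
    - name: Set jumbo MTU on the bridge
      community.routeros.command:
        commands:
          - /interface bridge set bridge1 mtu=9000
```

Not a big deal for one switch you set and forget, but it matters if you ever want to rebuild the config from scratch.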
Also, wow this is probably the most affordable SONiC-compatible 10G switch I’ve seen. Can’t wait for these to hit used market.
I don’t know for sure, but since QSFP is essentially 4x SFP+, it might not be possible. I’m not an expert on the particulars of the implementation, but my understanding of QSFP and QSFP28 is that they are essentially 4 interfaces bonded in hardware instead of software. For instance, there are QSFP breakout cables that convert to 4x SFP+. There’s no adapter chip in there; the connection can just be physically divided into 4. This is not the case for SFP28, which is actually a single 25Gb link, just as SFP+ is a single 10Gb link.
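The lane math behind that, as a trivial sketch (nominal rates only, line-rate encoding overhead ignored; the function name is mine):

```python
# QSFP/QSFP28 gang 4 electrical lanes into one cage, which is why a
# breakout cable can split them passively. SFP+/SFP28 are single-lane.
LANE_GBPS = {"SFP+": 10, "SFP28": 25}

def aggregate_gbps(module: str, lanes: int = 1) -> int:
    """Nominal throughput of `lanes` lanes at the given module's rate."""
    return LANE_GBPS[module] * lanes

print(aggregate_gbps("SFP+", lanes=4))   # QSFP:   40
print(aggregate_gbps("SFP28", lanes=4))  # QSFP28: 100
print(aggregate_gbps("SFP28"))           # SFP28:  25
```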
That said, there are multiple 40G standards out there. As mentioned before, CAT8 cables are intended for some sort of 40Gb standard that isn’t QSFP. There’s also InfiniBand, but I think that’s pretty ancient now.
This actually brings up a point about that switch. It’s unusual to have both SFP28 and QSFP ports on the same switch. If the SFP28s are downlinks and the QSFP is uplink, I’m not sure that a single connection could ever surpass 10Gb since QSFP is essentially 4x 10Gb. It’s possible that the benefit of QSFP over software aggregation is that it can saturate 40G over a single connection, but I’m not sure. I’ve only ever used QSFP as an interconnect between switches.
Good luck using balance-rr mode bonding with anything other than two Linux hosts running an iperf test… I assumed we were talking about real-world situations where the clients may be any OS and doing real file transfers…
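For anyone who wants to reproduce that iperf test anyway, the Linux side is roughly the following; a sketch, with `eth0`/`eth1` and the address as placeholders, and the same setup needed on the peer:

```shell
# Create a round-robin bond: packets are striped across slaves, which can
# exceed a single link's rate but reorders packets (hence the caveat above)
ip link add bond0 type bond mode balance-rr
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.0.0.1/24 dev bond0
```

The packet reordering is exactly why it falls apart outside a controlled test: TCP on real clients interprets it as loss and throttles back.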
Two thousand pages of commands!!! LOL. Talk about watching paint dry… In its basic form, can I not just connect devices and have it switch packets for me without diving into this manual in detail?
It wouldn’t make sense for a switch with 25G links to clients/servers to have “slow” 40G uplinks. Most have 100G uplinks, like the MikroTik CCR2216 @oO.o mentioned, or the FS N8569-48BC or Aruba JL702A (← warning, shitty website)
Edit: Juniper EX4650 could be something. No idea about prices (and potentially required licenses) though.
Thread is a month old, but I just joined the forum to chime in, as I’m also looking for a similar solution for bandwidth, though with fewer shared users.
I’ve been in post for 20 years, and for the last 18 months a lot of my income has come from dailies. I really think you should consider proxy editing for your workflow. Then you don’t have to use a ton of capital building a network that can share multiple streams of RED raw. Data in post only gets bigger as time goes on, so if you ever want to scale your business, it’s a big investment to accommodate more users. I’m guessing the reason you want to work on the camera files directly is so that you can do the offline and online at the same time? If that’s the case, it may just be better to invest in fast local storage and share projects across a 10GbE network.