Dual NIC setups - SuperMicro H12 series

Hey Gang,

Dual 10Gb NICs on H12 series MOBOs.

Is there an advantage to setting up and using both NICs on a motherboard? Should they be linked so they appear as one NIC to the OS? Is there a performance improvement?

Thanks!

Derek

I mean you save PCIe expansion slots, that's the advantage. The disadvantage is if they go bad and you don't have a spare slot, then you're SOL.

They should appear as two NICs to the OS

There is no performance improvement vs. an AIC of the same chipset that I am aware of / that would matter.

Sorry - I didn't mean to ask whether onboard NIC linking is better than an AIC, just whether linking is effective and results in higher bandwidth?

Looking at moving data faster in a video editing environment, and I would guess link aggregation would result in 20Gbps, more or less?

You kind of could do that with more than one connection, but you wouldn't get something like 20Gbps for one file transfer.

What switch are you using, and does it even support it? What is the hardware it's reading from / writing to?

I would probably just get a faster NIC if you need the extra speed and you have storage that can even saturate it.

Just in the planning stages atm so haven’t chosen the switch - doing research.

We are running Thunderbolt 3 based storage solutions that peak out around 1660MB/s sustained. My thinking is if we could share at those rates across a link-aggregated network we'd be golden. 20Gbps / 8 = 2.5GB/s, less error checking; we'd be well north of 1660MB/s. Doable?
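
For reference, the raw arithmetic I'm working from looks like this (just a back-of-envelope Python sketch; the overhead percentage is an assumption, not a measured figure):

```python
# Raw line-rate arithmetic only -- assumes traffic can actually use both
# links at once; the overhead figure is a rough guess, not measured.
LINK_GBPS = 10      # one onboard 10Gb port
NUM_LINKS = 2       # both onboard ports, aggregated
OVERHEAD = 0.07     # assumed ~7% lost to Ethernet/TCP/SMB framing

aggregate_gbps = LINK_GBPS * NUM_LINKS                    # 20 Gbps on paper
usable_mb_s = aggregate_gbps / 8 * 1000 * (1 - OVERHEAD)  # Gbps -> MB/s, minus overhead
single_link_mb_s = LINK_GBPS / 8 * 1000 * (1 - OVERHEAD)

print(f"aggregate: ~{usable_mb_s:.0f} MB/s, single link: ~{single_link_mb_s:.0f} MB/s")
# aggregate: ~2325 MB/s, single link: ~1162 MB/s (vs. the 1660 MB/s Thunderbolt target)
```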

Multiple read/write operations are a bit different than one. What's the storage solution you are planning to put behind these NICs? What file system? Which OS? How many drives? Which layout (RAIDZ1, etc.)?

Understood re: read / write. We are using Promise Pegasus 32 R8 hardware in RAID 5. Here are the benchmarks…

Also looking to migrate over to a 45drives solution.

In your use case, assuming your new drives can handle the demand, you probably would be fine with the dual 10Gb NICs. Your storage solution will definitely be the harder thing. Remember that the box you have listed shows that 1M seq read at what I'm going to assume is QD1 in that test, since TB is not designed to be accessed by more than one user.

We’ve been using the Promise systems for a decade and are very familiar with their performance, for sure. And correct, they are a single-machine setup.

If we could get similar speeds but NAS-based, that would be fabulous as we add workstations to the network that all need access to the store.

Interesting read over on 45drives… How to Achieve 20Gb and 30Gb Bandwidth through Network Bonding

The machine that holds the data can leverage LAG on an interface to be able to multiplex multiple connections.
It will never be able to go over 10Gbit for a single client, but in the case of multiple clients pulling data from this server, aggregated throughput of more than 10Gbit can be achieved, up to the theoretical 20Gbit limit.
(Link aggregation - Wikipedia)
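
To make the single-client vs. multi-client distinction concrete, here is a toy model (purely illustrative assumptions: each client flow is pinned to one member link and the hash spreads clients evenly):

```python
# Toy model of a 2 x 10 Gbit LAG: a single flow never exceeds one member
# link, so only multiple clients can push the aggregate toward 20 Gbit.
LINK_GBPS = 10
BOND_LINKS = 2

def best_case_aggregate_gbps(num_clients: int) -> int:
    bond_cap = LINK_GBPS * BOND_LINKS              # 20 Gbit total across both links
    return min(num_clients * LINK_GBPS, bond_cap)  # each client capped at one link

for n in (1, 2, 4):
    print(f"{n} client(s): up to {best_case_aggregate_gbps(n)} Gbit/s out of the server")
# 1 client   -> 10 Gbit/s (never 20, no matter how the bond is configured)
# 2+ clients -> up to 20 Gbit/s combined, if the hash spreads them across both links
```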

You will need:

On the ‘server’

  • A dual 10Gbit network interface (or more, same goes for quad)
  • Storage that can provide that kind of speed (e.g. a bunch of NVMe drives in a RAID config that can sustain the throughput; fairly easy if it is sequential, less easy if it is randomly accessed)
  • an OS that supports bonding (Linux), teaming (Windows), or Link Aggregation (Mac) across the two NICs (see the bonding sketch after this list)
  • a sharing protocol that doesn’t suck, so NFS or SMB3 (Windows 10 onwards clients)

On the network

  • A switch that can support 10Gbit links and LAG, either static or 802.3ad … any managed switch should be able to support that

On the clients

  • at least one 10Gbit network interface
  • local storage that can receive the 1200MB/s expected throughput
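
As a companion to the bonding bullet above, here is a minimal sketch of what an 802.3ad bond could look like on Linux using the pyroute2 library; the interface names (enp1s0f0/enp1s0f1) and the address are placeholders, and in practice most people would do this through netplan, nmcli, or systemd-networkd rather than by hand:

```python
# Hypothetical sketch (requires root): bond the two onboard 10Gb ports into
# an 802.3ad (LACP) bond with pyroute2. Interface names and IP are made up.
from pyroute2 import IPRoute

ipr = IPRoute()
slaves = ["enp1s0f0", "enp1s0f1"]   # assumed names of the onboard NICs

# Create the bond in 802.3ad mode (bond mode 4 = LACP).
ipr.link("add", ifname="bond0", kind="bond", bond_mode=4)
bond_idx = ipr.link_lookup(ifname="bond0")[0]

# Enslave both ports; a link must be down before it can join the bond.
for name in slaves:
    idx = ipr.link_lookup(ifname=name)[0]
    ipr.link("set", index=idx, state="down")
    ipr.link("set", index=idx, master=bond_idx)

# Address the bond and bring it up. The switch ports it plugs into must be
# configured as a matching LACP/802.3ad LAG for the bond to negotiate.
ipr.addr("add", index=bond_idx, address="192.168.10.10", prefixlen=24)
ipr.link("set", index=bond_idx, state="up")
ipr.close()
```

The same layout should map fairly directly onto a declarative netplan or /etc/network/interfaces bond stanza if you'd rather not script it.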

To stay on the safe side, I would go with 25Gbps or 40Gbps (still bonded) interfaces on the server and on the switch (MikroTiks with SFP28 or QSFP are very reasonable in price). That would give you even more room to grow your bandwidth on the server side of the equation, but you would probably hit the limits of either the storage and/or the network sharing protocol well before hitting 4000MB/s of transfer throughput …

That was my point; I am aware of how LAG works

I was reinforcing your point 🙂

Why not? If the clients are also using dual 10Gb NICs and those are bonded / teamed…

Because of the way link aggregation works.

The switch has to manage packet flows between server and client, and decide which interface participating in the LAG it needs to send data over. The algorithm used to do that is based on the client’s MAC address, so data coming from/to a single client will at most use one of the links in full, never spilling over to the others unless said link fails …
The only way you can go even faster is by using 25Gbps/40Gbps, but switches with more than a couple of these ports become pretty expensive (and noisy and power hungry) pretty fast …
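
To make that concrete, here is a toy version of the kind of hash a switch might use to pin a client to a member link (a simplified illustration, not any particular vendor's actual algorithm):

```python
# Simplified illustration of MAC-based LAG member selection. Real switches
# use their own hash (often src/dst MAC, sometimes IP/port as well), but the
# effect is the same: a given client <-> server pair always lands on one link.
def pick_link(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links   # same MAC pair -> same member link, every time

server = "ac:1f:6b:00:00:01"         # made-up server MAC
for client in ("3c:22:fb:00:00:10", "3c:22:fb:00:00:11", "3c:22:fb:00:00:12"):
    print(client, "-> member link", pick_link(client, server))
# Each client is pinned to one link, so no single client ever sees more than
# 10 Gbit, even though the bond as a whole can carry 20.
```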

Think of it like railroad tracks: you can switch between them, but you can only run the car on one track at a time. Others could use the other track while you are using one, but you can never exceed the one track.

This is also why I was recommending faster NICs, not LAG, if you need more speed. LAG just allows more people to connect and/or provides redundancy, depending on config.

Bummer… ok… so how are shops that need to share large video files managing?

I wrongly assumed I could link the NICs and get 20Gbps and be golden… you two have ruined my day! LOL

What vendors are doing 25Gbps or 40Gbps switching? And NICs at those speeds? Worth exploring…

I mean you need that on both sides; what is your end device connection speed?
How often are people pulling from the file share? I assume they just pull the project off the NAS to their local box to do work and then transfer it back? Or what exactly is the workflow?

Total budget will ultimately determine what you get. They make 100Gb NICs, but they're not cheap.

The 45drives boxes can do up to 3GB/s in certain configurations. They have NIC options from 1Gbps all the way up to 40Gbps… Enterprise Network Attached Storage (NAS)