Dual NIC setups - SuperMicro H12 series

I would sit down with management to get a budget and goals for file access speed. Then build storage to spec, focus on end-user network connection speed, and balance server speed to that.

Honestly, I can't imagine 45Drives doesn't have a purchasing person to guide you through the process. You can run their suggestions through us if you want a second opinion, but the selection process is hard without a budget and goals.

They either make do with 10Gbps or shell out serious money for switches and network cards…

This:

will give you 20× 40Gbit ports (4500 EUR), plus 4× 10Gbit and 4× 100Gbit ports for uplinks

So you would go with Mellanox ConnectX-3 cards on your clients (60 USD on eBay for the 40Gbit model) and ConnectX-5 on the NAS/server (1000 EUR a pop for a dual-NIC model), then use direct-attach cables if you can, or shell out even more money for the proper transceivers/switches.

At that point we would be talking very serious money, so I'd suggest allocating some for a 'professional' to really look into your use case and make the most out of your budget…

A cooler and even costlier alternative would be to move to rack-mounted graphics workstation solutions like the Dell Precision 7920 or the rack-mounted Lenovo ThinkStation, if they still make it… but we would be talking 10K per workstation plus the server cost…

The editing / grading workstations are $16k per.

I just sat down with myself, so we're golden! :wink:

Nice…


OK, so what are your end-user speed goals? How big are the files you deal with? What's the time savings of going up to 10GbE on the end devices, and how hard is it to add that to your current kit?

1.6GB/s

The files are RED RAW files, typically ~2.1GB per file. You'd have dozens of these files in one project that you'd access.

HUGE. The alternative is moving the data across to the local workstation each time you want to edit a project. That can easily be 1TB or more of data. Not fun.
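To put numbers on how painful that copy step is, here is a rough back-of-envelope sketch. The usable-throughput figures (~1.1GB/s on 10GbE, ~0.11GB/s on 1GbE) are assumptions after protocol overhead, not measurements:

```python
# Rough copy-time estimate for pulling a whole project to a local
# workstation before editing. Throughput values are assumed usable
# rates after overhead, not raw line rates.
def copy_minutes(size_tb: float, throughput_gb_s: float) -> float:
    """Minutes to move size_tb terabytes at throughput_gb_s gigabytes/sec."""
    return size_tb * 1000 / throughput_gb_s / 60

print(round(copy_minutes(1, 1.1)))    # ~15 minutes for 1TB on 10GbE
print(round(copy_minutes(1, 0.11)))   # ~2.5 hours for 1TB on 1GbE
```

Even in the best case, that is a quarter hour of dead time per project pull, which is why working directly off the NAS is attractive.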

Well, since we're just at the spec'ing stage, I'd say easy.


So 10GbE at the workstation.

That solves that part of the equation.

How many users (I think you said 8?)

How do you figure? 10Gbit / 8 = 1.25GB/s, less protocol overhead. You can't get 1.6GB/s on a 10GbE NIC unless I'm missing something.
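The arithmetic behind that objection can be sketched quickly; the ~5% framing/protocol overhead haircut below is an assumption, not a measurement:

```python
# Convert a line rate in Gbit/s to usable GB/s: divide by 8 bits/byte,
# then subtract an assumed ~5% for Ethernet/IP/TCP framing overhead.
def usable_gb_per_s(line_rate_gbit: float, overhead: float = 0.05) -> float:
    return line_rate_gbit / 8 * (1 - overhead)

print(usable_gb_per_s(10))  # ~1.19 GB/s -- short of the 1.6 GB/s target
print(usable_gb_per_s(25))  # ~2.97 GB/s -- comfortable headroom
```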

We are talking about the desktop, not the NAS; we didn't pick the NIC for the NAS yet.


So we aren't going to go up to 25GbE on the workstation for that ~2.8GB/s; it's not cost effective. So if you want 1.6GB/s at the editing station, then we are aiming for 10GbE there.

Why not? Editing a data stream that can't keep up really sucks. You get stuttering frames. Would be enough to jump out the window… although we're on the ground floor.

Wouldn't you just have them move the project files to their desktops for local work? Are you trying to work on the files only off the NAS?

Correct - work off the NAS. We could have two or more people working on one video project, and migrating a project to a local workstation would be a HUGE drain on time - there are times when we only need to tweak one thing and then re-render the project for a client. Moving all that data would take forever.

So the budget is going to get quite crazy, as for that type of file editing you are going to 100% need SSDs if 8 people are all accessing files. I'm definitely not qualified to give you a good idea of what kind of performance you would see. @oO.o might be able to give you a better idea of what you would want/need for live video editing off a NAS.

TLDR: 8 workstations editing RED RAW files, wanting the workload to be 100% off the NAS for file access. The target is 1.6GB/s at the workstation, but I don't know if he could get away with less. This is a much higher tier of NAS than I would be comfortable recommending, and I don't have enough understanding of read/write usage when editing video.


@Bumperdoo are the editors using Macs or PCs? I'd guess PC, but I see Pegasus Thunderbolt storage most commonly used with Macs.

If possible, I would go with 25Gb/s on the hosts. Even at the same throughput, network storage is going to feel more sluggish to the end user. Your editors are all going to be annoyed if their workflow is negatively affected by the "storage upgrade."

If you're feeling brave, you can try out the new Mikrotik hardware if it comes out in time:

Please be aware that there has been at least one instance where the initial hardware revision of a high-performance unit like this was borked, and it took them at least a year to release a rev 2 (IIRC).

In general, on the server end, an LACP LAG will be fine as @MadMatt described, although I think 25Gb switches typically have 100Gb uplinks. Also, note that jumbo frames will give a small throughput improvement at 10GbE and higher.
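For the server end, a minimal Linux sketch of what an LACP bond with jumbo frames might look like. The interface names, address, and 9000-byte MTU here are made-up examples, and the switch ports must be configured for 802.3ad and jumbo frames as well:

```shell
# Create an 802.3ad (LACP) bond and enslave both NAS-facing ports
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
ip link set enp65s0f0 down && ip link set enp65s0f0 master bond0
ip link set enp65s0f1 down && ip link set enp65s0f1 master bond0
# Jumbo frames: small throughput win at 10GbE and above
ip link set bond0 mtu 9000 up
ip addr add 10.0.0.2/24 dev bond0
```

Keep in mind that LACP hashes per flow, so a single client transfer still tops out at one member link's speed; the bond helps aggregate throughput across several editors, not one stream.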


Do you think you can get away with SATA SSDs, or is it NVMe at that usage?

PC. We used to be Mac but migrated years ago - performance on the Windows side is fabulous. Finding TB3 headers on Windows motherboards has always been interesting for sure, with the occasional hoop to jump through.

Ok.

I had never heard of this brand till today - solid?

They are OK, definitely targeted at value. Cheaper than mainstream stuff, to the point that you could probably keep a second one on hand as a spare.

Roughly equivalent to Ubiquiti quality/price-wise. If you can afford something more enterprisey, then do that instead.

I was referring to Mikrotik here. Some of their first round of 40g/10g copper switches would drop banks of ports randomly. There's a very long and angry forum thread about it. Ubiquiti has had similar issues with certain hardware runs. It's the risk you take when buying something that cheap before it has a chance to mature.


You get what you pay for.

Question - why would you aggregate two NICs, then, if you don't get a speed boost? Is it just failover?

Maybe SMB multichannel, which is more of a client-side thing, could work here. The advantage is that it requires very little configuration - not even LACP, just two interfaces with different IP addresses.
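On the Samba server side, the configuration sketch could be as small as the fragment below. The addresses and speed hints are made-up examples, and multichannel support in Samba has historically been flagged experimental, so verify it on your release before relying on it:

```ini
[global]
    # Advertise SMB3 multichannel to clients
    server multi channel support = yes
    # Two independent interfaces, each with its own IP -- no LACP needed.
    # The speed option (bits/s) helps clients weigh the available paths.
    interfaces = "10.0.0.2;speed=10000000000" "10.0.1.2;speed=10000000000"
```

Windows clients with compatible NICs will then open multiple TCP connections and spread one SMB session across both links, which is exactly the single-stream speedup LACP can't give you.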

What's going on here then?