Change my mind... 10GbE direct connections to NAS?

Hey guys and gals.
I’ve finally got a few good reasons to brush up on my woeful tech knowledge, and participate in this community a little!

[
A little about me before I launch in - skip down to my question if not interested! I’m a professional composer first, who dabbles in tech (writing a bit of software for immersive audio… and I get called on by folk like Google Labs and Sony to creatively interrogate some of their tech ideas that relate to sound/music/composition).

I’m a partner in a small studio business - which is quickly acting more and more like a small facility, and needs some tech love to help with that.
Yes - we run lots of macs here. In my little studio right now I’m surrounded by 4 of them.
Yes, I’ll be building a Zen3 TR workstation next year from the looks of things.
]

So my question to get the ball rolling is this: for the last 5 years I’ve run a small NAS over GbE to 4 workstations, serving project files. Each computer is connected directly to the NAS via its own NIC.
I’m now looking to build a very different style of NAS (don’t judge - our old one is a Synology DiskStation) - and I’m considering putting 4x 10GbE NICs in the server and wiring straight to the workstations again. It is doubtful we will ever have more than 4 workstations in this space - we just don’t have the room here!

It seems the majority of folk would install a switch. I’m unsure what the benefit would be (other than another device to maintain) when we’re just serving project files back to our workstations.

I’m more inclined to install a small switch just in my room so my secondary machines can share my connection to the server (they rarely need it, but it’s happening often enough to consider).

Complexity isn’t great for us. I’ve travelled a lot for work in the past, and I’m the sole person who looks after all the tech here - so I’m eager to keep things simple (and documented in our little studio wiki, so very non-tech people can help themselves as much as possible).

Am I making any sense - or am I committing some sort of technical blasphemy?

Cheers all. Brendan.

You can run direct to each workstation. Many people do that at home.

One potential advantage of a managed switch is bonded connections.

3 Likes

First question - do you have network performance problems with 1g ethernet at the moment?

  1. You could have a single 10G card in the server and a small switch with a 10G port or two and the rest 1G back to the workstations - IF bandwidth is not an issue currently.

  2. If you need 4x 10G then you are going to need a substantial investment in disks (6-8 good quality HDDs or some SSDs to feed 4x 10G of data - besides the PC hardware to go with it) - I’m guessing this will be substantial overkill (see the rough numbers at the end of this post).

  3. Could you define what your requirements are in more detail so we have a better target to aim at?

10G kit is not cheap, so if you are trying to do this on a budget that might limit your options.
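For a rough sense of scale, here’s the back-of-envelope math behind point 2 - the per-drive figures are just my ballpark assumptions, not measurements from any particular hardware:

```python
# Back-of-envelope: what it takes to feed 4x 10GbE flat out.
# Per-drive throughput figures are ballpark assumptions only.

LINKS = 4
LINK_GBPS = 10                              # 10GbE per workstation
target_mbs = LINKS * LINK_GBPS * 1000 / 8   # 40 Gb/s aggregate ~= 5000 MB/s

hdd_seq_mbs = 200   # assumed decent 7200rpm HDD, large sequential reads
ssd_seq_mbs = 500   # assumed SATA SSD, large sequential reads

print(f"aggregate target: {target_mbs:.0f} MB/s")
print(f"HDDs needed (sequential, ignoring RAID overhead): {target_mbs / hdd_seq_mbs:.0f}")
print(f"SATA SSDs needed (sequential, ignoring RAID overhead): {target_mbs / ssd_seq_mbs:.0f}")
```

Feeding all four links at full speed takes far more than a handful of drives, which is why I suspect 4x 10G will be overkill for you.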

1 Like

So you have 4 direct ethernet connections to the NAS right now. Do the workstations have internet access? If so, how are they connected?

If you are going to go full 10GbE then you’ll need to put 10GbE NICs in all the workstations as well, otherwise there’s no sense in putting a 4-port 10GbE NIC in the server. You’ll still only have a 1GbE link to the workstations without an upgrade on that end, assuming they have standard 1GbE NICs. I don’t know what model Macs you have, and I’m not sure if any of them came with 10gig networking.

If 1gig is still fast enough, I think the easiest and simplest thing to do is to put a single-port 10GbE NIC (or dual-port if you get a good deal) in the NAS. Then get an unmanaged 10-port switch that has a couple of RJ45 10GbE uplink ports. Unmanaged means simple plug-and-play networking. Plug the NAS into one of the 10gig ports, plug the workstations into any of the others, and plug a line in from your router. That way you get internet access and access to the NAS with a simple wired connection on each workstation, and you’ll have access to the NAS over wifi as well.

There are a lot of things to consider to get consistent data transfer speeds above 1gig if that’s what you need. Everything will have to have upgraded NICs, and the NAS will not only need fast HDDs or SSDs but the storage array will also have to be configured properly. You’ll need to choose between transfer speed, capacity, redundancy, or a happy medium between all three.

1 Like

How many and what kind of drives?

I work contract IT in very similar places.

Nothing wrong with Synology, especially if someone else needs to perform basic admin tasks on it.

The only issue with it is that it obviously doesn’t scale, but otherwise you remove a lot of complexity from the system. That’s how these things work (an option for you if money is no object):

In any case, all-SSD or a lot of HDDs in RAID 10 is a must if you want to saturate multiple 10GbE links at once.
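To put rough numbers on that - the per-drive figures below are my assumptions, and real arrays never scale this cleanly, but it shows the shape of the trade-off:

```python
# Rough comparison of 8-drive layouts: usable capacity vs sequential read.
# Per-drive figures are assumed (4TB drives; ~500MB/s SATA SSD, ~200MB/s HDD);
# treat these as sanity-check numbers only.

DRIVES, SIZE_TB = 8, 4

def layout(name, usable_frac, read_drives, per_drive_mbs):
    usable_tb = DRIVES * SIZE_TB * usable_frac
    read_mbs = read_drives * per_drive_mbs
    print(f"{name:<24} ~{usable_tb:.0f} TB usable, ~{read_mbs:.0f} MB/s (~{read_mbs * 8 / 1000:.0f} Gb/s) seq read")

layout("RAID10, 8x HDD",        0.50, DRIVES,     200)  # mirrors can serve reads from every drive
layout("RAID10, 8x SATA SSD",   0.50, DRIVES,     500)
layout("RAID6/Z2, 8x SATA SSD", 0.75, DRIVES - 2, 500)  # two drives' worth of parity
```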

1 Like

Even a hard drive can saturate a 1G connection, but I’ve only come close to saturating a 10G connection. When actually doing work on the NAS, it makes a world of difference. 1G can go to hell. I’ve had my own mostly-10G network for 3 years and I’ll never go back.

Most SFP+ 10G gear is very affordable when you get parts from eBay.

$30 per Mellanox NIC + $10 per Finisar transceiver. OM3/4 fiber is going to be the variable cost.

If it’s an SSD-based NAS, consider the 40G NICs that are flooding the market and are nearly the same cost as above. The transceivers are a bit more, and there are no good options for switches yet.

Another bonus of fiber is the electrical isolation it provides.

2 Likes

TBH direct connects are doable, but a pain in the ass. Do you really want to be a network administrator and configure things and problem-solve? My answer was hell no, and I just got the MikroTik 4-port SFP+ switch.

2 Likes

Sonnet does have a Thunderbolt SFP+ solution, but I had terrible luck with them. The TB board on 4 out of 5 of them died within 2 years, and it requires a driver from Sonnet which I suspect was causing some system instability. I highly recommend sticking with 10GBASE-T for Macs.

AFAIK, no 40G solutions are supported by macOS, although I could be wrong. Probably not the cheap ones, though.

3 Likes

Well shoot, I missed that bit. That does complicate things if you’re trying to rely on Thunderbolt. I’ve seen the Sonnet adapters before, but it’s good to know your experience with them has been awful.

2 Likes

Yep, I tore them apart and it was definitely the TB daughter card that was fried. The NICs were all still functional. Probably very limited OEM options for those boards I imagine.

1 Like

Yes - I’ve messed with bonded connections, but I think that with 10GbE they won’t be necessary. Since we just have the 4 workstations to connect, my thinking was that with direct connections I can give each computer its own dedicated bandwidth to the file server.
Now, 10GbE is overkill, but 1GbE is terrible. More on that later. An order of magnitude makes a big difference for our use case.

@spider099

Not performance problems per se - the GbE works pretty much as it should with direct connections. Each workstation gets 80-100MB/s. However, this is suboptimal for our work, and once two or three workstations are working at the same time, the spinning disks (a 5-drive array) can’t keep up.
So we have both a network bottleneck and a disk-speed bottleneck.
Disk speed will be solved with 6 to 8 x 4TB SSDs. Throughput we are hoping to solve with 10GbE.

The sweet spot for each workstation seems to be 300-500MB/s - we’ve tested with local drives from 100MB/s all the way up to 2000MB/s, and don’t see much difference once we reach 500MB/s disk throughput. Thankfully, the disk usage for audio workstations is not constant by any means. Playing back sessions in real time uses well under 100MB/s. It’s the reading and writing of the single project file that hurts (the workstation creates a backup once every 3 to 5 mins, that file can be 300+MB, and it happens DURING playback / editing - we really feel that bottleneck), and also redrawing the screen, where it’s reading sometimes hundreds of small files which contain the waveform images.
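For anyone curious how I’d check that, this is roughly the quick test I have in mind against the mounted share - the mount point and file sizes are placeholders, not our actual layout, and there’s no cache flushing, so treat the numbers as indicative only:

```python
# Quick-and-dirty test of the two painful patterns over the mounted share:
# one big project-file-sized write, then lots of small waveform-image-sized reads.
import os, time, pathlib

SHARE = pathlib.Path("/Volumes/projects/_bench")   # placeholder mount point
SHARE.mkdir(parents=True, exist_ok=True)

# "project file backup": one ~300MB write
blob = os.urandom(300 * 1024 * 1024)
t0 = time.time()
(SHARE / "session_backup.tmp").write_bytes(blob)
print(f"300MB write: {300 / (time.time() - t0):.0f} MB/s")

# "waveform redraw": 500 small (~200KB) files, written then read back
# (re-run the read pass after a remount if you want cache-free numbers)
small = os.urandom(200 * 1024)
for i in range(500):
    (SHARE / f"wave_{i}.tmp").write_bytes(small)
t0 = time.time()
for i in range(500):
    (SHARE / f"wave_{i}.tmp").read_bytes()
print(f"500 small-file reads: {time.time() - t0:.2f}s")
```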

No, each workstation is going to need better than 1GbE. Some of our computers already have 10GbE built in; for the others we are testing Thunderbolt solutions.
And since I really want to build the server myself (I’ll start a new thread about the build - it’s going to be a little - er - unique!) to learn as much as I can, I figure I may as well configure it with 4x 10GbE ports, direct connect, and be done with it. (At least two of the motherboards I’m considering have 4x 10GbE on board.)

  1. If you need 4x 10G then you are going to need a substantial investment in disks (6-8 good quality HDDs or some SSDs to feed 4x 10G of data - besides the PC hardware to go with it) - I’m guessing this will be substantial overkill.

Maybe - this is what I’m in the process of working out. However, given the above information, I’m not convinced it’s overkill.

I think I did that above.

4 x workstations that each run audio sessions with project files 30-300MB in size. It is the (constant) loading / saving of these that seems to be the bottleneck. Playing back the audio (and video) isn’t terribly disk intensive, but we have seen occasions where, with 2 workstations using the Synology at once, we get disk-speed bottlenecks. Even though the array measures at 300+ MB/s, the bottlenecks occur with throughput as low as 140MB/s (2 @ 70MB/s) - so I’m guessing it has to do with a large number of files being read/written at the same time.
(Each session is reading one video file - often about 3-4GB in size - and scrubbing through it, and then there can be thousands of audio files in one session. I have not looked into HOW our software reads the files / how much goes into RAM - but I’m guessing it utilises RAM well, since the software certainly EATS RAM for breakfast at times.)
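One way to check whether it’s the concurrency rather than raw throughput that hurts would be to read two big files at once and compare the per-stream rate to a single-stream run - a rough sketch, with made-up paths:

```python
# Rough concurrency check: read two large files from the share at the same time.
# The paths are made up - point them at two real video files on the NAS.
import threading, time, pathlib

FILES = [pathlib.Path("/Volumes/projects/roomA/video.mov"),   # placeholder
         pathlib.Path("/Volumes/projects/roomB/video.mov")]   # placeholder

def read_all(path, results, key):
    t0, total = time.time(), 0
    with open(path, "rb") as f:
        while chunk := f.read(8 * 1024 * 1024):   # 8MB reads, video-scrub-ish
            total += len(chunk)
    results[key] = total / (1024 ** 2) / (time.time() - t0)

results = {}
threads = [threading.Thread(target=read_all, args=(p, results, f"{i}:{p.name}"))
           for i, p in enumerate(FILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for key, mbs in results.items():
    print(f"{key}: {mbs:.0f} MB/s while reading concurrently")
```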

We will need casual access to the files from other machines as well - probably through a wifi network - though I’ll look into that later. Just for grabbing final outputs and sending to clients, etc. Very, very low-bandwidth stuff.

Yes - they have internet access through separate NICs. All the workstations use at least 2 network ports. The main one has a third, via Thunderbolt, that controls a large-format mixing console (very low bandwidth, but for whatever reason it behaves much, much better on its own dedicated NIC. Thanks Euphonix/Avid.). We are not on Pro Tools - we run Nuendo rooms here. Just the mixer control surface uses the EuCon protocol.

:+1: For sure! That’s the plan. As mentioned above, some already have 10GbE ports, and some we will add externally using TB3.

For other reasons, we are going to retire spinning drives for our main file server (which is currently the Synology NAS). It will become an SSD-only server, and the old NAS will be used as an extra level of backup. The current NAS drives just are not fast enough - it’s only a 5-drive spinning RAID.

And some of this is what I’m hoping to learn. I think I’ve got a decent amount of it sorted just from reading loads of threads here, but I’ll look at discussing the actual storage array (and what RAID I’ll use) in another thread.

I’ll probably cover this in another thread - but as mentioned, probably 6 to 8 x 4TB SSDs.

Nice!

Money ISN’T no object, but we are willing to spend where necessary to make things work smoothly and with as little ongoing tech work as possible. :smile:
Reducing complexity is my main reason for even suggesting this network topology. It’s nice and simple to wire, and given the space constraints at this studio, we are not able to increase the number of workstations even if we wanted to. (Each needs its own studio space!)

Yes - we are going all SSD. I’ve looked into Jellyfish, but I think they’re potentially not quite right for us - we are kind of one step under most of those machines, and I really want to learn / enjoy the process of building the server myself. Of course, my business partner will quickly step in and just spend the money on something like a JF if what I do falls in a big heap, and I’ll be walking around with my tail between my legs for a while…

My thoughts exactly.

I don’t believe 40GbE is available for Macs… and we already have SOME 10GbE infrastructure (well, network cabling that can support it, plus NICs in two of the workstations).

What is there to administer? Set everything up with its own static IPs and hit go? That’s how it’s been set up with the current 1GbE network for about 5 or 6 years. Our internet comes through a separate network.
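For anyone wondering what that looks like in practice, it really is just an addressing plan - something like one tiny subnet per direct link (the addresses and room names below are made up for illustration, not our actual ones):

```python
# One way to lay out four direct NAS<->workstation links: each link gets its own
# /30, so nothing is shared and there's nothing to route or manage.
# Subnets and room names are illustrative only.
import ipaddress

rooms = ["room-a", "room-b", "room-c", "room-d"]
for i, room in enumerate(rooms):
    link = ipaddress.ip_network(f"10.10.{i}.0/30")
    server_ip, workstation_ip = link.hosts()   # a /30 has exactly two usable hosts
    print(f"{room:8}  server NIC {server_ip}  workstation {workstation_ip}  ({link})")
```

Four static IPs on the server, one on each workstation, written up in the wiki - that’s the whole admin job.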

Yeah - we considered the TB route, but no-one seems to have had much luck with it. It’s easy enough to get 10GbE NICs onto the Macs that don’t have it built in.

Yeah, Jellyfish is very overpriced, but it’s also probably very plug-and-play with hopefully decent support (never used one, so I don’t know personally). At a glance, it looks like they are circumventing macOS’s problematic SMB implementation with their own app, which is considerable value on its own.

TB3 10GBASE-T adapters seem to be fine. My negative experience was limited to the Sonnet SFP+ adapters.

You’ll want to check the IOPS specs closely, and watch out for SSDs with much lower write performance than read, or lower sustained write than burst. Consumer (non-enterprise) SSDs also disappoint under sustained load. This is the part where you’re most likely to stumble in your project.

After that, there are risks of (less severe) bottlenecks in the interface (SATA, SAS), the HBA controller, the PCIe bus, etc., depending on the generation of each. Everything can become a bottleneck when trying to support 40Gb/sec of throughput.
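A cheap way to catch the burst-vs-sustained problem before committing to a drive model is to write well past any cache and watch the per-chunk rate. A rough local probe along these lines (path and sizes are placeholders, and a purpose-built tool like fio will do this far more rigorously):

```python
# Crude sustained-write probe for a single SSD, run locally on the drive under test.
# Writes 64 x 1GiB and prints the rate per chunk; on many consumer drives the SLC
# cache shows up as a sharp drop partway through. Path and sizes are placeholders.
import os, time

TARGET = "/mnt/ssd_under_test/sustained.bin"   # placeholder path on the drive being tested
CHUNK = 1024 ** 3                              # 1 GiB
CHUNKS = 64

buf = os.urandom(CHUNK)
with open(TARGET, "wb") as f:
    for i in range(CHUNKS):
        t0 = time.time()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())                   # make sure it actually hit the drive
        print(f"chunk {i:02d}: {CHUNK / 2**20 / (time.time() - t0):.0f} MB/s")
```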

I doubt throughput will ever reach the dizzying 40Gb/s mark. Indeed, around 12-15Gb/s would probably be plenty for 4 sessions that all happen to need the drives at the same time.
Read speed is generally much more important than write… Writes mostly happen when adding audio files to the sessions (usually one at a time - tiny amounts of data really) and when saving the session files every 5 mins or so.
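For reference, that 12-15Gb/s figure is just the per-workstation sweet spot from earlier multiplied out:

```python
# Four rooms at the 300-500MB/s per-workstation sweet spot.
rooms, low_mbs, high_mbs = 4, 300, 500
print(f"{rooms * low_mbs * 8 / 1000:.1f} - {rooms * high_mbs * 8 / 1000:.1f} Gb/s aggregate")
# -> 9.6 - 16.0 Gb/s
```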

1 Like

My advice stands even at your lower 15Gb/sec target. You can easily hit bottlenecks if you aren’t carefully checking all the specs and selecting the right disks, controllers, and systems with enough PCIe lanes, or if you go cheap with non-enterprise gear.

Write speed still matters, even if your workload isn’t write-heavy. Consider drive rebuild times for instance.

1 Like

Yes - I get where you’re coming from.
Thank you! This sounds like great advice.
I’ve started a thread over in the PC build forum regarding hardware specs!

Correct me if I’m wrong, but wouldn’t something a bit bigger be better for an office?
Like the CRS312-4C+8XG-RM (4x SFP+ and 8x 10G Ethernet) or similar. Just to have more options later on.

1 Like

It’s been a long while since I’ve looked at options, but both MikroTik and the Ubiquiti EdgeSwitch (which kind of reverses that linked layout) would be good options for more ports in roughly the ~$600 price range, last I checked.

That’s my opinion as someone with a simple storage/homelab hobby, but I don’t actually deal with this stuff in a professional environment or expect “support” beyond google. Also, do remember to update the firmware periodically.

Have one of those, works great.