Best OS for fast storage server

Hello Everyone,

I’m working on a storage server for my small business. What is the best OS for accessing lots of SSDs at high speed over the network? The server is being built with an Asus WRX80 board with two 10Gb LAN ports and the 16-core 3955WX CPU, along with a bunch of Gen 4 drives. We want to get the most speed out of it over the network because we use it to store footage for the videos we edit, and we’ve noticed that with Unraid we get less performance than local. Any advice?

1 Like

With 10 gigabit NICs you’re only going to get 1 gigabyte or so per second of throughput through each one at best, which is WAY slower than a single Gen 4 SSD. 10 gig networking is really quite slow these days in the context of modern SSD performance.
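To put rough numbers on that (back-of-envelope only; the ~7 GB/s figure is a typical Gen 4 spec-sheet number, not your specific drives):

```python
# Rough comparison of 10GbE line rate vs a typical Gen 4 NVMe SSD.
# The SSD figure is an illustrative spec-sheet number, not a measurement.

nic_line_rate_gbit = 10                      # 10GbE link speed, gigabits per second
nic_raw = nic_line_rate_gbit * 1e9 / 8       # ~1.25 GB/s raw
protocol_efficiency = 0.9                    # rough allowance for Ethernet/IP/TCP/SMB overhead
nic_usable = nic_raw * protocol_efficiency

gen4_seq_read = 7.0e9                        # ~7 GB/s sequential, typical Gen 4 spec

print(f"10GbE usable:   ~{nic_usable / 1e9:.2f} GB/s")
print(f"Gen 4 NVMe SSD: ~{gen4_seq_read / 1e9:.1f} GB/s")
print(f"One drive can feed roughly {gen4_seq_read / nic_usable:.0f}x what the NIC can carry")
```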

Even if you do link aggregation, that generally only gets you more throughput via multiple concurrent streams. I.e., multiple users can hit the server at once and each get a 10 gigabit share of the total.

I very much suspect that in your case you will want to go to 40 or 100 gig networking if you want individual streams to go faster over the network (which means upgrading the client machines as well! $$$). Or just live with the fact that without >10 gigabit NICs you’ll be limited to 1 gigabyte/sec to the clients - if that.

Network really is the bottleneck for modern high speed SSD storage servers. You may be better off putting Gen 4 drives in the workstations (for local caching of projects whilst working on them) and putting a large number of SATA/SAS SSDs in the file server, as you’re likely going to be network bottlenecked to it anyway (and treat the server as a content library and/or repository for completed projects).

What I’m saying above, I guess, is that it’s a very real possibility that your perceived performance problem isn’t the OS. It’s a limitation of your network. Less performance than local with only 10G networking is to be expected.

For a small business with a small number of clients, multiple PCIe Gen 4 SSDs in a storage server are going to be difficult to make use of (speed-wise) without a comparatively huge investment in the network.

Great for VMs running on the same server, where the server is accessing the storage locally, but for NAS duty… overkill unless you’re on a 40-100 gig network.

5 Likes

The one you know how to use. Unless you have an employee dedicated to handling stuff like this, you presumably need to focus on running your business and not on learning how to do esoteric tasks on an OS that is completely new to you.

In answer to your question: probably FreeBSD, because this kind of thing is what ZFS is for.

3 Likes

There is also a difference between moving a large edited file over the network and live editing that very file over the network. The latter is necessarily much more reliant on low latency and low overhead. Dragging a 50-100GB file to the “network folder” just requires proper networking and corresponding pool write speed on the receiving end.
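As a rough illustration of how long that kind of bulk copy takes at different link speeds (assuming the pool on the receiving end can keep up, which is the other half of it):

```python
# Rough copy-time estimate for a large file at different network speeds.
# Assumes the receiving pool can sustain the write; all numbers are illustrative.

file_size_gb = 100              # e.g. a large export/project file
protocol_efficiency = 0.9       # rough allowance for protocol overhead

for link_gbit in (10, 25, 40, 100):
    usable = link_gbit * 1e9 / 8 * protocol_efficiency   # usable bytes per second
    seconds = file_size_gb * 1e9 / usable
    print(f"{link_gbit:>3} GbE: ~{seconds:.0f} s to move a {file_size_gb} GB file")
```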

Network is always slower than local. Between the speed of light and all the layers and protocols involved, you can never match the latency and responsiveness of storage that’s directly connected to your local CPU.

And I agree with @thro: if you want file copies faster than 1GB/s, get 25/40/100G networking. I’m not sure whether that 16-core CPU can handle multi-gigabyte-per-second streams with compression, but that may become a bottleneck as well. We also don’t know about your pool configuration, filesystem, redundancy, etc., so that might be a problem too, as parity RAID configurations are infamous for their bad write performance. And consumer drives quickly exceed their SLC/DRAM cache, which brings you back to the realities of sustained write performance on NVMe SSDs and very obvious performance deficits.
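To illustrate the SLC cache point with a very rough sketch (the cache and fold-back figures are purely illustrative, not from any specific drive):

```python
# Very rough sketch of where a consumer NVMe drive's SLC cache starts to matter
# for long sustained writes. Drive figures are illustrative, not a specific model.

cached_speed = 5.0e9        # write speed while the dynamic SLC cache has room, bytes/s
folded_speed = 1.5e9        # sustained write speed once the cache is full, bytes/s
protocol_efficiency = 0.9

for link_gbit in (10, 25, 40, 100):
    incoming = link_gbit * 1e9 / 8 * protocol_efficiency
    burst = min(incoming, cached_speed)          # short transfers fit in the cache
    sustained = min(incoming, folded_speed)      # long transfers fall back to folded speed
    limiter = "network" if incoming < folded_speed else "drive"
    print(f"{link_gbit:>3} GbE: burst ~{burst/1e9:.1f} GB/s, "
          f"sustained ~{sustained/1e9:.1f} GB/s (limited by {limiter})")
```

In other words, at 10G the network hides the drives’ sustained-write limits; go faster than that and the drives themselves (and the pool layout) become the thing you have to plan around.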

This. The OS isn’t important. Defining what you want and need, and then planning accordingly, is.

3 Likes