Advice on building a fast central storage server using Unraid and a drive shelf

Inspired by the recent Wendell / GN Unraid ZFS video, I decided I needed to take the leap. My goal is to have a central storage server running Unraid, with a drive shelf holding ~36 drives in a mix of SSD, spinning rust, and NVMe. I need the client computers to be able to access this data at faster than SATA SSD speeds, and as close to NVMe speeds as possible.

This is for an audio setup that has a master machine and multiple slaves, which all load audio into samplers in RAM. Currently I have to have large fast storage on each slave and massive 20TB+ fast storage on my master rig. The goal is to put everything onto one central machine, so that the slaves and master all have "normal" storage and access everything lightning fast from the central machine.

It's easy enough to figure out a lot of this, but here are my main questions.

  • If purchasing a drive shelf to hold all the rust drives, it looks like a lot of them use SAS controller cables. Is this the best way to connect the drive shelf for speed when using 24-36 drives? I assume you then buy a multi-port SAS PCIe card to connect them.

  • My current Unraid server with 13 drives is nowhere near fast enough to work from over 10GbE, but I'm not sure where the bottleneck is, or how to figure out where it is. Using a 10GbE connection from my master workstation to the server I get 105MB/s read/write. So if I build a nice fast server with a drive shelf and connect to it using a 10GbE NIC, will I still get these pathetically slow speeds?

  • What is the ideal way for my client and master computers to connect to this central drive server for the fastest speeds? Should I just jump up to 40GbE? Will Cat 7 cable work for 40GbE? My cable runs need to be about 20 meters.

I appreciate any advice and suggestions. Cost is not an issue as long as we stay under the $4k-$5k range for solutions that really work, are reliable, and are impressive.

cheers,

Yes, you need a mini-SAS HBA. The exact model will depend on the disk shelf, but compatible models are generally easy to source. Most shelves use either one or two ports and aggregate the data for multiple drives through an expander card built into the shelf. Check the specs for the specific model you are interested in.
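For a rough sense of what those uplinks can carry, here is a back-of-the-envelope sketch. It assumes a SAS2 shelf (4 lanes per mini-SAS port at 6Gb/s each, with 8b/10b encoding), which covers most of the used shelves on the market; double the per-lane rate for SAS3:

```python
# Rough throughput ceiling for a mini-SAS uplink from a disk shelf.
# Assumes a SAS2 expander: 4 lanes per port, 6 Gb/s per lane, 8b/10b encoding.
LANES_PER_PORT = 4
GBITS_PER_LANE = 6       # 12 for a SAS3 shelf
ENCODING = 8 / 10        # 8b/10b line encoding on SAS2

def uplink_mb_per_s(ports: int) -> float:
    """Usable MB/s across the shelf's uplink(s), before protocol overhead."""
    usable_gbits = ports * LANES_PER_PORT * GBITS_PER_LANE * ENCODING
    return usable_gbits * 1000 / 8

print(uplink_mb_per_s(1))  # ~2400 MB/s over a single port
print(uplink_mb_per_s(2))  # ~4800 MB/s with both ports connected
```

Even a single SAS2 uplink comfortably outruns a 10GbE NIC, so the shelf connection is unlikely to be your bottleneck.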

Hard to say, as the bottleneck could be anywhere. I get 350MB/s off just 6 slow 5400rpm drives using FreeNAS. It may be many factors, and you will need to test and measure to find out what is wrong.
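One sanity check worth doing first: 105MB/s is almost exactly what a saturated 1GbE link delivers, so confirm the path is actually negotiating at 10Gb end to end before blaming the pool. A rough line-rate comparison, assuming ~6% of raw bandwidth lost to Ethernet/IP/TCP framing (a rule-of-thumb figure, not a measurement):

```python
# Approximate usable throughput for common Ethernet link speeds,
# assuming ~6% of the raw line rate goes to Ethernet/IP/TCP framing.
def usable_mb_per_s(link_gbits: float, efficiency: float = 0.94) -> float:
    return link_gbits * 1000 / 8 * efficiency

for gbits in (1, 10, 40):
    print(f"{gbits:>2} GbE ≈ {usable_mb_per_s(gbits):>5.0f} MB/s usable")
#  1 GbE ≈   118 MB/s   <- suspiciously close to the 105 MB/s observed
# 10 GbE ≈  1175 MB/s
# 40 GbE ≈  4700 MB/s
```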

Define "fastest". Also, what is fast enough? If your throughput target is a single NVMe drive on the host, then 10GbE is enough. 40GbE is overkill in a single-user home setup and needs different hardware (QSFP+ optics or DACs rather than RJ45 copper). It may not be worth the added cost, but I don't know all your use cases.

Thank you for the great answers. Regarding loading speed to the clients: the files pulled off the central NAS go straight into the client's RAM, so presumably they would not be limited by the client's drive speed during transfer. I guess a reasonable goal would be faster than SATA SSD, but realistically not as fast as Gen 4 M.2, so maybe 1000-2000MB/s? Each client loads 128GB of data into RAM and holds onto it, then swaps bits in and out, with the disk hits going to this central network drive.
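To put that target in perspective, here is a quick worked example of how long each client's 128GB load would take at different sustained rates (the rates below are illustrative assumptions, not measurements):

```python
# Time to stream a 128 GB sample library into RAM at various sustained
# transfer rates. The rates are illustrative assumptions, not measurements.
LIBRARY_GB = 128

rates = [("current (observed)",    105),
         ("SATA SSD class",        550),
         ("10GbE near line rate", 1100),
         ("goal, upper end",      2000)]

for label, mb_per_s in rates:
    minutes = LIBRARY_GB * 1000 / mb_per_s / 60
    print(f"{label:<22} {mb_per_s:>5} MB/s -> {minutes:4.1f} min")
```

At the 1000-2000MB/s goal the full load drops from roughly 20 minutes today to one or two minutes, which a well-tuned 10GbE link in front of an SSD/NVMe-backed pool can plausibly deliver.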

Your speeds off 6 slow drives in FreeNAS make me really wonder what my current bottleneck is, and whether I should figure that out before changing to anything new. I will do some more testing and research today.

thanks
