Building a "fast" custom NAS

Hi there!

First time posting, so sorry if these are naive questions.

If I want a high-performance NAS on a fairly tight budget (around 10k USD), does the following sound like a plan, or would it not work?
I wanted to check with you folks, who know a lot more, to avoid getting hurt in the process.

  • FreeNAS
  • some nice Ryzen CPU with ECC support.
  • some unbuffered ECC RAM.
  • a motherboard with enough PCIe slots.
  • some PCIe M.2 expansion cards.
  • fill them with some nice M.2 PCIe 3.0 drives, possibly.
  • some NICs (25GbE to 100GbE), depending on price.
  • some HDDs for a big but slow volume as well.

SMB, FTP, and wiki services are what we use the most.
It's going to serve 2-4 workstations doing 3DCG work, plus some remote workers connecting over FTP occasionally.
For the record, we're currently using a 10GbE QNAP populated with Samsung 860/870s. Not bad, but some of our workloads would appreciate more snappiness and faster transfers.
Especially compositing, which requires really fast transfers of frames of 50-400 MB each.
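
For a rough sense of what those frame sizes imply for the network, here is a back-of-the-envelope calculation. The frame sizes come from the workload described above; the 24 fps playback rate is an assumption for illustration:

```python
# Rough link-speed estimate for moving comped frames in real time.
# Frame sizes (50-400 MB) are from the workload; 24 fps is an assumed rate.
def required_gbit_per_s(frame_mb: float, fps: float = 24.0) -> float:
    return frame_mb * 8 * fps / 1000  # MB -> Mbit, times frames/s -> Gbit/s

print(required_gbit_per_s(50))   # -> 9.6 Gbit/s, small frames
print(required_gbit_per_s(400))  # -> 76.8 Gbit/s, large frames
```

So at the large end, real-time playback would saturate even a 40GbE link, which is why more than 25GbE per workstation is on the table here.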

Since I don’t know much about FreeNAS or building a custom NAS in general: would it be complicated for my use case?
What basic problems could I run into? What should I watch out for?
Transfer speed and latency are very important for us; would they be a problem?
Would ZFS affect performance in a bad way?
What should I take into consideration hardware-wise?
Or should I save myself the hassle and stick with a pre-built solution, even if it doesn’t have the ideal M.2 PCIe slots and leans too heavily on SATA ports?

Sorry for the so many questions.
And thank you in advance!


Yes, that’s quite feasible. Actually, some time ago Linus (LTT) made a video on how to save 45k building a machine spec’d like the commercially available Jellyfish, as well as one outlining how much you can get for what a Jellyfish actually costs (60k+). Base your system on this:

You can save on the network cards, the CPU/mainboard, storage drives, and RAM, though treat RAM as a last resort.

Dual-port 40 Gbps 8-lane PCIe 3.0 NICs are very cheap these days.
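
One caveat on those cards worth checking: a PCIe 3.0 x8 slot tops out below 2x 40 Gbit/s, so both ports can't run at full line rate at once. A quick check, using the standard PCIe 3.0 per-lane figures:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
lanes = 8
gbit_per_lane = 8 * 128 / 130      # usable Gbit/s per lane after encoding
slot_gbit = lanes * gbit_per_lane  # total usable slot bandwidth

print(round(slot_gbit, 1))  # -> 63.0 Gbit/s, less than 2 x 40 Gbit/s
```

Fine for one port at full tilt plus some headroom on the second, just not both saturated simultaneously.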

How much total space do you need?
How much low latency space would you like per workstation?
Are your numbers padded for future growth?

I’m asking because Chia made hard drive prices double what they were two months ago. So it might be cheaper to put a Kioxia CD6-R, or another cheaper M.2 form factor drive, into the workstations as local low-latency scratch space and keep the slow high-capacity NAS.


Hi again.

Thank you for your comments.
I took a look, and although it’s not a complete tutorial, it does show it’s possible.

That is a start…
I liked the part about not using a switch; it was a big problem for me to find a reasonably sized switch that can connect a few computers at 100GbE, for instance.
But if I can use the server for that purpose… (I guess I should go for PCIe 4.0 with 128 lanes), so Threadripper or Epyc (Rome or Milan).
That would be a nice solution. I’m not sure if using it this way has other implications or possible problems.
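
As a sanity check on whether 128 lanes is enough for a switchless point-to-point layout, here is a hypothetical lane budget. The slot assignments below are assumptions for illustration, not a specific board's layout:

```python
# Hypothetical PCIe lane budget on a 128-lane Epyc (Rome/Milan) build.
# Device list and slot widths are assumptions, not a specific board layout.
devices = {
    "quad-M.2 card #1": 16,
    "quad-M.2 card #2": 16,
    "dual-port 100GbE NIC (workstations 1-2)": 16,
    "dual-port 100GbE NIC (workstations 3-4)": 16,
    "HBA for the slow HDD pool": 8,
}
used = sum(devices.values())
print(used, "of 128 lanes")  # -> 72 of 128 lanes, plenty of headroom
```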

I’m gonna make some parts lists… investigating properly and come back here for another check if you guys have the time to comment.

Regarding availability: in Japan (where I am), the price hit doesn’t seem to be reflected that much yet, from what I could see.

I don’t need a lot of fast space:
12 TB for the fast tier (NVMe SSD), approx.
24 TB for the slow tier (hard drives), approx.

For the fast machines, I would love to have more than 25GbE. 4 machines.
The rest of the devices can be on 1GbE, 2.5GbE, or 10GbE.

If I can avoid the switch… 100GbE feels more like a real option.

Well, thank you again for your help. I’ll run some numbers and come back to you guys :smiley: Thank you!

I finally got some Mellanox ConnectX-5 100G PCIe 3.0 x16 cards and a DAC,
and I'm going to do some tests with them.

Not entirely sure if the extra cost of the 100GbE cards is going to be worth it… ah…

For storage I have an Aorus GC-4XM2G4. The documentation says it only accepts drives up to 2 TB; I'm not sure if that's a hard limit or just a recommendation, but I don't own any 4 TB drives to test with at the moment, so I guess I'd need to get another card.

Could 8 M.2 NVMe drives across 2 different cards be RAIDed together?
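
For reference, my usable-capacity math for 8x 2 TB drives under two common ZFS layouts (the layouts here are just illustrative options):

```python
# Usable capacity for 8x 2 TB NVMe under two common ZFS layouts.
# ZFS only sees block devices, so which quad-M.2 card each drive sits in
# shouldn't matter, assuming the board bifurcates both slots to x4/x4/x4/x4.
drives, size_tb = 8, 2

stripe_of_mirrors = (drives // 2) * size_tb  # 4x 2-way mirror vdevs
raidz2 = (drives - 2) * size_tb              # one 8-wide raidz2 vdev

print(stripe_of_mirrors)  # -> 8 TB usable
print(raidz2)             # -> 12 TB usable
```

A single raidz2 vdev would land right on the ~12 TB fast-tier target, while striped mirrors trade capacity for better random performance and simpler resilvers.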

  • For the CPU I was planning the least powerful Rome Epyc: AMD EPYC 7252.
  • For the motherboard, the ASRock Rack ROMED8-2T; plenty of expansion.
  • Kingston KSM32ES8/8HD (8 GB ECC) x 4 to start, for the RAM.

For the NVMe drives I'm a bit confused too. Sequential speed will add up with RAID, but what about IOPS? Should I focus on getting high IOPS rather than transfer speed?
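
The two metrics are tied together by I/O size: throughput is roughly IOPS times the I/O size, so large sequential frame transfers saturate a link at quite modest IOPS, while high IOPS mostly pays off for small random I/O. A rough illustration (the I/O sizes below are just examples):

```python
# Throughput = IOPS x I/O size. Large sequential I/O needs few IOPS;
# small random I/O is where high IOPS ratings actually matter.
def throughput_mib_s(iops: float, io_size_kib: float) -> float:
    return iops * io_size_kib / 1024  # KiB/s -> MiB/s

print(throughput_mib_s(3000, 1024))  # -> 3000.0 MiB/s at 1 MiB I/O
print(throughput_mib_s(3000, 4))     # -> 11.71875 MiB/s at 4 KiB I/O
```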

How do you guys see the components overall?
Am I making any beginner mistakes? Would you recommend something else?
Thank you!

You very much can build a fast custom NAS for under $10K. I recently did something similar, though for a very specialized application. In my case, I had to capture continuous incoming data streams over 5x 10Gbps Ethernet (though each at only ~1/3 of the link rate). I used Samba on CentOS, with an LVM striped array of SATA disks. (The disks had to be removable, and NVMe drives hadn't really landed when I designed it.)

I also played with using an M.2 drive as a write-back cache in front of a striped array of spinning disks (again using LVM), and got impressive performance. You might want mirroring or a fancier RAID. Or do periodic backups to attached storage. It all depends on the workload (and your comfort level).
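
The win from a cache tier like that can be roughed out as a hit-rate-weighted average of device latencies. The hit rate and per-device latencies below are illustrative assumptions, not measurements:

```python
# Effective access latency of a cached array, as a hit-rate-weighted average.
# The 90% hit rate and device latencies are illustrative assumptions.
def effective_latency_us(hit_rate: float, ssd_us: float, hdd_us: float) -> float:
    return hit_rate * ssd_us + (1 - hit_rate) * hdd_us

print(effective_latency_us(0.9, 100, 10_000))  # roughly 1.09 ms vs ~10 ms raw HDD
```

The point being that even a modest hit rate pulls average latency an order of magnitude below the spinning disks, which is why the write-back cache felt so fast.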

