Noob thinking of getting a NAS or JBOD, any recommendations?

Hi,

I have come to the point where all my internal SATA and NVMe ports are used, I have 3 HDDs sitting on the desk, and 2 more drives connected via USB.

What do you recommend? My main uses are programming (far too many small files, git repos, etc.) and a music hobby: my DAW uses TBs of samples and lots of VST plugins, some of which stream data on playback, so SSD speeds are sometimes required. But if the bulk is stored on the NAS or JBOD/DAS, I can keep the speed-demanding ones on internal SSDs.

I have no need for access on my mobile etc, just the 1 pc, no need for media services, just storage.

As I know jack about these devices, how to set them up, or even how to plug them in to get file access, I need some help from the tech gurus on the Level1 forum.

Any recommendations?

I’ve watched a lot of videos, but they cover stuff I don’t care about or don’t really need.

The shortlist was:

  1. Synology DS1621+ desktop NAS with 6x 8TB Synology HAT3310 HDDs (how does this plug in? Can it connect directly via USB?)
  2. QNAP TS-664 NAS
  3. SABRENT 10-bay hard drive docking station (2.5"/3.5" SATA SSD/HDD dock)
  4. Another JBOD, but I can’t find it again
  5. QNAP TVS-h674T

Went down the rabbit hole of reading Amazon reviews, which range from “works great” to “rubbish”, and you can never trust the manufacturer’s own claims for any of them…

Which has left me thinking: screw it, just get a 2TB SSD and dock it externally when I need it, or hang it off my powered USB 3.2 hub plugged into the only USB-C port on the back of my PC, which already has a 1TB SSD on it.

My setup is an i9-9900K, an ASUS TUF motherboard, and 64GB of RAM.

Anyone got any suggestions?

You want a NAS as its own system that controls the storage and could give you cloud access and all that, while keeping your current hardware as its own separate PC to do work on?

Or are you looking to move your current PC, the one everything connects to, into a larger system that can fit it all in one chassis and has room for expansion?

I need more space to be available as a drive or drives on my PC, which programs may or may not run from, for work and hobby. I’m not sure what the future will require, but just more space that’s not as slow as an HDD would be good for a long time for me. I back up to an 8TB internal WD Black and another 8TB WD Black via an external docking bay, but audio program backups need their own drives. No real loss if they’re lost, I can download them again, but with my internet connection that could take a few weeks. It took me 3 weeks to download my Native Instruments software.

I have a 12TB pCloud lifetime account, which has me covered for the essential backups, restores and sharing.

I have 1 piece of audio software which uses 2TB, another that uses 1TB, and 2 more just like them, so with everything else I have to be selective about what I have installed (I keep the installers on HDDs in a drawer). For my programming work, I compress client folders once done and send them to pCloud. They don’t take up much space, though there are many millions of files between node modules, PHP vendor packages and Python packages; I exclude those when compressing, and if a project is uploaded uncompressed, those folders are still excluded from the upload.
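
For reference, that “compress but skip the dependency folders” step is basically the following. A minimal sketch in Python, assuming the folder names from above; the excluded names and paths are just placeholders to adapt.

```python
import tarfile
from pathlib import Path

# Dependency folders that can be re-created from package manifests,
# so there's no point archiving or uploading them (placeholder list).
EXCLUDE_DIRS = {"node_modules", "vendor", ".venv", "__pycache__"}

def skip_deps(tarinfo):
    # tarfile calls this for every entry; returning None drops it from the archive.
    parts = Path(tarinfo.name).parts
    return None if any(p in EXCLUDE_DIRS for p in parts) else tarinfo

def archive_client(project_dir: str, out_file: str) -> None:
    """Compress a client project folder, skipping the excluded directories."""
    with tarfile.open(out_file, "w:gz") as tar:
        tar.add(project_dir, arcname=Path(project_dir).name, filter=skip_deps)

if __name__ == "__main__":
    # Hypothetical paths, just to show usage.
    archive_client("clients/example-project", "example-project.tar.gz")
```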

If I could streamline this, it would be a godsend. Maybe a 6-bay NAS with two 3-drive RAID 5 arrays?

Or does the JBOD (or is it DAS?) just show up as more drives on my PC? I could then adjust my workflow around it, and adjust my SyncBackPro drive and selective folder syncs accordingly.

You can always just buy a used corporate desktop and install whatever OS you’re most familiar with, or a NAS OS like TrueNAS on it.

These have two internal 3.5” bays and sip power

Or add an HBA to a used corporate PC and get a lot more SATA ports, plus a bit better functionality out of a NAS OS / data protection!

Hit eBay for a Supermicro chassis, and run TrueNAS on whatever old motherboard you have laying around (or use your current one when you upgrade).


@starglider1 The point is… you have tons of options!

Do you want to DIY your solution or go prebuilt?

How much are you willing to spend on it?

A used corporate PC will be very unreliable. A Supermicro server will be reliable but loud. The quest for the best DIY NAS case is never-ending, so you may as well go with an off-the-shelf solution. SSD NAS units start to look very attractive.

With all that, since you do not need to share storage among multiple PCs, you probably will be happier just replacing/upgrading your PC with something that can handle more storage.

Okay, a few primers for ya. NAS devices “connect” to your system via your network; that’s the point of them. You can plug one into your system directly only if you have a spare Ethernet port, and they won’t connect via USB. A DAS (Direct Attached Storage) is something you plug directly into your own system, typically via USB. DAS units are cheaper than a NAS (Network Attached Storage), but most don’t support RAID, and many are cheaply built or poorly ventilated, so you need to be careful about what you get.

A JBOD is Just a Bunch Of Disks thrown together; there’s no protection or redundancy in that configuration. For that you want RAID, probably RAID 5 for the best balance between usable capacity and redundancy. A single RAID array is best; you probably don’t want to go making multiple arrays, as that wastes drive space.
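
If you want to sanity-check the capacity trade-offs, here’s a rough back-of-the-envelope sketch in Python, assuming the 6x 8TB drives from the shortlist above (real usable space will be a bit lower after formatting and filesystem overhead):

```python
def usable_tb(drives: int, size_tb: float, level: str) -> float:
    """Approximate usable capacity for common RAID levels."""
    if level == "raid5":
        return (drives - 1) * size_tb   # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb   # two drives' worth of parity
    if level == "raid10":
        return drives * size_tb / 2     # everything mirrored
    raise ValueError(f"unknown RAID level: {level}")

# With 6 x 8TB drives:
for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(6, 8, level), "TB usable")
# raid5 40.0 TB, raid6 32.0 TB, raid10 24.0 TB
```

That’s also why one big array beats splitting the bays: two separate 3-drive RAID 5 arrays would give 2 x (3 - 1) x 8 = 32TB usable, versus 40TB for a single 6-drive RAID 5.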

Personally, for simplicity & reliability I recommend the Synology NAS; it’s not hard to learn and is straightforward to use once configured. Getting a NAS box is the most user-friendly route versus trying to roll something yourself, and it’s the best way to gain some basic drive-failure protection via RAID. You don’t HAVE to buy Synology drives; most brands will work in it as long as you don’t use SMR-type drives, but when in doubt you can stick to Synology’s HDD QVL lists.

I think this might be worth focusing on. In my experience, moving git repos to a NAS is terrible from a performance perspective - at least when you’re used to NVMe.

As an example, my workstation’s /home/ is an NFS share on a NAS over 10GbE. This NAS has 32 drives as striped mirrors, on two SAS3 controllers, with ~100GB of DDR3 as cache. Sequential R/W is 3/2GB/s (respectively) with around 1500 truly random IOPS. Latency between the workstation and NAS is 0.12ms as reported by ping. Cloning a repo that takes 5 seconds to a gen5 NVMe SSD takes 35 seconds to the NFS share. A repo that takes 15 seconds to clone to the local SSD takes over 3 minutes to the NFS share.

Deleting directories with many, many small files also sucks. Best case you can probably delete a few thousand files/second over NFS, compared to hundreds of thousands/second on a fast SSD.
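
If you want to see the gap on your own hardware before buying anything, a crude sketch like this will do it (the paths are placeholders; point one run at a local SSD and another at a mapped network share):

```python
import time
from pathlib import Path

def time_small_files(target: str, count: int = 20_000) -> None:
    """Create and then delete many tiny files, printing files/second for each phase."""
    workdir = Path(target) / "smallfile_test"
    workdir.mkdir(parents=True, exist_ok=True)

    start = time.perf_counter()
    for i in range(count):
        (workdir / f"f{i:06d}.txt").write_bytes(b"x" * 64)
    create_s = time.perf_counter() - start

    start = time.perf_counter()
    for p in workdir.iterdir():
        p.unlink()
    delete_s = time.perf_counter() - start
    workdir.rmdir()

    print(f"{target}: create {count / create_s:,.0f} files/s, "
          f"delete {count / delete_s:,.0f} files/s")

# time_small_files("C:/temp")  # local NVMe (placeholder path)
# time_small_files("Z:/temp")  # mapped NAS share (placeholder path)
```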

Compiling also sucks over the network if you have lots of tiny compilation units. Generally builds take me an order of magnitude longer if they’re on a network share.

I gave up trying to do everything on a NAS ages ago^. The latency penalty is just too great. Storing large files, backups, documents, even live photo libraries? No problem. But when you get into the “many small files constantly being touched” scenarios, local wins every time.

^ I came close with 40Gb InfiniBand. The latency drops another order of magnitude, to a point where it’s not painful to work with git and do general development. But it is still very noticeably slower than a gen3/4/5 NVMe SSD (it’s decade-old tech, after all), and getting anything newer/faster becomes incredibly expensive incredibly quickly.


Thanks for the information everyone…

RE: Roll your own
That’s something I’ve never looked at and have no idea about. I would consider myself an advanced Windows user and I work with Docker, Linux Debian servers, etc., but I have no idea when it comes to networking, security setup, router configuration, routers in general, and so on.

Thanks for the detailed information, was very informative.

Ah, that’s not good. Hmm, so no programming work directly from the NAS then; this could also affect samples and presets for my music software. E.g. for a synth I have called Serum, I have about 120,000 presets, which are tiny files, and it takes forever to process them even on my NVMe (nearly 10 minutes). That would also rule out those VST instruments that stream sample data on demand; a WD Black is not fast enough to run some of them, causing crackling and cutouts (not buffer underruns).
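
(For what it’s worth, a quick walk over the preset folder shows why even local scans hurt: it’s one metadata operation per file, and a network hop would multiply each one. Rough sketch, with a made-up path:)

```python
import os
import time

def scan_library(root: str) -> None:
    """Walk a preset/sample library and time how long touching all the metadata takes."""
    start = time.perf_counter()
    count = 0
    total_bytes = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            total_bytes += os.path.getsize(os.path.join(dirpath, name))
            count += 1
    elapsed = time.perf_counter() - start
    print(f"{count:,} files, {total_bytes / 1e9:.1f} GB, "
          f"scanned in {elapsed:.1f}s ({count / elapsed:,.0f} files/s)")

# scan_library(r"D:\Serum Presets")  # hypothetical path
```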

I typed a lot here to work out what I was going to type next and removed it lol.

I think this would work then:

  • C: 2TB NVMe (sync my Documents folder to pCloud)
  • D: 4TB SSD - installed audio software (needs SSD speeds)
  • E: 4TB SSD - installed audio software (needs SSD speeds)
  • F: 4TB SSD - installed audio software (needs SSD speeds)
  • G: 4TB SSD - samples and my songs (could go on the NAS, as SSD speeds would be nice but not essential; that would free this drive up for software I’ve not got room to install)
  • W: 1TB SSD - active programming work
  • X: 24TB NAS RAID 6 - storage, with selective syncing of certain folders to pCloud (12TB)
    • Software I don’t want to try to download again due to slow internet won’t be cloud-synced for backup (which leaves around 6TB of data needing cloud backup)

Bit of info on pCloud

When I format my C drive and re-set up pCloud, it takes 2 to 3 days just to index the files. They really are not prepared for people with millions and millions of small files; their API is throttled to hell, accepting only a small block of file name and structure data every 2 to 5 seconds.

But they do have file and folder exclusion capabilities, which is why I chose them over OneDrive, Google Drive, etc. Dropbox can do it with symlink hacks, but sod that.



When, oh when, will cheap, reliable 16TB SSDs be a thing…


Makes me think: how the hell does this work for Mac Pro music creators with more audio library data than me? They must have daisy-chained USB extensions with external SSDs hanging out of every port…


I’ll have to give this more thought…

Thanks for the info everyone.

It is my opinion that having a server is a better start than having a NAS. (Don’t be pedantic, guys, clearly the S stands for server.) The reason is that you have specific needs that do not involve what is normally expected of a NAS installation, but rather something that can be learned on top of your existing knowledge.

What you are looking for is fast Ethernet (10Gbit+) and either file sharing or iSCSI (it seems to me that you switch drives around, so iSCSI might be more interesting).
Your current rig is old enough that I would vote against any expansion to it. On a more modern rig, going to x8/x8 with a GPU and an HBA would work well.

In a sense, you are looking for a drive cage with a decent CPU and RAM plus a 1:1 network link.
If you instead upgraded, you might be looking at just an HBA + JBOD.