I’m currently looking at building a new home server with a dedicated NAS device, and I’m trying to figure out whether what I want to do for the *arr stack/rTorrent is possible:
Basically, I have 10 Gbps internet to the home and want to make sure I can utilise as much of that bandwidth as possible for new downloads.
In my mind what I want to have happen is this:
*arr picks up the new release, and it’s downloaded to an SSD on the server
As the drive fills up or seeding targets are met, the download is moved from the server SSD to the NAS device (UNAS Pro 8) and deleted from the server SSD, while seeding continues and the file remains available in Jellyfin/Plex etc.
I know Unraid has a cache drive/mover or something along those lines, but I wasn’t really planning on using Unraid as the server OS, and would that even work with a separate NAS device?
What’s my best course of action here: should I just be content with the UNAS Pro 8 being the direct download target, or is there a better solution?
Maybe set up a RAM disk as a cache?
Or a dedicated SSD for caching the array?
I know ext4 can be set up with its journal on a separate block device, which can act as a write cache of sorts.
Sure, so this is called tiered storage in the enterprise world, and it sounds simple, but the implementation details are a real bear. I don’t know that anything commercially available exists off the shelf for you. Here’s the trick to making it work:
At some fundamental level, you need enough NAS performance to download at that full 10 Gb/s, or else the limiting factor is always going to be the NAS. But your workload isn’t constant; it’s bursty. So what you want is to download one file at 10 Gb/s, let the connection sit idle for ‘a while’, and then have enough room on the SSD to perform another full-speed download.
So the most important feature of your tiered storage solution is destaging data to the NAS to ensure there is enough free space to hold the new download.
“How?” is entirely up to you.
Least recently used: keep a list of file access times and destage the oldest file(s) to the NAS until some minimum number of GB are free on the SSD.
Biggest first: destage the largest file(s) on the SSD to the NAS, in order of largest to smallest.
Constant background offload: all data on the SSD is written to the NAS and deleted from the SSD once the copy and a verification step are complete.
File extension based: .mkv files are offloaded immediately, but .iso files persist for a longer period on the SSD.
Folder based mix: some folders might be constant offload, while others are biggest first, LRU, etc.
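The first strategy above, least-recently-used destaging, is simple enough to sketch. This is a hedged, minimal example, not a finished tool: the paths and threshold are hypothetical, the "NAS" is assumed to be an already-mounted directory, and telling your torrent client about the new file location is left entirely out.

```python
import shutil
from pathlib import Path

def destage_lru(ssd_dir: str, nas_dir: str, min_free_bytes: int) -> list:
    """Move least-recently-accessed files from ssd_dir to nas_dir
    until the SSD filesystem has at least min_free_bytes free."""
    ssd = Path(ssd_dir)
    nas = Path(nas_dir)
    # Sort by access time, oldest first = least recently used first.
    candidates = sorted(
        (p for p in ssd.rglob("*") if p.is_file()),
        key=lambda p: p.stat().st_atime,
    )
    moved = []
    for f in candidates:
        if shutil.disk_usage(ssd).free >= min_free_bytes:
            break  # enough room already; stop destaging
        dest = nas / f.relative_to(ssd)
        dest.parent.mkdir(parents=True, exist_ok=True)
        # shutil.move copies across filesystems, then deletes the source.
        shutil.move(str(f), str(dest))
        moved.append(str(dest))
    return moved
```

The other strategies are variations on the sort key (file size for biggest-first, suffix checks for extension-based), so the same loop structure covers most of the list.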
The thing that makes this work is being able to predict how large the next download is going to be. If you never download anything over 200 GB, then keep 200 GB free. If you never know, come up with some average value that is normally big enough, then tie your download software into your tiered storage solution. When the download software reports that the download is larger than the free space, the SSD immediately starts destaging data to the NAS to make room for it. That is easy to type, but might be incredibly hard to do in fact.
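The "how much room do I need to make" calculation itself is trivial; it's the wiring to the download client that's the hard part. A sketch of just the calculation, with a hypothetical function name and an optional headroom parameter I've added for padding:

```python
import shutil

def bytes_to_destage(ssd_path: str, download_size: int, headroom: int = 0) -> int:
    """Return how many bytes must be destaged from the SSD filesystem
    so that a download of download_size (plus optional headroom) fits.
    Returns 0 when the download already fits in the free space."""
    free = shutil.disk_usage(ssd_path).free
    return max(0, download_size + headroom - free)
```

You'd call this from whatever hook your download client offers when a new torrent is added, then feed the result into your destaging routine; that integration glue is the part that's "easy to type, hard to do."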
This is actually how the *arrs are set up to work out of the box. Just configure your download client to use a completed directory on the SSD, and set the respective *arr to look there for the release. It should copy the file to the appropriate directory on your other storage, and the download client will delete its copy after you hit whatever seeding ratio you like. For a short time, the file will exist in two locations.
You don’t need to do anything fancy with tiered storage or OS configuration.