Video Editing NAS for 5 editors - 10GbE - FreeNAS/TrueNAS/Unraid

Hey guys! :wave:
First of all, thank you very much for all the good information and help here. With it I built my first Unraid server 4 years ago. At the time it was meant to be used as a video editing NAS, Plex server and for other applications. That has since changed. Shortly afterwards I founded a small video production company and we had 3 editors using it as a NAS only. (RIP my Plex library :rofl: )

Upgrading to 10GbE NICs + a switch helped a lot and everything worked surprisingly well. But with higher bitrate and resolution footage + adding a 4th editor, I think we hit the limits of Unraid, mainly because of the missing read cache. With high bitrate footage, we now get poor performance while editing. All editors use Adobe Premiere Pro.

So I looked back into FreeNAS / TrueNAS and OpenZFS, read a lot of forum posts, watched videos and tried to do my research. L1T, LTT, 45Drives … I found great information everywhere, but I am now a bit overwhelmed and would like to ask for your opinion. What has your experience with ZFS been? What is important for video editing?

Briefly, about me: I feel more at home using a GUI. I have already built many PCs and I'm excited about enterprise gear, but I don't have much practical experience with it. The new server would co-exist with the currently used Unraid server.

I came up with the following system:

Mainboard: Supermicro X11SPI-TF
CPU: Intel Xeon Silver 4208
RAM: 96 GB (6x16GB) DDR4 2400 ECC REG
FAN: Noctua NH-U14S
CSE: Inter-Tech IPC 4U-4410
PSU: 800W (have one lying around)

SSD: 2000GB Patriot Viper VPN100 M.2 2280 PCIe 3.0 x4 NVMe 1.3 3D-NAND TLC

HDDs: 5 x 10TB Toshiba Enterprise Capacity MG06ACA10TE 256MB SATA 6Gb/s

My plan was to use the NVMe as L2ARC and create one RAIDZ1 vdev, giving me around 38TB of usable storage, with the option to add a second vdev with another 5 drives in the future.
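
On the command line that layout would look roughly like this (device paths are placeholders; TrueNAS would normally do all of this through the GUI):

```
# One RAIDZ1 vdev out of the five 10TB disks (placeholder device paths)
zpool create tank raidz1 \
  /dev/disk/by-id/ata-TOSHIBA_MG06ACA10TE_A /dev/disk/by-id/ata-TOSHIBA_MG06ACA10TE_B \
  /dev/disk/by-id/ata-TOSHIBA_MG06ACA10TE_C /dev/disk/by-id/ata-TOSHIBA_MG06ACA10TE_D \
  /dev/disk/by-id/ata-TOSHIBA_MG06ACA10TE_E

# Add the NVMe SSD as L2ARC (read cache)
zpool add tank cache /dev/nvme0n1
```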

This configuration costs approximately $3,850 (€3,200 in Germany).

Am I missing something? Would you recommend something different from your experience?
Is it worth using 32GB RAM sticks to double the RAM to 192GB?

Thank you for all your input. I'd be happy to share my new server, the build process, the performance tests and my experiences with you :grin:

I'm no ZFS expert by any stretch of the imagination, nor do I edit videos. However, I don't think adding an L2ARC or more RAM will give you the results you want.

Mechanical storage just lacks the throughput to satisfy such a workload without scaling up to a huge pool of drives. I feel like a nice big pool of SSDs for the projects you're working on, along with common assets you use every day, could help a lot more. The mechanical storage pool could archive videos or other data you don't need to access all the time.

Hello there MaxWo

I would recommend the company “Jellyfish” if you don't want to go custom and get overwhelmed with what to choose. They can build exactly for your needs; their specific expertise is in video sharing, and they sell towers and racks with their own software and support.

If you go custom all by yourself, Unraid is recommended. It costs money but is more fully featured, and you can't go wrong with it, even if you grow bigger.
(If you don't want to pay anything, FreeNAS is also great, but because it's free it has its limits.)

I hope this made things clearer for you :slight_smile:
Have a great weekend

Well, Linus uses Windows and 25G :slight_smile:

Still, as much RAM as you can manage will be best. Video files are huge. Nothing beats RAM.

There’s a learning curve to ZFS/TrueNAS and I don’t recommend learning on equipment/data that are critical to your or anyone else’s job.

Synology is usually the safest option for people in this group.

No matter what you choose, if you have to use hard drives, you should go with striped mirrors (RAID10). If you had dozens of hard drives or SSDs, then striping multiple RAID6/RAIDZ2 vdevs could work, but otherwise the performance isn't going to meet your needs.
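
As a rough sketch (pool name and device paths are placeholders), a striped-mirror pool of 8 drives would look like this:

```
# Four mirrored pairs striped together (ZFS "RAID10"); every additional
# mirror vdev adds roughly one disk's worth of IOPS to the pool.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh
```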

Yeah, the Lumaforge Jellyfish is also a good option although the markup is pretty high.

I feel like if the OP had jellyfish sort of money, they wouldn’t be here asking about building a TrueNAS box…

LTT did a video about LumaForge after several YouTubers went out and bought a Jellyfish even though he had built them giant, expensive storage servers. Basically it came down to this: if Linus had known they wanted performance over capacity, he would have built them very different boxes.


I’d steer clear of RAIDZ1 personally.

Especially with 10 TB disks if you have a failure you’re going to be waiting a long time for rebuild and have a large window for double-disk failure (and thus loss of the dataset). Go for at least RAIDZ2 per VDEV or maybe consider multiple mirror VDEVs to get more VDEVs and more performance. But you’ll need more disks to get the capacity.

Most array vendors have not recommended RAID5 or its variants for drives larger than 1 TB for years now.

edit:
I also doubt that 10TB SATA drives will keep up - especially only 5 of them. Sure, cache will help reads, but… eventually stuff needs to hit the disk.

I'd consider splitting the unit into two pools - one for archive with the 10TB drives and an SSD-only pool for "scratch" or current projects. Maybe look into using SSDs as a mirrored log device to help with writes. But again, eventually it needs to get to the disks - and they're slow. You'd also need to tune how synchronous writes are handled to get speed out of it, which comes with caveats.
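
For illustration (device paths are placeholders), adding a mirrored SSD log device to an existing pool is a one-liner - though note a SLOG only accelerates synchronous writes; async writes buffer in RAM and flush straight to the data vdevs anyway:

```
# Mirrored SLOG on two NVMe devices (placeholder paths)
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
```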

I'd try to get some metrics from the current box if you can, regarding the read vs. write ratio, so you can make some judgements about how bad the write bottleneck is, if any.

Maybe try it with the SSD cache and see how you go; we don't know your exact workload and can only speculate.

I'd be prepared to add an SSD pool if performance proves insufficient, however (and use the spinning disks as archive).

edit:
see here for log device info. TLDR: it won't magically give you permanent SSD write speed, hence I feel you may need an SSD pool to keep up if writes are also a thing.

Also note in that article the write penalty for RAID-Z vs. striped mirrors. If performance matters, really, really try to avoid RAIDZ if possible. Even if you aren't write-heavy, the write-penalty overhead on every write is time the disks can't spend reading.

RAIDZx is great for capacity with resiliency, but it sucks for performance. And it sounds like you're chasing performance.

TLDR: look up these things for yourself and try to understand them before committing money:

  • What percentage of your workload is read vs. write. If a write costs you 4x as many IOPS as a read (for example), then even if the write workload is only 30%, it can have a significant performance impact on your underlying storage.
  • ZFS vdevs vs. storage pools and how different vdev configurations perform (you should be able to calculate rough IOPS and throughput for X spinning disks in various configurations. This excludes cache but gives you a baseline for the underlying disk storage. Ultimately, at some point you run out of cache or the data isn't already in cache - and you hit the disk).
  • ZFS L2ARC vs. SLOG and their limitations.
  • The performance characteristics of SSD vs. SATA rust.
  • Make sure to turn off access time tracking (atime) - see the one-liner below. Otherwise you're generating ZFS writes every time a file is read-accessed :smiley:
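
That last point is a single property per dataset ("tank/footage" is just an example name):

```
# Stop ZFS from writing an access-time update on every read
zfs set atime=off tank/footage
```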

I've played with ZFS for a decade or so, but most of my enterprise experience is with NetApp, EqualLogic and Pure Storage.

In my experience, flash (SSD) cache works well right up until it doesn’t (workload doesn’t fit in cache any more). Then it falls off a cliff in spectacular fashion.


To be completely honest, I wouldn't even try to start without complete details. And before I did any work, I'd charge a consultation fee. Video storage, especially live editing setups like yours, is an expensive niche that has gotten muddied with amateurs over the years; it's not worth doing for free when others who charge nothing have already come up with bad solutions.


Uuugh. A video editing server is hard to do right. Unless the editors work decentralized - i.e. store the raw footage on the server, copy the material to their machines, edit locally, then upload the final project back to the server - you need a heavy lifter.

Disclaimer: I only have experience with NAS for virtualization, which isn't so hard on the storage. I will give you a different approach, how I would do it. I'm not sure this is the best thing to do, but I'd rather get the 32 GB DIMMs for that sweet ZFS cache, leave everything at the defaults, then do striped RAID-Z2 vdevs using 6 HDDs per RAID-Z2, so 12x 3.5" bays. This way you can lose up to 2 disks in each vdev, so if “the right” HDDs fail, you can get away with up to 4 failed disks. If you have the budget, I highly suggest you get a chassis with 24x 3.5" bays, so later on you can just add more RAID-Z2 vdevs to the mix (up to 4 vdevs total).
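
Roughly, that layout (pool name and device paths are just examples) would be:

```
# Two 6-disk RAID-Z2 vdevs striped into one pool; each vdev survives
# the loss of any two of its disks.
zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
```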

But I'm very worried about your backup solution, and I doubt the above build fits your budget, so I think an alternative would be building an 8-disk server with ZFS striped mirrors (RAID10), then at a later date building a second one with 6 or 10 disks in RAID-Z2 for backup. Backup, backup, backup!


Thank you all of you for your time and effort to help me out. I really appreciate it.
With the new information I am now reading even more and looking at my options.

Jellyfish starts at $30,000 with “only” 80TB … I know, if I had absolutely no idea what I was doing, I would go with a solution like this.

We have a great cloud backup solution, but RAIDZ2 would definitely make sense.
Perhaps with the existing Unraid box as a nightly backup target.
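
Something as simple as a nightly rsync from the new ZFS box to an Unraid share could cover that - host names and paths below are made up:

```
# Crontab entry: every night at 02:00, push the project data to the old
# Unraid box over SSH (placeholder paths and host)
0 2 * * * rsync -a --delete /mnt/tank/projects/ backup@unraid-box:/mnt/user/backup/projects/
```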

At the moment I don't have my back completely against the wall when it comes to storage space on the current Unraid box. We are currently copying the files that cause performance problems to the local machines (unfortunately, that is not a solution forever).
Therefore I have planned to test and optimize the new server solution for 2-4 weeks before any critical data comes on it.

I think I will start with your suggestion, @ThatGuyB, and order a couple of drives to pair with spare parts I have lying around before I get the actual hardware. I have now read and learned so much about ZFS in theory that I think I just have to try it out (and sooner or later I'll need hard drives one way or another).

Thank you again. I will definitely post my results as soon as I have achieved something useful :grin:


Good luck. Also check out my post here: ZFS RAID Config for old disks
where I explain a few things I learned while dealing with my Proxmox box. And check out the links in the OP to Sarge's explanations in the other, older topic; he does a great job explaining lots of ZFS stuff. There's lots to learn. TrueNAS Core or Proxmox should be fine for what you need, I think.

I think you need to be realistic about what you're keeping on hot storage vs. your budget, and maybe consider multiple storage tiers. If you can be disciplined enough about checking all work into (or backing up to) the archive every day, maybe even make the fast tier just striped SSDs to get better capacity and speed for the cost.

Trying to build one high-capacity box that is also fast won't be cheap - and a handful of high-capacity drives simply isn't enough. The problem is that the additional controllers and enclosures to house enough disks aren't particularly cheap unless you go used, and then you need to be careful and deal with the setup complexity.

It may be best to bump up your workstations with a heap of SSD and have the editors work locally (after copying content), using the NAS as a content repository they can check in and out of.

At least bulk copies are all sequential read and spinning disk isn’t too bad at that.

Also, unless you're going with multiple 25 gigabit or faster network links (also not cheap), actually working from local M.2 will be way faster than over the LAN.

Edit: Level1 did a video editing ZFS box using a few old enterprise disk shelves. Check that out. If you're including spinning disks in this, you need spindle count. Think more along the lines of many 300-600 GB SAS drives rather than a few 10TB SATA drives to get the spindle count and thus the throughput.

Or bite the bullet and make the live pool all SSD and relegate the 10TB drives to archive storage. It’s all they’re good for really.

Agreed.

Also, one note, I don't know if it was mentioned: use a large record size. It will benefit you on sequential data and reduce metadata memory requirements.
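
For example ("tank/footage" is a placeholder dataset name), bumping the record size to 1 MiB:

```
# 1 MiB records suit large sequential video files; the setting only
# applies to data written after the change.
zfs set recordsize=1M tank/footage
```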


Whoever was recommending Jellyfish, watch Linus’ new video. :wink:

It seems like Linus did 2x 10-HDD striped RAID-Z2 vdevs, which is also what I recommended, but with 6 drives (to avoid a performance penalty*), and he also added a SLOG and L2ARC. I stayed on a lower budget, but hey, it should still be very close to a Jellyfish.

Edit: *by performance penalty, I mean the penalty from the stripe width not being a multiple of the 4 KiB sector size.

