TrueNAS for Emby/Plex w/ Lots of Users

Hi all!

I’ve spent the last few months diving deep into TrueNAS ZFS pools and Unraid’s approach, comparing them to off-the-shelf options like Synology (which is what I currently use). At this point, I’m ready to make some purchasing decisions.

You can read about my current setup here: Homelab Efficiency Improvements and Consolidation. In short, I’ll be retiring my old Xeons and keeping my AMD Epyc systems along with two Xeon-D based systems (perfectly fine for firewall duties). My next challenge is solving my storage needs, which is the focus of this post.

Current Setup: My storage is spread across several Synology systems, along with a few servers running TrueNAS with wide RAID-Z1 arrays exposed as iSCSI targets to a Windows host (Emby and Plex) serving around 30 family members and friends. At any given time, anywhere from 2-3 to 8-9 simultaneous streams can be active. Additionally, I’m using Stablebit’s DrivePool software to pool all the NAS devices (RAID-Z1 on TrueNAS, SHR on the Synology units) into a single logical drive, and DrivePool then duplicates my data. While this setup has worked for years, I’m now encountering speed issues under heavy load, and the storage isn’t optimized for my needs. I currently have around 210 TB of total storage, but with duplication only about half of that is usable.

Proposed New Setup: Now that I’ve learned more, here’s what I’m proposing. I’d really appreciate any feedback, as I’m still a beginner with ZFS and TrueNAS.

Hardware:

  • HL15 Case
  • Supermicro/Asrock Rack SP3 motherboard
  • AMD Epyc 7452 CPU
  • 256 GB RAM
  • 15x 22 TB WD Ultrastar SAS HDDs (either 3 vdevs of 5 disks each in RAID-Z1, or 2 vdevs of 7-8 disks each in RAID-Z2)
  • 4x Micron 7300 Pro 960GB M.2 (striped mirrors as a special vdev, to keep metadata and small files on flash)
  • Dual 10Gb NICs (one dedicated for storage VLAN)

Plex/Emby host will run on a separate, new B650-based Epyc 4004 server with two 10Gb NICs (one dedicated to storage).

I’m not entirely sold on Optane yet, though it seems great. Unfortunately, I missed the fire sale, so prices for the P1600X are now around $100-150. Since Optane has become so expensive, it seemed more productive to try something more cost-effective: I opted for the Micron 7300 Pro M.2 drives, which appear to offer solid performance and endurance for the price. Still, if the long-term performance and stability justify Optane, I’m willing to invest, and if there’s a better alternative, please let me know!

Questions/Concerns:

  1. Is this setup optimal for my use case? I’d love feedback on the overall design and any potential improvements.
  2. File allocation size: Is the default file allocation size fine, or should I tweak it? I’ve heard that choosing the wrong size can negatively affect the special device. My goal is to ensure my library loads quickly and isn’t bottlenecked by spinning disks, as it is now.

I’m currently using a version of Wendell’s script (via PowerShell on Windows) to check file allocation sizes, but I can’t do it directly on the storage devices since I’m using iSCSI for everything. Going forward, I plan to switch to NFS or SMB (though Emby/Plex will remain on Windows for specific reasons).

Here’s the output:

File Size Distribution:
1k: 14549
2k: 85435
4k: 59985
8k: 17351
16k: 19505
32k: 48853
64k: 66460
128k: 34688
256k: 16305
512k: 9666
1M: 5519
2M: 1952
4M: 903
8M: 155
16M: 348
32M: 803
64M: 1989
128M: 6144
256M: 12318
512M: 16739
1G: 21666
2G: 14173
4G: 4812
8G: 1230
16G: 563
32G: 98
64G: 35
128G: 2
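(For anyone wanting to generate a similar distribution directly on the TrueNAS side rather than via PowerShell over iSCSI, a rough shell equivalent is below. This is a sketch, not Wendell’s actual script; the path in the usage line is hypothetical.)

```shell
# Bucket file sizes into power-of-two bins, similar in spirit to the
# PowerShell script mentioned above. Uses GNU find/awk, so it runs
# fine in a TrueNAS SCALE shell.
size_histogram() {
  find "$1" -type f -printf '%s\n' | awk '
    { b = 1024; while ($1 > b) b *= 2; bins[b]++ }
    END { for (b in bins) printf "%d: %d\n", b, bins[b] }
  ' | sort -n
}
# Usage (hypothetical mountpoint): size_histogram /mnt/tank/media
```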

Thanks in advance for any insights!

Looks like I very much missed the Optane fire sale, if only by a few months. Optane is EXPENSIVE as heck now. I can’t find any P1600X drives for less than $100, and the 118GB versions are up near $250 on eBay. I’m open to other alternatives if the added cost isn’t worth it; if Optane is still worth it even at those prices, let me know.

Edit: Did a bunch of digging around and seems like the Micron 7300 Pro 960 GB might not be a bad way to go. For sure not as fast in many ways as the Optane, but given my storage is really only for multimedia storage, it probably doesn’t make any sense to overdo it. Any other tips/suggestions of other models to consider would be greatly appreciated. Thanks!

I’ve had 2 of those exact drives running as a special vdev on my backup server for the past 1.5 years or so. They’ve been outstanding.

Thanks @adman-c! Two of the Micron drives, I’m assuming? I didn’t even consider how useful these would be for a backup server but that makes perfect sense…hmmmm. My Proxmox backup server might be in for a treat soon as well, then.

Sorry, I wasn’t clear. Yes, the Micron 7300 Pro 960GB. And yes, it makes the responsiveness of PBS soooo much better, at least if your datastore is backed by zfs, as mine is. Browsing the contents of the datastore is much more responsive, and garbage collect/prune jobs are also way faster. Verification is read/cpu bound, so the speed of those tasks wasn’t hugely improved. But overall a pretty worthwhile upgrade for my pool, at least for the $80/drive I paid back in 2023.

EDIT: I also have an NVMe-backed special vdev on my primary zfs pool and I love it there. Just a huge fan of the special vdev.

That sounds great. Can you tell me if there’s anything about my config you’d change? Does all of that seem reasonable @adman-c ?

I think the main bottleneck, with your storage bogging down under many users, is that it sits on network-attached systems over 10 Gb iSCSI. This will cap your performance quite badly, IMO. Is there any cheap way you can convert them over to direct-attached disks on an external HBA connection?

Unfortunately no. This was back before I had a decent job, so I was using lots of hand-me-down equipment. In fact, most of the Synology devices are on 1 GbE; I think one or two are bonded at 2 or 4 GbE. They’re also quite old: many are from 2015 or 2016, and I think the newest might be from 2018. So yeah, lots of bottlenecks everywhere. Thankfully, we’ll be fixing that with this new setup. :) Appreciate your insight!

Is there a specific reason you want to use AMD for the plex/emby server? If any of your users require transcoding, you’ll get much better/more efficient performance from an Intel chip with Quick Sync. The main NAS looks pretty good, although 3 raidz VDEVs with drives of that size might be a little risky for some. If you have backups or a fast enough pipe to redownload in the case of a disaster, then you can decide if that is a level of risk you’re willing to tolerate.

Good question @adman-c! I have an Intel A380 that will be doing transcoding (I also have a P2000 I could use). I’m doing the AMD system for Plex/Emby because some of the background tasks it runs are CPU-dependent, and I find they run extremely fast (and power-efficiently) on the AM5 Epyc 4004/Ryzen 7000/9000 series parts.

As for the NAS, re: 3 RAID-Z1 vdevs: in all my years I’ve had exactly 2 drives start to go bad on me. Nothing on this server is irreplaceable; it would just be inconvenient to re-rip things. I’m trying to maximize total capacity, so it’s either 3 vdevs of 5 drives each in Z1, or 2 vdevs in Z2 (one with 7 drives, the other with 8). Does the wider option sound better? I feel like most people want to avoid super-wide vdevs.

That sounds very well thought-out. AMD is definitely more power-efficient once you start using the cores (Intel is more efficient if your server is mostly idle), and at your number of users the amount of work that’s left to be done on the CPU (audio, subtitles, etc) even with GPU reencoding probably makes having 16 fast, reasonably-efficient cores worthwhile.

I’m in a damn near identical situation, migrating away from my QNAP and Synology devices toward a TrueNAS solution for storage and a separate system for VMs.

When it comes to the disk layout, everything I’ve been reading/seeing seems to imply that more vdevs within the pool would also roughly scale performance?

I am currently trying to decide if it’s better to do 3 vdevs of 3-wide Z1 with 18TB drives, or 2 vdevs of 6-wide Z2. Both yield the same effective space for me.

Additionally, correct me if I am wrong, but wouldn’t that also make it easier to upgrade or swap disks within a vdev later down the line? E.g., if I want to move to 26TB drives, it’s easier to do 3 drives at a time rather than 6.

Were you also looking at adjusting your ZFS recordsize? I’ve seen articles suggest moving it from the default of 128KB to 1MB if most of the content is large files, like a Plex library or other self-hosted content (music, movies, audiobooks, virtualization storage, etc.).

Ah yes, that was the other reason I was thinking of 3x5 drive vdevs (easier/cheaper to upgrade). Yes, most of what I’m seeing says to adjust the record value for this type of system for Plex/Emby/etc to 1MB, which is what I’m intending to do.

Bear in mind that recordsize is a per-dataset setting, so if you have any data that benefits from a smaller recordsize, you can put that in a different dataset with a smaller recordsize.

Speaking of datasets, as someone who made the move from non-zfs to zfs several years ago, I’d try to plan out your datasets ahead of time as much as possible, so that if you ultimately decide you want music and videos in separate datasets, for example, you’ve done that before copying your data into your pool. In my experience there have been little to no downsides to having more datasets, other than making your zfs list longer, I suppose.

Oh I didn’t even realize that! I’m still poking around TrueNAS as I read things to understand what I am doing.

I just found it under the advanced options for the dataset, so I’ll give that a go. I’ll do some file analysis on my media content and find out what works for my big chunky Plex video and my smaller stuff.

I’m definitely trying to plan this out as much as I can before I pull the trigger especially considering the size of my media library I have to move over.

I didn’t realize that either. That’s great! I will keep that in mind for sure.

Yeah, zfs is incredibly flexible. Almost all settings are per-dataset rather than pool-wide. Including special_small_blocks, so if there’s a small dataset that you’d like to have nvme only for example, you can set the recordsize and special_small_blocks size the same and presto-chango, that dataset will be written to your nvme special instead of your spinners.
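As a concrete sketch of that trick (pool and dataset names here are hypothetical, not from anyone’s actual setup):

```
# recordsize and special_small_blocks are both per-dataset properties.
zfs set recordsize=1M tank/media            # large video files stay on spinners
zfs set special_small_blocks=64K tank/media # blocks <=64K land on the NVMe special

# Route an ENTIRE dataset to the special vdev by making the two match:
zfs create -o recordsize=128K -o special_small_blocks=128K tank/appdata

# Verify:
zfs get recordsize,special_small_blocks tank/media tank/appdata
```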

I recently started with my first TrueNAS Scale 24.10 system (HL15 prebuild with 128GiB RAM), so while I don’t have much practical experience, my research is pretty current. Some random thoughts/questions:

  • Since you mention an iSCSI target for Plex/Emby: is this for the Plex library? FWIW, I’m running the Plex app on a separate machine with SSDs, in a Windows VM within Proxmox, with the library inside the VM image. That VM image gets backed up daily to the NAS. Alternatively, have a 2nd (mirrored) SSD pool in TrueNAS to hold the ZVOL. (I’d make sure, though, that the ZVOL’s volblocksize and the NTFS cluster size are the same.)
  • I’m not sure what you intend the SLOG for. You have almost exclusively read requests, don’t you? And your media dataset should probably be set to sync=disabled.
  • I have similar questions about the special vdev: your workload is mostly reads, so wouldn’t the metadata be in the ARC cache anyway? What is the purpose of that special vdev?
  • With 8 to 9 streams do you really need 3 VDEVs? An alternative might be 2 raidz2 VDEVs, which also provide better redundancy.
  • FWIW, I have set the default recordsize of my pool’s root dataset to 1MiB and the recordsize of my media dataset to 2MiB. Compression is left at the default (lz4). This reduces the metadata size dramatically and helps keep it in the ARC cache. Since compression is always active, small files will be compressed down to the minimum blocksize (with the default ashift=12, this is 4KiB AFAIK), and the overhead, even for non-compressible files, is negligible.
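On the volblocksize/cluster-size point above, a minimal sketch of what matching them could look like (pool, zvol name, size, and drive letter are all hypothetical):

```
# Create a zvol for the Plex library with volblocksize matching the
# NTFS cluster size you plan to format with (16K in this sketch):
zfs create -V 200G -o volblocksize=16K tank/plexlib

# Then on the Windows side, format the attached iSCSI disk with the
# same 16K allocation unit size, e.g. in PowerShell:
#   Format-Volume -DriveLetter P -FileSystem NTFS -AllocationUnitSize 16384
```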

I hope I don’t come across as overly critical, that is certainly not my intention. I am assuming that there are several things about your use-case and your planned setup that I’m not quite understanding.

FWIW, I would favor raidz2 for the additional redundancy, unless you need the performance of that 3rd vdev. In practice this also means it’s much safer to perform a raidz expansion if you want to add extra capacity.

Based on a discussion over on the TrueNAS forum I’ve set recordsize = 1MiB as default for the pool and 2MiB for my media dataset. In conjunction with (lz4) compression this becomes a maximum recordsize, so small files will still use much less diskspace. Once you go to higher recordsizes though, it seems that performance decreases, presumably due to latency issues.

I ended up making some purchases and updated my list. This may be overkill, but I’m looking to make this a very much set-it-and-forget-it setup and have lots of room to expand.