Questions About ZFS vdev Performance vs Storage Loss to Parity

Ok, so there is a lot of data that I’ve been trying to process.

Now, the reason I’m here is because I am building an, admittedly overkill, PLEX server based on TrueNAS Scale.

I have a rack case from Sliger with eight 5.25" bays all filled with Icy Dock MB608SP-B 6x2.5" drive cages.

The hardware is an Asus P12R-E-10G, an Intel Xeon E-2388G, 128GB ECC RAM, and two LSI 9305-24i HBAs.

My boot drives are two Micron 5400 Boot M.2 SATA drives.

I am contemplating two Intel P5800X, or possibly two 905p, SSDs for a special vdev, but I’m not sure if it is necessary for mostly large video files. And, I want really quick response during navigation and playback. I am still learning about all of this.

So, here’s how I understand this. The more vdevs, the faster the pool, because it’s basically creating a striped volume out of the vdevs. I am looking at Z2 vdevs. So, I’m considering eight Z2 vdevs of six drives each or four Z2 vdevs of 12 drives each. Obviously there are other options, like four-drive or 24-drive vdevs, but I want the best balance of performance and loss of space to parity. What is the point where the benefit of speed is less than the cost of parity?
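Here’s the napkin math I’ve been doing for those two layouts (a rough Python sketch that ignores ZFS metadata, padding, and free-space headroom, so the numbers are optimistic):

```python
# Rough parity-cost comparison for a 48-drive pool of 3.84TB SSDs.
DRIVE_TB = 3.84
TOTAL_DRIVES = 48

layouts = {
    "8 x 6-wide RAID-Z2": (8, 6, 2),   # (vdevs, drives per vdev, parity drives)
    "4 x 12-wide RAID-Z2": (4, 12, 2),
}

for name, (vdevs, width, parity) in layouts.items():
    data_drives = vdevs * (width - parity)
    usable_tb = data_drives * DRIVE_TB
    pct = 100 * data_drives / TOTAL_DRIVES
    print(f"{name}: {usable_tb:.2f} TB usable ({pct:.1f}% of raw), "
          f"{vdevs * parity} drives lost to parity")
```

That works out to roughly 122.88TB at 66.7% raw efficiency for the eight narrow vdevs versus 153.6TB at 83.3% for the four wide ones, which is exactly the trade-off I’m trying to settle.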

I have been buying bits and pieces for this server for almost a year. Right now I have 36 Micron 5400 Pro 3.84TB SATA SSDs and I buy a few more with every paycheck, so I almost have everything I need to build the server. Just wanting the input of some people who know more about this than I do.

To be clear, I know of all of the different ways I could have done this and everyone’s opinions on what would be better or worse; that is not what I’m asking for help with. I am excited about this server. It’s the server I want. I won’t criticize your choices, so please don’t criticize mine. I’m just asking about the ZFS configuration and I appreciate any help. :slight_smile:

Thanks!!

1 Like

Obviously the four z2 vdevs would be better for available space vs parity.

Altogether, it is a ridiculously over-powered machine for Plex, and it being your money, you are welcome to do what you like with it.
(I am a little jealous because I could never justify that kind of money on that kind of system for that purpose.)

But realistically, how many 100gig network links are you going to serve?

Like, one vdev will saturate a gigabit link, maybe even 10gig? Edit: looks like one vdev might only almost saturate a gigabit link, if the Calomel blog still holds true.

I have no idea on the actual numbers, but yeah, I don’t think you need to worry about the performance of SSDs in vdevs.

I understand, and expected this kind of response.

Basically, I needed QuickSync and I didn’t want a CPU with P & E cores. So, that led me to the E-2300G CPUs. I chose the E-2388G because it had the fastest iGPU and I am also going to be running some other services on that server. It’s not just PLEX, even though that is the main purpose.

This will be my fourth PLEX server, and I’ve had less-than-ideal results with 4K HEVC transcoding due to poor hardware decisions and the fact that my whole family uses the server w/ up to six streams going out on any given evening. I rip my own Blu-rays and I do 1:1 copies without compression, so I need fast throughput for local streaming to my Shields and fast transcoding for streaming to external devices. I really want it to feel like a streaming service with minimal loading times and super fast browsing.

What do you think about using Intel Optane drives for a special vdev for a use case like this? I’ve also considered using an Optane SSD for the PLEX data as well. But, I only have enough PCIe lanes left over to do one or the other.

This server is my ULTIMATE PLEX build. It’s not reasonable at all. But, I’m really excited about it, and I’ve been slowly purchasing the parts over time, so it’s not like I’m just dropping thousands all at once. It’s the whole “focus on the monthly payment” thing that car salespeople do. I could never drop this kind of green all at once for something like this. lol

1 Like

It’s your toy, you get to do what you want with it.

Not judging.

It sounds like a lovely system.

I’m hoping someone who uses the special vdev knows of any gain / bottleneck on running them with this kind of setup.

Trust and believe, I’m going to be doing a build video and posting all kinds of pics when I finally get it all together. Right now it’s just a stack of boxes in my closet. lol

I would be interested in any input on the special vdev benefits for this use case if you or anyone else has any. From my research, it seems like it’s more beneficial for pools with a lot of smaller files. I know I don’t need a cache because ZFS uses the RAM as a cache and the drives are plenty fast enough. If I were to drop the cash for a few low-latency Optane drives, it may be better to use them for the PLEX data and perhaps a separate drive for transcoding storage. You can get those 905P drives for pretty cheap while they last. It’s tempting.

1 Like

Holy water… 48x 2.5" ssds…

I have this bookmarked for these kind of questions.

Your record size and number of vdevs matter, but unless you want to take your time and tweak that stuff, I wouldn’t bother and would just go with the defaults.

tl;dr: for ZFS RAID-Z2, do 4, 6, or 10 drives per vdev. I’d say go with striped RAID-Z2 vdevs of 10 drives each. I wouldn’t go wider, because the bigger a single Z2 vdev, the more likely you are to lose the entire pool to a failure of more than 2 drives in it.

https://docs.danube.cloud/user-guide/storage/redundancy.html

I like wintelguy’s RAID calculator, because it shows the % capacity of the RAID config.

On a RAID-Z2 with 10 drives, you get 80% usable space and still get to be in the clear with the redundancy (although kinda in the yellow zone). If you want slightly more redundancy, with 6 drives per vdev, you get 66.67%, which is barely better than striped mirrors.

For the sake of argument, with an 11-drive RAID-Z3 you get 72.73% usable storage and should be in the green (having 3 drives fail in a single vdev around the same time would probably be unlikely, unless the drives are known to have a manufacturing defect - always have backups!).

With 40 drives and 10 drives per Z2 vdev, you’d get a 4-way stripe, plenty fast for your needs. And you get to lose up to 8 drives in the right places. With 44 drives in 11-drive RAID-Z3 vdevs, you still get a 4-way stripe and a slightly lower usable percentage, but you can lose up to 12 drives in the right places. I recall Z3 having less performance than Z2, so go with whatever you think you want (redundancy vs speed). With a 4-way stripe, you’d be kinda hard-pressed to see the differences in real-world stuff (especially with SSDs).

I’d buy at least 1 spare for each vdev + 2 additional drives (cold spares). So for striped RAID-Z2 with 40 drives, get 46 drives. Again, don’t plug them into the available slots, just keep them in their packaging boxes.
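If you want to check my numbers or play with other widths, here’s the same back-of-the-envelope math as a Python sketch (raw drive counts only; a real pool loses a bit more to metadata and padding):

```python
# Capacity / fault-tolerance sketch for the striped layouts above.
DRIVE_TB = 3.84

def summarize(vdevs, width, parity):
    data_drives = vdevs * (width - parity)
    return {
        "total_drives": vdevs * width,
        "usable_tb": round(data_drives * DRIVE_TB, 2),
        "usable_pct": round(100 * (width - parity) / width, 2),
        # best-case tolerable failures, i.e. spread evenly across the vdevs
        "max_failures": vdevs * parity,
    }

print("4 x 10-wide RAID-Z2:", summarize(4, 10, 2))
print("4 x 11-wide RAID-Z3:", summarize(4, 11, 3))
```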

With this much SSD, I wouldn’t. You’d just be getting peanuts with the 3D XPoint stuff compared to the pool speed. The latency wouldn’t be noticeable in the real world (like browsing Plex). A metadata special device makes sense for HDDs, because all the file structure / FS hierarchy gets moved to low-latency SSDs, so you only need the spinning drives to seek data. But with SSD pools, that’s basically meaningless (unless the pool is already overloaded trying to serve data, in which case it would make browsing feel faster, but accessing the data slower).

For such a large array, with the 3.84TB drives, you only get 122.88TB usable capacity. With 12x 16TB HDDs (striped RAID-Z2 vdevs of 6 drives each) you’d have gotten 128TB for cheaper, managed to sneak a metadata special vdev in there, and probably still been fine (plenty of performance to be drawn from such a config).

Not sure if the power draw would be similar for 12 HDDs vs 40 SSDs. Assuming 15W per HDD and 5W per SSD, we’d be getting 180W for the HDDs and 200W for the SSDs (which is a big assumption; spinning rust doesn’t rotate 24/7 and neither do SSDs run at full tilt all day long). You might get a bit better power consumption with SSDs, and you don’t have to worry about heat dissipation and performance trickery as much.
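The napkin math behind those wattage figures, as a tiny Python sketch (the per-drive numbers are guesses, not measurements):

```python
# Rough wattage comparison; 15W per HDD and 5W per SSD are assumptions.
configs = {"12x 16TB HDD": (12, 15), "40x 3.84TB SSD": (40, 5)}
for name, (count, watts_each) in configs.items():
    print(f"{name}: {count * watts_each} W total")
```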

1 Like

You don’t need a SLOG or a special vdev since you’ll use SSDs. A SLOG would be beneficial if you had HDDs, and it would help you when you need synchronous writes (databases and NFS writes). But if you did need one, then Optane would be good because it has power loss protection, as described here

So in the end how many Disks will you have?

For Plex you won’t need anything nearly as fast as what you’ll have, so whatever you do with the layout, however you try to break it, it will still be fast enough. Plex just does sequential reads and any slow HDD will do; what it does need is CPU for when it has to transcode. If Direct Play is used, then the CPU isn’t needed.

Since you now have 36 disks, I would create 3 vdevs of 12 disks each in a RAID-Z3 config. That will give you peace of mind: when a disk dies, you take it out, put the new one in, and begin resilvering, and if another one dies while that’s happening, you’re still good. Since you have so many disks, there’s no need to be stingy on redundancy and settle for RAID-Z2.

The other user did say the performance will be better with multiple smaller vdevs, and yes it would be, but you don’t need that. You should build a ZFS pool according to the need and not just for the sake of overkill. The overkill would be good if at some point you’ll experiment with it, test it to its limits, etc., but just for watching movies it isn’t needed. So the focus should be on redundancy and space, not on performance.

So anyway, for the future you would plan by setting up the vdevs now, and when you buy more disks you would add another vdev with the same specs. In my example of 12-disk vdevs, you would have 3 vdevs now and then have to wait until you’ve bought another 12 disks to add a new vdev to the pool. If you make them 6-disk vdevs, you only have to wait until you’ve bought another 6 disks, etc.

I don’t like the 6-disk RAID-Z2 vdev option in this case because with 36 disks you lose 12 disks’ worth of space to redundancy. If you use 12-disk RAID-Z3 vdevs, you only lose 9 disks to redundancy.

What about the HEVC transcoding issues? Did you rip your discs in H.264 or H.265? I have no problems with Plex on a single HDD in an old Dell OptiPlex. The H.264 codec was created with the CPU in mind, not the GPU, so the GPU won’t help you there; H.265, I think, uses the GPU, so it will help. If you use H.264 and Direct Play it will be faster; yes, it will use more internet bandwidth, but it won’t transcode. How is your internet link?

If you have an H.265 video file and your phone or browser doesn’t support that codec, then it will transcode; but if the file was already in H.264 and the browser supports H.264, it will not transcode. And if the browser or phone supports H.265 and the video file is already in H.265, then it won’t transcode and it will use less network bandwidth.
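Roughly, the decision goes like this (an illustrative Python sketch of the logic described above, not Plex’s actual code; the function and codec names are just placeholders):

```python
# Illustrative direct-play-vs-transcode decision, as described above.
def playback_mode(file_codec, client_codecs):
    if file_codec in client_codecs:
        return "direct play (no transcode; the original file streams as-is)"
    return f"transcode ({file_codec} -> something the client supports)"

print(playback_mode("hevc", {"h264"}))          # older browser/phone: transcodes
print(playback_mode("hevc", {"h264", "hevc"}))  # Shield / modern TV: direct play
```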

So even if everything is good and the H.265 file is playing with Direct Play and no transcoding, it will still be inherently inferior in picture quality because of how the codec works. It’s like MP3 vs lossless.

For sure all of my internal playback is DirectPlay. The reason I got the Xeon E-2388G is because of the iGPU. It is a beast for transcoding H.265 for external playback.

And, I don’t transcode my Blu-rays, I basically just rip them into an MKV wrapper. It’s a 1:1 copy. I keep the English subs, Atmos/DTS:X, 5.1, and stereo tracks if they’re on the disc. I know there are better ways, but I’m stubborn and want the best quality possible. You can argue until you’re blue in the face that you don’t see a difference at whatever setting. But, I am who I am. lol

As far as the rest of what you guys have provided, I’m going to have to absorb it and get back to you on the vdev options.

I had figured the special vdev wouldn’t be effective in a setup like this, so I’ll probably grab a few 905P SSDs while they’re available and cheap to house the PLEX data and transcode scratch. The P5800X SSDs wouldn’t make sense even if they were less expensive just because the remaining eight PCIe lanes are only 3.0.

That will only give OP 75% usable space and be improperly balanced performance-wise: 128KiB / (12 - 3) = 14.222KiB per data drive per record, which doesn’t divide evenly and means worse performance.
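Spelled out, assuming the default 128KiB recordsize (a quick Python sketch of the arithmetic above):

```python
# How a 128KiB record splits across the data drives of a single RAID-Z vdev.
# A non-power-of-two chunk per drive means padding and, per the argument above,
# worse performance.
RECORDSIZE_KIB = 128

for width, parity in [(12, 3), (10, 2)]:
    data_drives = width - parity
    print(f"{width}-wide RAID-Z{parity}: {RECORDSIZE_KIB} / {data_drives} "
          f"= {RECORDSIZE_KIB / data_drives:.3f} KiB per data drive")
```

The 10-wide Z2 lands on a clean 16KiB per data drive; the 12-wide Z3 doesn’t.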

OP said they will also be running other VMs on it; it will not be just a Plex server. Read the entire thing and understand the requirements. :wink:

I don’t like this either, which is why I recommended 4x 10-drive RAID-Z2 vdevs, and I stand by it: 80% usable storage, plenty of performance for other VMs, good enough redundancy. If you’re worried about redundancy, 11-drive RAID-Z3 vdevs give better redundancy and more performance than 12-wide ones, but only 72.73% usable space.

The trade between 2% more usable capacity and potentially bad performance in certain cases is not worth it. And you left no cold spares for rainy days. Maybe 6 cold spares is a bit overkill, as that’s what I do with business customers, but at least 2 cold spares should be kept. With 36 drives, either 10-drive RAID-Z2 or 11-drive RAID-Z3 vdevs would be OK, but I assume the unused 2.5" bays should be put to use, not just sit empty (40 or 44 out of 48 drive bays)?

But I don’t like the idea of adding another Z2 / Z3 vdev later and adding it to the stripe, because the pool will not be balanced across the new vdev. Most of the pool will remain static with the video library written to it, giving the files no chance to be rewritten and rebalanced. Remember that these are movies, you don’t just modify bits and pieces of them; the pool is effectively read-only for those files. They will never get moved onto the new vdev without doing a zfs send to an entirely different pool, deleting the files, and zfs receive back (or rsync, delete, and rsync back).

Normally I wouldn’t be against that, especially for hypervisors, but this is mostly static media. It won’t work in this scenario. The striped pool can be created now and the last vdev added to the stripe later, but the movies should only be transferred over after the last vdev is in place.

Yeah, my plan is to build once I have all of the drives.

I definitely want it to be fully provisioned before I start transferring media.

One reason I am using so many drives is because of the size and cost of the drives. My goal was to be over 100TB of usable space. Also, I wanted to use the 3.84TB drives over the 7.68TB drives because they are almost half the price and are easier to acquire. I can buy at least two with every paycheck.

The other reason is a bit silly, but as I said before, I am who I am. Once Kahlin Sliger told me that they have the CX4708 case available with eight 5.25" bays, I knew that was what I was looking for and had to make the decision on how to fill those bays. I wanted to use Icy Dock drive cages so I could go with either four, six, or eight SSDs per bay. Four per bay would work, but six per bay would give me plenty of overhead. And, the silly part is that I didn’t want any empty space or bays. lol

Aesthetics shouldn’t be a deciding factor for any of this, but they are for me. Go ahead and tell me I’m being dumb. I can take it. lol




1 Like

Hi @mwilsonii, I was wondering what program you use to rip your Blu-rays? I have a lot of Blu-rays and 4K Blu-rays I would like to rip.

I use MakeMKV. You can rip DVDs and Blu-rays without transcoding, and you can choose what you want to include in the file.

It’s actually really cool…


1 Like

Thanks for the information. I too like 1:1 copies of my movies. As soon as I get my enterprise NAS system from whichever dealer I go with, I will rip my collection of movies. The real concern I have is with my 4K Blu-ray collection: each movie can be about 25GB, and with a collection of about 250 movies and still growing, I can only fit 5 movies per TB of disk space, hence the large NAS size.

Tell me about it! That’s why I decided I needed a minimum of 100TB for my new server. I have been considering transcoding the TV show episodes. But, I will cross that bridge when I start running out of space.

Also, check out the forum at makemkv.com. They have a lot of good information about what drives work best and which firmware you need to be able to rip the discs and how to flash the drives. There are also a few people selling pre-flashed drives. It’s a fun rabbit hole to go down.

https://forum.makemkv.com/forum/viewforum.php?f=16

I should also note the specific drives and firmware are for decoding and ripping UHD media. Most drives work fine for normal Blu-ray and DVD media.

Some good information about ripping UHD discs w/ MakeMKV…
https://forum.makemkv.com/forum/viewtopic.php?f=12&t=16883

1 Like

Ha ha, I will do you one better. :joy: I figured I will need at minimum 320TB; I will probably use closer to 520TB. I also have an even larger collection of PC games. I will be using NAS hard drives instead of SSDs. Even using hard drives, the drives alone will come to somewhere between $4,480 and $7,280. At $14.00 per TB, hard drives sure add up real fast. I have gotten a quote from 45Drives; it looks like a complete system is going to cost about $50,000. I haven’t gotten a quote from iXsystems (TrueNAS) yet. I had a phone appointment with the local rep, but it got rescheduled. Hopefully we will be able to get together some time next week.

I like the 45Drives guys. They make good products, too.
Edit: I don’t know them personally; I watch their YouTube videos. lol

You must have a much larger media collection than I do. lol

My next project is a NAS with hard drives. I’m going to use the same Sliger case and put Icy Dock 3.5" HDD cages in it with 22TB drives, or whatever the biggest enterprise drives available are when I start buying components. It will serve as an onsite backup of the server I’m building now. Then, the goal is to create an identical NAS in a more compact case to place offsite and sync the ZFS pools for offsite backup. But, that’s going to take a while since I’m working with a Tier 3 Engineer’s salary. lol

1 Like

I don’t think my math adds up…
5 movies * 25GB = 125GB.
1000GB / 25GB = 40 movies.
250 movies * 25GB (assuming they’re all max size) = 6250GB (6.25TB).

Am I missing something? I’m really confused right now…

If you get an ODROID H3 with the NAS case and slap in 2x 16TB drives, you’ll be set for quite a while at that size. Or if you don’t mind the jank and want more than that, get an M.2 to 4x SATA card and a 3x 5.25" bay to 4x 3.5" caddy (I really, REALLY liked mine from Chieftec, the CMR-3141SAS, all metal, built like a tank) and slap it underneath the H3 with the normal case.

That can serve as a streaming box alone, because of the gpu, but you can probably get something a bit beefier for more money and make it a multi-purpose hypervisor.

Blu-ray: 30-35GB
4K: ~80GB

That doesn’t include the specials/trailers/interviews. Just the movie file itself.

1 Like

Thanks @jode for the information. I probably overestimated my needs. I need to find out how big my movie, television, PC game, and music collections really are; I have all this information in a spreadsheet. I know it is housed in a large bedroom. It looks like a lot. I will have to sit down and really crunch the numbers. I don’t blame @ThatGuyB for being confused. All I can say is I am a data hoarder.

2 Likes

Have you thought about the power usage of 40 disks? :slight_smile: Let’s say they use 5W constantly if they are 3.5" HDDs; times 40 disks, that’s 200 Watts. 200 Watts running 24/7 is 144 kWh per month. In Kentucky, electricity is about 10 cents per kWh, for example, so it will be $14 for electricity per month, not including the other fees. If the disks use 8 Watts each, then it will be $23. Yes, that’s still cheaper than renting equipment, but I’m just saying all of these things do add up, and well, $276 per year is not that bad, but that’s probably not all you’ll be running.
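If anyone wants to plug in their own wattage and electricity rate, here’s the same estimate as a Python sketch (the per-drive wattage and the 10 cents/kWh rate are the assumptions above, not measurements):

```python
# Rough monthly electricity cost for an always-on 40-drive array.
def monthly_cost(drives, watts_each, usd_per_kwh=0.10, hours=24 * 30):
    kwh = drives * watts_each * hours / 1000
    return kwh, kwh * usd_per_kwh

for watts in (5, 8):
    kwh, usd = monthly_cost(40, watts)
    print(f"40 drives at {watts} W each: {kwh:.0f} kWh/month, ~${usd:.2f}/month")
```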