Partition drive for ZFS special VDEV?

It’s almost time to upgrade my main workstation and I’ve got some ideas in my head. Due to moneys (or lack thereof) I won’t be implementing this build anytime soon, but I still have some questions.

I would like to have the root pool on one or two gen 4 m.2 SSDs, and my home pool on 4 HDDs in a striped mirror. I would also like to use this fancy special VDEV thing, but I don’t want to use a whole drive just for that.

So how bad would it be if I partitioned a chunk of the m.2s off and used that as the special metadata VDEV for the home pool?
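Roughly the layout I’m picturing, sketched with made-up device and partition names:

```
# Root pool on the two gen 4 m.2 SSDs (one partition on each)
zpool create rpool mirror /dev/nvme0n1p2 /dev/nvme1n1p2

# Home pool: 4 HDDs as a striped mirror, with the special metadata vdev
# on a second partition carved off each m.2
zpool create home \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  special mirror /dev/nvme0n1p3 /dev/nvme1n1p3
```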

I’ve heard it’s bad practice not to use whole drives, but I can’t remember if that was just for L2ARC/ZIL or not. Also, idk how that advice applies to SSDs.

Any musings on the matter would be appreciated.

See if your drive supports NVMe namespaces, which to my rough understanding act like separate drives. Much cleaner than classic partitions, and home use of special vdevs is not going to be very demanding, so even consumer drives shouldn’t be any worse than normal, assuming they support namespaces.
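A rough sketch with nvme-cli, using example device names and sizes; check `nn`/`mnan` first, since plenty of consumer drives only support a single namespace:

```
# How many namespaces does the controller support? (often just 1 on consumer drives)
nvme id-ctrl /dev/nvme0 | grep -iE '^(nn|mnan)'

# If it supports more than one: create and attach a second namespace.
# Sizes are in blocks (this is roughly 100GB at 512B); --controllers wants the
# controller ID, which you can read from the cntlid field of id-ctrl.
nvme create-ns /dev/nvme0 --nsze=195371568 --ncap=195371568 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0

# Rescan and check that /dev/nvme0n2 shows up
nvme ns-rescan /dev/nvme0
nvme list
```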

To be honest, for special vdevs I would never go lower than a triple mirror. If pressed, as long as I had constant backups, then a 2-way mirror would be… OK. If the special vdev goes down, so does the whole pool.


I definitely need to read up on these namespaces. Never heard of that before. Thank you for the links!

But that leads me to another question. Are mirrors on SSDs worth it? To my understanding HDDs die with age but SSDs die with use. And if I’m writing to both drives at the same time, then shouldn’t they die around the same time? I guess at the very least, after one drive dies you can limp your system along until you replace both drives, but I would feel like I need to replace both drives at that point.

Correct, that is absolutely a valid data safety concern which I totally neglected to mention. I’ve seen reports from people who manage lots of enterprise SSDs that it happens for real, to the point where they need to deliberately stagger drive usage and mix lots/manufacture dates. Drives that were made at the same time are more likely to share some defect that simply up and kills them before the flash goes bad. Home users are a different story: their usage is so light, and SLC/MLC/TLC flash endurance so underestimated, that they’re more likely to see the drive controller simply up and die than the flash wear out. Not sure if QLC holds up to that.

I personally have 4 used old Samsung “absolutely no fucking marketing or documentation” enterprise m.2 drives. Not only do they have different amounts of writes, but two are slightly different versions than the other two.

If I were using new drives, I’d get 4 and slowly rotate them over time, with 3 in use and one as a spare.
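If you want actual numbers to base the rotation on, the wear counters are easy to compare (device names are just examples):

```
# "Percentage Used" and "Data Units Written" are the fields to compare
# between mirror members
smartctl -a /dev/nvme0
smartctl -a /dev/nvme1

# Same counters straight from nvme-cli
nvme smart-log /dev/nvme0
```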

Special vdevs for metadata are not going to have many writes. At least not until you use their other functionality of funneling small blocks of a manually selected size onto them. Then certain workloads could result in decent wear.
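That’s the special_small_blocks property; a minimal example with a made-up dataset name:

```
# Blocks of 32K or smaller on this dataset land on the special vdev
# instead of the HDDs; set it at or above recordsize and everything does
zfs set special_small_blocks=32K home/projects

# Keep an eye on how full the special vdev is getting
zpool list -v home
```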
