I am in dire need of the hive mind's knowledge! Until now I have been running a RAID6 with 6x 4TB disks and I am unhappy from a data security standpoint. I know a RAID is not a backup, so I am looking to improve! I am also looking to use the chance to switch to ZFS to get some additional data integrity protection for my data.
Until just a few minutes ago I was settled on buying 3x Seagate Exos X16 16TB for 370 EUR a piece. That's 23.125 EUR per TB. I would have used two for a mirrored vdev in my computer, and one for a non-mirrored vdev in a separate PC as backup. These are very nice drives with 24/7 certification and 5 years of warranty. I thought this should be sufficient in terms of personal data protection.
However, the price got me to look around a bit, and it turns out that external drives are available for less than bare internal drives. The cheapest here is a Western Digital WD My Book 12TB for 180 EUR, or 15 EUR/TB. This is quite a difference! I checked, and the drive is labeled as using SMR, so I investigated a bit.
Many people say not to use SMR drives with ZFS since a rebuild takes “soooo long”, but all of these people have been using RAID-Z of some sort. This means that all disks have to be read and small chunks have to be written to the disk that is being rebuilt.
I, however, would put them in a mirrored vdev which, if I am not misunderstanding ZFS, could be rebuilt using one long sequential read from the surviving disk onto the disk that is being rebuilt.
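As far as I understand it, the replacement itself would then be a single command, something like this sketch (pool and device names are placeholders, not my actual setup):

```
# Replace the failed mirror member; ZFS resilvers the allocated
# blocks from the surviving disk onto the new one
zpool replace tank ata-OLD_DISK ata-NEW_DISK

# Check resilver progress
zpool status tank
```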
So my question is: what is your stance on using cheap SMR drives, or cheap drives in general, in a mirrored vdev, or in this setup in general? Would it maybe make sense to save some money and buy cheap SMR drives for the non-mirrored backup pool, since it would only ever be filled slowly via LAN and mostly be read from anyway? Should I go full SMR because ZFS takes care of my data, or should I rather stick with the more expensive but higher-quality Seagate Exos drives?
I am happy for any kind of input, and I wanted to mention that, as you can probably deduce from my writing, I so far have only superficial knowledge of ZFS.
Thanks in advance everyone!
Edit:
I want to update the question: should I choose the expensive 24/7-rated enterprise CMR drives or the cheaper consumer CMR drives for my setup, or even mix them somehow?
Reminder: the setup is one PC with mirrored vdevs and a second PC with a JBOD as backup.
I wouldn’t trust SMR in any type of RAID. It’s not that the drives are slow, it’s that they outright lock up when managing the shingled data structure, which looks like a dead drive to the system. This is a problem for any kind of RAID setup.
It could also easily lead to consecutive hangs when writing data, with one drive needing to manage its shingling and making all the other drives wait, only to have to wait on the next shingled drive to do its thing shortly after.
Another thing to consider is that the larger the drive, the more likely you are to get an unrecoverable read error when rebuilding an array. Not only does the rebuild take longer, which is hard on the drives in the array and can cause secondary failures, but it also means more chances for the drive itself to screw up. I see a lot of people recommend <4TB for RAID5/6, maybe 8TB if you really want to push it.
This is all secondhand information, not from experience, but it's just what I've noticed when looking into this myself; I was planning to go 4x 8TB RAID5 for recoverability, but looking around, it sounds like it might be less reliable than just using individual drives and managing manual backups and parity checks.
Like RAID, ZFS is not a backup solution. It's just a file system (as the name implies), albeit a very sophisticated, feature-rich, complicated one.
IMO you're better off with cheaper, smaller-capacity drives to give you redundancy within the RAID, like the WD Red or Seagate IronWolf series. An 8TB drive would set you back about 220-ish euro, with 3 in a RAID5. Do note that you shouldn't use disks from a single manufacturer in a RAID, especially if you buy several of them. Use different brands in a RAID to spread the chance of getting a dud batch drive that'll clunk out prematurely.
Good point, I will see if I find any resources on that issue.
This is specifically why I am talking about using a mirrored vdev, since the whole strain put on the remaining drive is a single sequential read.
Yeah, but I want to use neither RAID5 nor RAID6 nor RAID-Z. Also, if I use small disks like that, I can't fit the number I would need into my PC.
Yeah, but having a second non-mirrored pool in a different PC, like I described, most definitely counts as a backup solution.
Sorry, but this does not sound like good advice to me. This would cost me 660-ish EUR, I would have an unsafe RAID5 (not even RAID6; what about read errors during recovery?) and not even a real backup. Even my current setup is safer than that.
ZFS protects your data against bitrot, so even in mirrors, your data is safe. As for read errors during resilvering, ZFS will just ignore it and keep going. You may lose some data or have it corrupted, but unlike traditional RAID where a read error will leave the whole array unusable, ZFS will not kill your whole zpool just because of a read error.
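For example, after a resilver you can list any files that ZFS could not reconstruct, and clear the errors once you have restored them (the pool name here is just a placeholder):

```
# Show pool health plus the list of files with permanent errors
zpool status -v tank

# After restoring the affected files from backup, reset the error counters
zpool clear tank
```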
3 years ago I built a small workgroup server using xcp-ng on a ZFS mirror of 2x Seagate 8TB Barracuda ST8000DM004.
Performance was intermittently dreadful and I could find no fault, so I replaced the storage with a RAIDZ of 3x Seagate 8TB IronWolf ST8000VN0004.
The result was instantly better. At the time I thought those 5400rpm drives simply could not keep up in a ZFS mirror; it was not until later that the whole SMR vs CMR thing blew up.
Thank you for this piece of information, I was unsure how ZFS would handle such a case. I was, however, unhappy with Dutch_Master's suggestion because he said nothing about RAIDZ1, he said RAID5! RAID5, as opposed to ZFS RAIDZ1, will most likely fail when a read error occurs during a rebuild.
I would consider going the RAIDZ1 route at a later point because I don't have that much money to spend on disks, but currently my PC is completely filled with the 4TB disks and I definitely need more space. Since I do not want to upgrade every other year but only expand, I want to get the really big disks from 12TB upwards, so that once I have a 12TB or 16TB mirror (2 disks in the PC) I can add two additional 2-way mirrors at a later point.
Thank you for sharing your experience. I had a 12TB external drive with SMR recently and I filled it completely twice; performance-wise the operation took long, but sequential writes ran at about 100MB/s. I really could not tell how these drives would behave in a mirror. My reasoning was that a mirrored vdev or a RAID1 is basically the configuration that should behave the sanest with SMR, but I could not tell if it would still be a disaster or if it would be acceptable.
I think I made a mistake there. The drive that was labeled as SMR seems to be from the Seagate Expansion series of external drives. For the Western Digital WD My Book series I am unable to find any mention of either recording mode. Do you have any reliable source that the Western Digital WD My Book 12TB is a CMR drive? It is still one of the cheapest drives around, and it being CMR would be even better.
Sure, wrong nomenclature, but given that there is a title and a label with ZFS, I think it's safe to assume he meant RAIDZ1.
If you intend to upgrade your pool at a later date, then I can only recommend mirrored vdevs. You lose some capacity, but you don't have to worry about drives dying during resilvering. There are also other things to consider when going with ZFS and any parity RAID, so the easiest option is to just go with ZFS RAID10.
I will point you to my post, where I quoted Sarge’s ZFS optimal performance stats for parity RAIDs: ZFS RAID Config for old disks
You can skip most of the ramble and go straight for the quote in the OP. But again, since you said you wanted to upgrade in the future, the easiest is just adding more vdev mirrors.
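As a rough sketch of what that expansion path looks like (pool and device names are placeholders):

```
# Start with a single 2-way mirror...
zpool create tank mirror ata-DISK_A ata-DISK_B

# ...and later grow the pool by striping in another 2-way mirror
zpool add tank mirror ata-DISK_C ata-DISK_D
```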
Fair enough, I think I might have read too many ZFS evangelist sites while looking into this. Most of these people insisted on using the ZFS terms exclusively.
The following link was also why I settled on mirrored vdevs: https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
I just think 2-way mirrored vdevs are reasonably safe on their own, even with multiple vdevs in a large pool. Since I am also planning to have a separate PC with a non-mirrored (RAID0/JBOD-style) pool as a backup, I think I am planning as safely as possible given my personal finances. A disaster would only occur if I lose a disk, then its partner disk during the resilver, and at the same time also my backup. There would have to be a cascade of 3 failures within the relatively short window of degradation.
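For the backup itself I am assuming ZFS snapshots and replication over the LAN would be the way to go; a minimal sketch of what I have in mind, with hypothetical pool, dataset, and host names:

```
# On the main PC: snapshot the dataset and send the full stream once
zfs snapshot tank/data@backup-1
zfs send tank/data@backup-1 | ssh backup-pc zfs recv backup/data

# Later: send only the changes since the previous snapshot (incremental)
zfs snapshot tank/data@backup-2
zfs send -i tank/data@backup-1 tank/data@backup-2 | ssh backup-pc zfs recv backup/data
```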
If you mean something like Exos vs IronWolf, I'd go with the cheaper IronWolf. For home users, some of the enterprise features (like the rated number of TB written per year) don't make much sense. Heck, even for some businesses, Exos doesn't make much sense in some situations.
I would definitely like to hear some more opinions on this though, since I am really biased from only working at a medium-sized business with a limited IT budget.
I just checked, and the cheapest IronWolf that I can buy as an internal drive is the 8TB NAS version, at 24.372 EUR/TB. The 16TB Exos was 22.438 EUR/TB, so this makes no sense for me.
Since I want large drives, the cheapest internal drive with at least 12TB is a “Toshiba Enterprise Capacity MG07ACA 12TB”, which is 18.817 EUR/TB and is also rated as 24/7 capable.
The problem I see with buying external drives and using the disk internally without the case is that I will most likely lose all warranty.
Additional question: When creating a 2-way-mirrored vdev should I buy two of the same HDDs or two different ones?
I have a reasoning for that bit, but it is a long ramble
All drives die. Even flash.
Typically drive deaths are measured on the huge scale of a single model range/line.
Often drives will follow a bathtub or bell curve, where a bunch of drives across a model die early, then the rest go on for a long time before they start failing at an increasing rate.
Early deaths can often be caught by an initial burn-in: if your particular drive is one that would die early, the burn-in may well reveal it.
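If it helps, a typical burn-in sketch looks something like this (the device name is a placeholder, and the badblocks write test is destructive, so only run it on an empty drive):

```
# DESTRUCTIVE: write-mode badblocks pass exercises the whole surface
badblocks -wsv /dev/sdX

# Then run a long SMART self-test and inspect the error counters
smartctl -t long /dev/sdX
smartctl -a /dev/sdX
```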
Some models of drives are just particularly bad, with a lower or shorter hump than other drives, where there are more failures all the time, not just at the beginning and the end.
Mixing drive batches/models is to avoid a problem with a batch; if a batch has a manufacturing defect, chances are a bunch of drives in the same batch are affected. If one buys a few drives from a vendor at once, chances are good that they will be from a similar batch, so an issue affecting one drive has a higher chance of affecting the others.
The rationale for buying from another vendor / mixing models/brands when buying several is more to avoid bad batches than anything else.
But you might go the other way, like Backblaze, who see the benefits in the way they use drives to order a load of cheaper drives, because even having to replace a whole batch can save them money.
As we don’t have the need to order by the thousand, it’s simpler just to mix it up.
I mention Backblaze because they publish the figures for their drives.
They consistently show that HGST and enterprise WD drives seem to perform better in their machines, and yet they order Seagate drives by the literal boatload.
They find the 150% price increase does not justify the 130% increase in longevity, on the whole, compared to getting 90% of the performance at 75% of the price with the cheaper Seagates.*
*Not actual figures, drama for effect.
It sounds like you would rather less headache, Neuromancer, so maybe a drive with a nice warranty might be worth the higher price. I’m happy to keep swapping out dead rust as quick as I kill it, because I’m not so worried about the warranty.
Honestly, no. Not without personally going out and buying one. They change the drives used in these externals sometimes, and the last info I was able to find was that they were using an enterprise drive in there back in 2019.
I do have an idea though. You said you wanted to have what essentially amounts to a backup drive in a different system. Why not purchase a single drive, shuck it, and find out what you're dealing with? If you're happy, purchase 2 more and make your ZFS mirror, and keep the first one as your way to back up your pool.
Doesn't really matter, as long as they have comparable parameters. So 1GB smaller/bigger doesn't matter, ZFS accounts for that; within ±5% of each other in speed/IOPS is fine. But don't pair 15k SAS with 7.2k nearline, I hope that is obvious.
Having the same models in an array just makes life a bit easier, e.g. in terms of SMART attributes. And if you really worry about the “batch thing” Trooper_ish mentioned, then buy half of the disks from a different vendor.
I buy batches of, say, 2x 8 drives, make 2 arrays, and test them for anywhere from a week to a month, depending on what production needs are at the moment.
Then, if I can, I move some old backup to the new array, and use the old array as backup for the new production array. So even if the arrays are from the same batches, the data is mixed between different ones.
Currently the Toshiba X300 HDWE160UZSVA/EZSTA has been solid for me for a few years already (sample of around 50, one or two DOA AFAIR). And before that WD Red was my go-to (up to 4TB, also a few dozen).
IMO, buying twice as many consumer disks and doing backups by default is better than any enterprise models without backup for the same price.
But if money is not an issue, then go ahead with enterprise.
You cannot account for every possible scenario, you can just lower the odds. And it's usually the scenario you haven't accounted for that gets you anyway.
Also, you may check out another topic that Biky started recently.
At my company, we just buy one batch and 1 or 2 spares, but we have backups for everything. We get the cheapest NAS stuff we can afford. Like misiektw said, one solution for home users would be to buy 3 disks: make a mirror for main storage and use 1 disk alone as backup media. An even more sensible solution for home users would be buying just one disk at the beginning and testing how it holds up. That makes rational sense especially if you only use it for backup purposes (i.e., if your backup server goes poof, you still have your main data, so you don't have to worry about data loss). And in the meantime, you may save some money for a month or so to account for any minor price changes.
Yes, it does. I think I am not going to mix models/manufacturers for the mirrored vdev, so that the drives behave consistently, but I will make sure to put a different disk into the backup server.
I also think I will take the high road here and buy the enterprise drives with the long warranty. I think it is worth paying a few euros extra for the peace of mind of using high-quality drives and being able to run them for that amount of time, or get a replacement.
Thank you for your input, but instead of having no warranty after pulling the drives out of their enclosures, I will buy the more expensive regular drives for internal use. While saving some money now would make my wallet smile, I might run into problems down the road trying to return an external drive that has been opened. I am looking forward to neither the headaches nor the uncertainty that would cause me.
The 16TB drives that are available here all seem to be enterprise grade. But as I said, this time I am going to have a proper backup. Thanks for all your info, I took it into account for my decision.
I decided on using 3 16TB enterprise drives. Two of the same model for the mirrored vdev and one from a second manufacturer for the backup.
The reason is that I want to be able to compare the two drives' health and automate the reporting on it. I also like them to have similar specs to begin with.
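A minimal sketch of the kind of automated comparison I have in mind (the device names are placeholders for my actual disks):

```
#!/bin/sh
# Pull the same SMART attributes from both mirror members so the
# numbers are directly comparable between the two drives
for disk in /dev/sda /dev/sdb; do
    echo "=== $disk ==="
    smartctl -A "$disk" | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
done
```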
The drives I am currently looking at are 2x Toshiba Enterprise Capacity MG08ACA 16TB for the 2-way mirror, since that model has 512MB of cache as well as a persistent write cache, which I think is a nice feature. I would then stay with 1x Seagate Exos X16 16TB for the backup server.
The only remaining question I have: the Toshiba is labeled as having “4KB with emulation (512e)” and the Seagate as “4KB with emulation (512e)/4KB native (4Kn)”. I do know that this is about the sector size, but I do not know if this is in any way relevant and whether I would need 4KB native. Would be great if you guys could help me out there one last time!
This is only relevant if you want to make your own partitions (you have to align them to 4K).
But if you are giving ZFS whole disks, then the only thing you have to remember is to create the zpool with the ashift=12 option.
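Something like this minimal sketch (pool and device names are just placeholders):

```
# Force 4K sectors regardless of whether the drives report 512e or 4Kn
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK_1 \
    /dev/disk/by-id/ata-DISK_2

# Verify the value afterwards
zdb -C tank | grep ashift
```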