Hi all! Here for some configuration advice.
I bought a NetApp DS4246 with IOM6 modules, rails, a cable, and an LSI card, because I always thought they were cool.
After purchase I figured I needed 24 hard drives to test it with.
So I bought a lot of 25 used 8TB SATA drives on eBay to do just that.
After I received them and got them working, they all passed a long SMART test.
That’s kinda where the good news ends…
All the drives have over FOUR YEARS of power on time.
Most have ~2 years of head flying time.
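For reference, I pulled those numbers with something like this (device names are placeholders; attribute names vary a bit by vendor, so this is only a rough sketch):

```bash
# Power_On_Hours is SMART attribute 9; Head_Flying_Hours (240) only shows up on drives that report it
for d in /dev/sd{b..z}; do
  [ -e "$d" ] || continue
  echo "== $d =="
  smartctl -a "$d" | grep -E 'Serial Number|Power_On_Hours|Head_Flying_Hours'
done
```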
The current plan is to use this thing for a Steam library, media library, and Linux ISOs.
It MAY be used for web content storage in the future if I get a wild hair to try web dev.
I’ve only set up 1 storage array, so I’m very new to this.
My thought for the setup is one large pool made from three 7-disk RAIDZ2 vdevs, with 3 hot spares and 1 cold spare.
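Roughly something like this, if I go that route (disk names are placeholders; I'd use /dev/disk/by-id paths for real, and the 25th drive stays on the shelf as the cold spare):

```bash
# 3x 7-wide RAIDZ2 plus pool-wide hot spares
zpool create tank \
  raidz2 disk1  disk2  disk3  disk4  disk5  disk6  disk7  \
  raidz2 disk8  disk9  disk10 disk11 disk12 disk13 disk14 \
  raidz2 disk15 disk16 disk17 disk18 disk19 disk20 disk21 \
  spare  disk22 disk23 disk24
zpool status tank
```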
Also, is there a way to turn down the idle fan speed and power consumption on this disk shelf?
Any feedback other than throw it away is greatly appreciated!
Thanks!
Three hot spares feels like a lot for a steam cache and media server.
But I’ve never used that much storage before so maybe I’m off.
You got a great deal! Mine had over 6 years of power on time.
Those Micron 9300s you mentioned in another thread? Seen a lot of them for sale atm… vast stuff getting replaced in the enterprise. I'm tempted to pick up 1-2 of them… new 4-8TB drives are 140€/TB atm, which is just insanity and why I've only picked up 2TB SKUs so far.
But 4 years of power on time on an SSD certainly isn't in the same ballpark as an HDD with the same. No mechanical stress, just electronics, probably run under proper thermal conditions…
I have some old 10-year SSDs, and the collection has certainly grown over the years, but I've yet to see an SSD fail for reasons other than worn-out flash cells.
I think SSDs are almost as durable as a NIC or other dumb/passive PCIe card, or RAM or board. No-moving-part stuff just doesn’t go poof in my experience.
Half the PBW used up on a drive sounds a lot more "used" to me than 4 years powered on.
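That's the first thing I check on a used NVMe drive, roughly like this (device name is a placeholder):

```bash
# percentage_used and data_units_written tell the wear story;
# data_units_written is counted in 512,000-byte units per the NVMe spec
nvme smart-log /dev/nvme0 | grep -Ei 'percentage_used|data_units_written|power_on_hours'
```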
Having bought some stuff from them lately, I can recommend IcyDock. They have an 8x SATA SSD bay that fits in a single 5.25" slot. So three of those, some cheap case, and optional fan replacements for reduced noise… that's what I would go for: compact and quiet.
I have an IcyDock bay with 4x U.3 in a 5.25" slot, and those produce way more heat than 8x SATA ever will.
I agree with the others: spares are pool-wide for a reason, so you certainly don't need a spare for each vdev. I would go for 1 hot spare and 2 cold drives ready to replace it (a spare in ZFS is just a temporary stand-in until a new drive joins the vdev, at which point the SPARE reverts to its waiting position in the pool).
I recommend doing a test run to get a feel for how ZFS handles this and how you replace a drive. If something happens, you'll know what to do and it will be easy. It's also worth checking how long a resilver takes: 8TB of recovery in a non-sequential resilver (unlike mirrors) can take a while, even with SSDs. That's why you still see 4TB NVMe in enterprise storage even though 8-16TB models are available… faster recovery.
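A possible dry run on the freshly built pool, before any real data lands on it (device and pool names are placeholders, and automatic spare activation assumes zed is running):

```bash
zpool offline -f tank disk3        # force-fault one member to simulate a failure
zpool status tank                  # the hot spare should resilver in under a spare-N entry
zpool replace tank disk3 disk25    # permanently replace the faulted disk with the cold drive
zpool status tank                  # once the resilver finishes, the spare returns to AVAIL
```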
With Z2 and a spare… that's three dead drives' worth of time to react and replace things. That many spares sounds more like a co-location situation where you can't get physical access quickly.
I agree for enterprise SSDs. IMHO there is an amazing amount of junk sold to consumers nowadays chasing the lowest price point.
A single (used) 4TB or 8TB Gen3 NVMe drive (e.g. Micron 9300) very competently replaces 4-8x 1TB SATA/SAS SSDs in terms of performance, price, and power consumption.
Personally, I am chasing gen4 SSD deals - they’re harder to come by and a little pricey.
I don't mind Gen3 drives tbh. I'm more interested in specs like latency, IOPS, and power, which are usually better on Gen4 (because of newer NAND technology or the NVMe 2.0 spec, not because of PCIe 4.0 itself). You can see this on the Micron 7400, 7450, 7500, and now the new 7550… all Gen4, but the specs have changed quite a bit over the years, while the 9000 series always gets a bit more love than the 7000 series.
The price floor ever since "the SSD cartel" announced production cuts last year… is really bad. Even eBay prices have gone up as buying used became more attractive. So this bump compared to 2023 (when we paid 85€/TB for new 4-8TB drives) affects everyone. I wish I had bought a handful of 4-8TB drives back then.
And getting dirt-cheap SATA/SAS SSDs… with cheap HBAs and ubiquitous, far cheaper backplanes… is a reasonable option for a lot of people. But buying them new just doesn't make sense anymore, which is why I think SATA/SAS SSDs are legacy hardware on life support at this point.
And in the case that you need multiple drives, 128 namespaces have you covered for most things.
Sure.
However, in my testing I haven't found benefits of namespaces over the more flexible partitions. And when using ZFS, even those aren't really needed.
Maybe virtualization? I guess I don’t have these use cases.
You can set separate overprovisioning per namespace.
That means you can make a 100G namespace with heavy overprovisioning for a LOG and keep standard/factory overprovisioning for everything else, without committing the entire disk to far less capacity than needed, while increasing PBW quite a bit.
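With nvme-cli that looks roughly like this (destroys all data on the drive; device and values are placeholders, and it assumes LBA format 0 is 512 bytes on your drive):

```bash
nvme id-ctrl /dev/nvme0 | grep -iE 'tnvmcap|unvmcap|cntlid'    # total/unallocated capacity + controller id

# wipe the factory namespace, then carve out a small ~100G namespace
# (209715200 x 512-byte LBAs); whatever stays unallocated acts as extra overprovisioning
nvme delete-ns /dev/nvme0 -n 1
nvme create-ns /dev/nvme0 --nsze=209715200 --ncap=209715200 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0     # use the cntlid from id-ctrl here
nvme reset /dev/nvme0
```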
I will probably work with namespaces for DB/WAL disks in Ceph, which are the equivalent of SPECIAL and LOG in ZFS.
Same with stuff like disk-encryption, but that’s probably not that relevant for us homelabbers.
And although I haven't tested namespaces with ZFS, you may get higher disk utilization if ZFS hammers the disk through multiple namespaces at once. Might be the same as with partitions, though.
Niche indeed, but there is some special stuff compared to partitions.
Thanks for the input!
I'll run with the one hot spare / three cold setup then. I didn't realize how the rebuild process worked in ZFS, and keeping the spares cold to reduce power on time makes sense.
The IcyDock is cool, but all my hard drives are 3.5" spinning rust, so it's not all that applicable…
Since it came up: I do plan to set up a separate SSD pool for current projects, netboot images, and other more "speed-critical" applications. My current plan is 5x 4TB SATA SSDs in RAIDZ1, backed up to the JBOD.
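Something like this is what I have in mind, with placeholder names ("fast" for the SSD pool, "tank" for the shelf pool):

```bash
# 5x 4TB SATA SSDs in RAIDZ1; ssd1..ssd5 stand in for by-id names
zpool create fast raidz1 ssd1 ssd2 ssd3 ssd4 ssd5

# periodic backup into the HDD pool via snapshot + send/receive
zfs create -p tank/backup
SNAP="fast@backup-$(date +%F)"
zfs snapshot -r "$SNAP"
zfs send -R "$SNAP" | zfs receive -u tank/backup/fast
# later runs would send incrementally: zfs send -R -I fast@<previous> "$SNAP" | zfs receive -u tank/backup/fast
```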
Also, as a "top tier" of high-speed storage: my controller/VM server has 192GB of RAM, and I'm thinking of allocating ~64GB to a RAM disk pool mirrored to the SSD pool for speed and security (an encrypted container decrypted onto the RAM disk, then re-encrypted when finished).
That might be completely unnecessary though…