Has chia affected 2.5" spinning rust prices at all?

I’ve been watching drive prices post-Chia fairly regularly, since my server still has empty drive bays to populate, and prices haven’t gone up as much as I expected. But the thing that surprised me most is that 2.5" drives are still really, really cheap. I don’t normally look at them, as they’re usually lower performance, lower rated power-on life, and worse cost per GB, as a tradeoff for lower power draw, higher physical density, and possibly better endurance for frequent spin-up/down.
However, it seems 2.5" drives now have better cost per GB than 3.5" drives, and they seem pretty much perfect for the “spin up quickly, read some data, and shut down” pattern Chia farming drives are supposed to follow. The physical density you can get from 2.5" over 3.5" is somewhere around 4x, so a set of four 4TB 2.5" drives has a smaller physical footprint, and probably a comparable power footprint, compared with a single 16TB 3.5" drive, at a much lower cost.
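A quick back-of-the-envelope sketch of that comparison, using made-up illustrative prices (not real quotes), just to show the arithmetic:

```python
# Rough cost comparison: four small 2.5" drives vs one large 3.5" drive
# of the same total capacity. All prices here are assumed example
# values, not actual market prices.

def cost_per_tb(price_eur: float, capacity_tb: float) -> float:
    return price_eur / capacity_tb

small_price, small_tb = 95.0, 4.0    # hypothetical 4TB 2.5" drive
large_price, large_tb = 450.0, 16.0  # hypothetical 16TB 3.5" drive

# Four small drives match the one large drive's total capacity.
print(f"4x 2.5\": {4 * small_price:.0f} EUR total, "
      f"{cost_per_tb(small_price, small_tb):.2f} EUR/TB")
print(f"1x 3.5\": {large_price:.0f} EUR total, "
      f"{cost_per_tb(large_price, large_tb):.2f} EUR/TB")
```

Whether the 2.5" side actually wins depends entirely on the day’s prices, which is the whole point of watching them.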

Useless to me and my big pile of 3.5" drive bays and warm/cold storage use case, but it seems like they’re cheaper than ever, and a perfect fit for the people who seem to be buying up those high density NAS/Enterprise drives.

Am I missing something, or did I just not notice the constant deep discounts on 2.5" drives because of my size obsession?

The smaller 2.5" drives may be significantly less reliable than the 3.5" desktop kind. I’ve experienced far more data loss with laptop drives than with any desktop drives (granted, this is anecdotal). I mean, if the smaller drives were actually cost-effective for Chia, they would have been gobbled up as well.

Price history for the Toshiba MG08ACA 16TB 3.5" drive vs. the Seagate Barracuda 5TB 2.5" drive:

The 3.5" historical minimum was around 18 EUR/TB; the 2.5" was around 22 EUR/TB.

Currently they’re both around 24 EUR/TB.

You need approximately 3x the number of ports with 2.5" drives for the same amount of storage.

I just picked up nine WD Black 1TB 2.5" drives at 68 each for my TrueNAS server. I know the Seagate FireCudas have jumped in price big time.


Depending on the use case: most consumer 2.5" drives are 5400 RPM with 8-16 MB of cache; the WD Black and the former HGST Travelstar 7200 RPM drives were the last with 32 MB of cache. For random I/O, cache makes a huge difference.

From a reliability standpoint, consumer 2.5" drives don’t handle heat very well, and whether you shuck external portable drives or buy OEM/retail 2.5" drives, the power management is very aggressive, which can be chaos for some NAS/RAID setups.

Vibration, drops, and heat are where laptops can kill a drive, and some PC makers reserved HDD shock protection for their business-grade systems. From personal experience, you’d need to drop a ThinkPad or Dell Latitude from 10-15 feet while running a disk-intensive program to produce enough shock to make a read/write head hit a platter hard enough to scrape it, or to knock the drive out of alignment, if the HDD doesn’t have head parking.

Be careful when buying 2.5" drives. For example, the Seagate Barracuda (which seems the obvious choice because of its $/capacity) uses SMR. RAID rebuild time may skyrocket to more than a week depending on your setup. Google “SMR raid rebuild”: a rebuild is sustained sequential-plus-random writes, which hammers on exactly the weakness of this recording technique.

I was considering a 2.5" RAID myself, but with non-SMR drives you’re stuck at a max of 2TB per drive. That’s far too many drives (each a mechanical part prone to failure) for my taste to reach a usable 20-30TB of storage. I already ordered good old 3.5" rust with CMR.

Icy Dock has very nice hardware to convert your standard case’s 5.25" bays into very dense 2.5"/3.5" hot-swap bays via a backplane. I haven’t ordered yet, but the products look promising and affordable.

If you don’t care about RAID rebuild times, get a truckload of Barracudas and something like the Icy Dock ToughArmor MB924IP-B 2.5" HDD/SSD hot-swap enclosure.

But who would even want RAID for Chia farming? All it would do is increase wear on the drives by increasing the time spent spinning. There’s no real benefit to the added read/write performance of RAID 5/6 here, and silent data corruption isn’t enough of a problem either, since you can just replot a bad plot when it happens, assuming a plot even goes bad from a single bit flip.
SMR is bad for RAID, but RAID isn’t exactly ideal for this, or for most home use anymore either, as larger-capacity drives take so long to rebuild that you have a real chance of hitting an unrecoverable read error (URE) while resilvering, even at a rated 1 in 10^15 bits. That works out to roughly a 6% chance of at least one URE when reading a full 8TB IronWolf (8TB is about 6.4x10^13 bits). And that’s not even considering the added stress on the drives from additional otherwise-unnecessary uptime, and having more drives spinning at a time, increasing the wear rate during said uptime.
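A minimal sketch of that URE math, assuming the vendor’s quoted rate of 1 error per 10^15 bits read and independent errors (a simplification; real errors tend to cluster):

```python
import math

def ure_probability(capacity_tb: float, ure_rate_bits: float = 1e15) -> float:
    """Chance of hitting at least one unrecoverable read error (URE)
    when reading the whole drive once, Poisson approximation:
    P = 1 - exp(-bits_read / rate)."""
    bits_read = capacity_tb * 1e12 * 8          # TB -> bytes -> bits
    return -math.expm1(-bits_read / ure_rate_bits)

print(f"{ure_probability(8):.1%}")   # roughly 6% for a single 8TB pass
```

Note the vendor spec is a worst-case ceiling, so treat this as an upper-bound estimate rather than an observed failure rate.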

RAID 5/6 is, afaik, recommended for drives under 4TB in size, or for business deployments that rely on uptime and can make use of the ability to resilver while keeping the pool live and active. For home backup, it may be best to skip RAID for larger data sets and focus instead on having an off-site backup, plus manually keeping important data on multiple drives.

That said, I still wouldn’t put much faith in SMR drives for data integrity, if that’s what you’re going for. They’re good for WORM/read-heavy workloads, like media libraries, manually redundant backups, or Chia farming.
