Spinning Rust, What's Your Preference?

Would you rather have six 2TB drives in RAID-Z2, or four 4TB drives in RAID-Z1? Total volume aside.

The Hitachi (HGST) Ultrastar 7K3000 2TB drives come in two flavors: the standard HUA723020ALA640, and the HUA723020ALA641 with hardware-based TCG encryption. I'm assuming the hardware encryption can be enabled/disabled by the user and wouldn't impact performance?

I'm leaning toward the 2TB drives mainly with rebuild times in mind, in case a drive fails. Otherwise I can buy two new 4TB drives and use them alongside my existing two 4TB drives.

Other input welcome too.

I've got three 4TB drives in my ZFS pool: two are "gohard drive" WL drives and one is an HGST NAS drive. There's a rather long story about the WL drives, but not for this thread. My NAS4Free server is working well with this setup (other than the motherboard failing this morning).
A WL drive failed a month or so ago; the rebuild took a while but completed error-free.

More, smaller drives means a higher chance that some drive dies, but then again the rebuild is quicker, so there's less chance of another drive failing during the rebuild.
For the latter reason alone, 4TB drives in RAIDZ1 are a dangerous game. If a drive fails and another one dies while the array is rebuilding, you're done for.
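
To put very rough numbers on that, here's a back-of-the-envelope Python sketch. The annualized failure rate and resilver throughput below are assumptions, not measurements, and it ignores unrecoverable read errors during resilver (arguably the bigger risk with 4TB drives), so treat the output as illustrative only:

    # Rough comparison of the two proposed layouts during a rebuild.
    # AFR and RESILVER_MBS are assumed values, not measurements.
    from math import comb

    AFR = 0.03            # assumed annualized failure rate per drive
    RESILVER_MBS = 50     # assumed effective resilver throughput, MB/s

    def p_fail_during_rebuild(drive_tb):
        """Chance that one surviving drive fails within the rebuild window."""
        rebuild_hours = drive_tb * 1e6 / RESILVER_MBS / 3600
        return AFR * rebuild_hours / (365 * 24)

    def p_pool_loss(drives, parity, drive_tb):
        """P(pool loss), given one drive has already failed and is resilvering."""
        survivors = drives - 1
        tolerable = parity - 1  # additional failures the pool can still absorb
        p = p_fail_during_rebuild(drive_tb)
        # The pool is lost if more than `tolerable` survivors die mid-rebuild.
        return sum(comb(survivors, k) * p**k * (1 - p)**(survivors - k)
                   for k in range(tolerable + 1, survivors + 1))

    print(f"4x 4TB RAID-Z1: {p_pool_loss(4, 1, 4):.1e}")
    print(f"6x 2TB RAID-Z2: {p_pool_loss(6, 2, 2):.1e}")

With these particular assumptions, the single-parity 4TB layout comes out roughly four orders of magnitude more exposed during a rebuild than the double-parity 2TB one.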

8x 4TB in RAIDZ2 here, but that's only for movie files etc. My more important data is on mirrored SSDs.

Z2. I don't trust single parity.

Just so you know, it's better to put important data on HDDs than SSDs. I know they're mirrored, but the fact of the matter is that if you ever need professional recovery, an SSD is going to run something close to $2,500 US.

I use Ceph and set 3 copies. Good, secure, reliable, tons of IOPS, can lose an entire rack without losing data. Happy array, happy day.

I completely disagree. It's better to put everything on solid state drives and have good backups.

Having good backups essentially negates any need to worry about drive reliability, but I've had nothing but reliability problems with SSDs. If you're on an SSD, you need a checksumming filesystem; in my experience the drives constantly corrupt data if the machine is off for more than a few days. If my laptop took a 2.5in drive, I'd be using rust over SSD any day. Too bad it's M.2 only.

Of course this is just my experience. I'm sure there's someone out there who's had nothing but failures with rust and not a single problem with NAND. That's just how luck works out sometimes.

True, data recovery from an SSD is more expensive. Then again it's not cheap for HDDs either. Seeing as I do regular backups, I wouldn't spend money on data recovery anyway. At that point it becomes a matter of deciding which drive type you prefer, which in my case is SSDs.

I'm one of those people who had plenty of problems with HDDs (mostly the 3TB Seagates, TBH). Had some issues with the early SSDs too (Vertex2 and a RevoDrive x2 died on me back in 2011), but the later stuff seems pretty bulletproof to me.
I have 13 SSDs in use right now, the oldest since the summer of 2011. The biggest problem I had with any of them was a bunch of CRC errors on one of my 850PROs ... caused by a bad SATA cable.

The two SSDs I have in the NAS right now were never intended to be a long-term configuration really. I blew past my budget when I built the NAS, but the idea was to have six 250GB Crucial MX200s in RAIDZ2 in there rather than 2 mirrored ones. I just never got round to buying the other four. I'm gonna have to though, I'm starting to run out of space.

We have about 20 SSDs on our SAN serving as high-r/w volumes and/or caching engines; they've never failed, and they see the most r/w of any drives in the SAN. Meanwhile, it seems like we replace HDDs on a monthly basis.

Modern SSDs are nearly bulletproof.

Edit: Well, now that I think about it, we did have three fail simultaneously three months ago, but we're under the impression that was due to the EMC uptime bug. EMC sent out a tech who resolved the problem for us and replaced the drive.

We have 8.2PB raw of 2TB HDDs in the Ceph cluster, running for four years now, all enterprise drives. We've lost 18 total; that's 18 out of 4,100 drives.

Our SSD cluster is significantly smaller; we use it for current active data: 900TB raw, on 2TB drives as well (Intel P3700s), which makes 450 drives. Ceph is constantly repairing bad checksums on these drives, which is why I'm tempted to crank it up to 4 copies. This is costing us about 2,500 IOPS on the cluster. I know we've had some fail, I just can't remember how many and don't have the data in front of me. If you're interested, I can get that data; I just have to talk to my datacenter tech.
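
For a sense of what cranking the replication factor up would cost in capacity, here's some simple arithmetic (a sketch assuming plain replication with no erasure coding, using the raw sizes quoted above):

    # Usable capacity at different replication factors, assuming plain
    # replication (no erasure coding). Raw sizes are the ones quoted above.
    for label, raw_tb in [("HDD cluster", 8200), ("SSD cluster", 900)]:
        for copies in (3, 4):
            print(f"{label}: {raw_tb} TB raw at {copies} copies "
                  f"-> ~{raw_tb / copies:.0f} TB usable")

Going from 3 to 4 copies on the 900TB cluster would give up roughly 75TB of usable space.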

I'm looking at about $600 for HDD recovery and $2,500 for SSD; that's DriveSavers pricing. Which is why I agree with you on backups. I've been starting to think that RAIDZ2 is better than backups, though; I'll write an article soon to explain why. For now, let's just say that it winds up costing less, and short of a fire, meteor, theft, etc., it's just as good.

Ironically, the Vertex2 in my desktop is the only SSD I've never had problems with. It's a 160GB SSD bought back around 2009, and the damn thing is bulletproof. It still houses my / and /boot partitions. (I really wish Btrfs would mature.)

At home, I'm still on 1GbE, so I don't see any benefit to using SSDs, especially with a 1230v2 and ZFS. Transparent zstd is coming, and that's going to allow for more efficient compression of data at speed. I'm able to saturate 1GbE with gzip-6 compression and Plex streams running.
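
If anyone wants to sanity-check that, here's a quick single-threaded Python sketch comparing deflate level 6 (what gzip-6 uses) against 1GbE's roughly 117 MB/s payload rate. The test data is made up and only semi-compressible, and ZFS compresses records across multiple cores, so real numbers on a 1230v2 will differ; it's illustrative only:

    import os, time, zlib

    # Made-up, semi-compressible test data (pure random wouldn't compress at all).
    block = (os.urandom(64 * 1024) + b"\x00" * 64 * 1024) * 512   # ~64 MiB

    start = time.perf_counter()
    compressed = zlib.compress(block, level=6)   # same deflate level as gzip-6
    elapsed = time.perf_counter() - start

    mb = len(block) / 1e6
    print(f"gzip-6, one core: {mb / elapsed:.0f} MB/s "
          f"(ratio {len(block) / len(compressed):.2f}x); 1GbE tops out near 117 MB/s")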

Out of curiosity, what extra benefit do you get out of the SSD on your NAS?

Those are the PCIe SSDs? What generation are those? Maybe it's just the failure rate of the PCIe ones or something, who knows. Naturally, our SAN is for virtualization, so maybe the workload and r/w patterns are different too... also, our SSDs are the SAS type.

That's the thing about drives, though: everyone has their own experience and ends up with a bad taste in their mouth. I won't use Western Digital drives for the life of me, but I swear by HGST. What kind of sense does that make?

SATA or SAS, I forget.

We do virtualization on our SAN, but we also store cold data on rust.

Yep, I'm definitely bitten by experience. Funny, I'm in love with both WD and HGST drives, but Seagate are the bane of my existence.

Well, HGST used to be a different company, right? If I remember correctly, it came out of Hitachi buying IBM's hard drive business, so I guess that makes sense.

@SgtAwesomesauce, we are putting SSDs in our NetApps; now I'm not so excited, haha.

HGST is a subsidiary of WD now. I won't buy Seagate drives either, haha.

You'd be afraid of our new project. We're in discussions to go VDI. If we do, it'll be 100% flash storage, something like 900TB of SSDs. It's gonna be wild.

Thoughts on Samsung drives?

I am afraid of that. It will be fast as hell, but without checksumming and 3 copies, you're asking for trouble.

I'd imagine the guys from EMC2 have all of that covered. I'm just the security guy, so I don't have any eyeballs on the project; I just did the initial investigation on feasibility and ROI. Fun fact, btw: there is no ROI in VDI. It's pretty much the same cost as buying thick clients, at least until you factor in administrative costs, which I didn't, since VDI is much easier to admin. It was 90TB, btw, not 900. Jesus, that's a ton of SSD space.

Does Samsung still make drives? My first computer build in 2013 had a Samsung Spinpoint in it, and that joker is still going strong today, so I guess I like them. It's the only Samsung drive I've knowingly used.

I've always had a feeling that was the case. The way I see it, VDI isn't worth the bugginess (and latency) of the whole remote desktop aspect; it would drive me nuts. I use RDP to connect to the Windows VM on my local machine, and the sluggishness drives me up the wall sometimes, even though I've given it 4 threads and 8GB of RAM just to handle Skype and Outlook.

Eh, I don't scoff at numbers approaching PB scale anymore. When I first walked into the DC at my job and saw boxes of spare drives and multiple rows of racks dedicated to storage, I nearly had a heart attack, but now I'm used to it. 90TB is still substantial though.

No idea. There's been some consolidation of the HDD market over the years, but I'm not following it that closely.

Well, VDI works much better than RDP, at least VMware's VDI technology does. RDP is a trash protocol for desktop replacement, but VMware VDI uses proprietary stuff; I think they're still on PCoIP, which actually works really well. My old job was at a large medical clinic that had VDI access to the local hospital's VDI network, and in a Horizon session you could actually forget you were working over an internet connection at times.

Ultimately, I don't think we're going to go with it. The initial pitch was a nice chunk of ROI, but I just don't think it's going to happen at this point. I'm not too bummed about it, but then again, it's cool to have such granular control over all of the endpoints from a central location.

Glad to hear it. It would be a complete mess if not for that.

The control would be nice though. I suppose there are other solutions for endpoint control that you could implement.

Well, that's the problem, really. I have pretty good control over all of our endpoints, but it takes a lot more work than it would in VDI. I have software solutions for:

Patching
Logging
Log forwarding
Behavior based protection
Software inventory
Health inventory
Forensics
Access auditing

Now, some of those are packaged together, but the point is I have a ton of software (and I'm missing some from that list), and I'm just the security guy. In VDI I could cut almost all of that out. Windows updates? You just deploy a new golden image instead of pushing updates (software patching) or having clients pull them (WSUS), for instance.