Yup, for sure, and this is why everyone should have backups as standard practice
Thing with hard drives is that sure you can get 8x the capacity for the same price.
But most people simply do not need 8x the capacity any more.
For a typical modern cloud-using consumer (i.e. streams media, uploads photos to the cloud, etc.)
- Why have an extra 700% of capacity that you don’t need when it is so slow?
Because cloud options can get screwed up. I’ve seen it with my own eyes. Several of my relatives have lost precious moments in photos and such because of a cloud screwup. And yes, I know they should have had a separate backup. That was my mistake for not suggesting it. But it’s good to have your data in multiple places, even physical ones. Another reason I’m a proponent of physical HDDs.
So does local storage.
I know far more people who have lost local storage than cloud.
As always, if something is important, have multiple copies of it. Whether that is local + cloud, cloud + cloud, or local + remote disk.
The cloud monitors and monetizes every bit of data you create but has nothing for you when you go to retrieve it.
You have a better chance of recovering data from a hard drive than from a solid state drive should either fail. That said, I have never had an SSD fail on me yet, and hard drives have been very reliable for me too. The only ones that failed on me were portable external HDDs, probably because the drive vibrated too much, like the one time I was using it in a car, or from dropping my backpack; now I have a protective case for my handy external HDD. I should build a NAS for backup, though the transfer rates will make each backup session take many hours.
DCs are still deploying spinning drives, but the storage stack is no longer local storage based for the most part. HDDs are still here mostly because of density. … you just can’t cram 200TB of flash into a decent 2 socket 2x40Gbps 4U machine
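The density point is easy to sanity-check with rough numbers. The bay counts and per-drive capacities below are illustrative assumptions for a build of that era, not the specs of any particular chassis:

```python
# Back-of-envelope density comparison (all numbers are assumptions
# chosen for illustration, not specs of a specific server).
bays_4u_25in = 24      # common 2.5" bay count in a 2-socket chassis
ssd_tb = 4             # mainstream enterprise SATA SSD capacity
bays_4u_35in = 36      # dense 3.5" 4U storage chassis
hdd_tb = 10            # high-capacity nearline HDD

flash_total_tb = bays_4u_25in * ssd_tb   # 96 TB of flash
hdd_total_tb = bays_4u_35in * hdd_tb     # 360 TB of spinning disk

print(flash_total_tb)  # 96
print(hdd_total_tb)    # 360
```

With assumptions like these, a flash build lands well short of the 200TB mark while a 3.5" HDD chassis sails past it, which is the density argument in a nutshell.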
2x 3.5" drive and you are covered
Nice. 100TB flash in 3.5in enclosure. I don’t even think SATA is a problem… as long as it can saturate it. I wonder how much they cost per unit and how much the company can make.
As I mentioned previously, there was once a time. I suppose for those who don’t mind forking out three or four hundred dollars a terabyte, mechanical drives of this sort might be worth it, but lately for me, especially in RAID arrays, newer 1 TB klunk drives simply don’t last long.
Thank you everyone for your input. I greatly appreciate it.
I’m sure a few of us here have our own private magnet collection.
The strong point with those discs is access to data, even in the event of mechanical failure in some cases; one can always purchase a replacement controller board. I’ve never maxed out an SSD on storage to the extent that it couldn’t function, but I have bricked a couple in my day. Both had the same thing in common: they were first-generation SATA SSDs, and they both used the older SandForce controllers. I believe that was before LSI bought out SandForce. One of the SSDs was an old OCZ. To be honest, I’m having a much better life-cycle experience with SSDs and nearly zero S.M.A.R.T. issues with them. Ultimately we both know the answer to hanging on to data is backup, backup, backup. I’m beginning to think backing up to SSDs is the more prudent option. Even wiser, having cold storage on hand would be ideal.
Two years ago I would have agreed with you hands down. After encountering numerous klunk-drive failures, mostly with brand-new mechanical drives, both Western Digital and Seagate, I cannot share the view that these are more economical. It isn’t economical to have to replace controllers. It isn’t economical to spend hours trying to recover data on account of failed mechanical “klunk” drives. There is precious little that is economical about a hard drive failing. Remember, you heard it first here… they are rightly deemed “klunk” drives and they will forever be klunk drives. I have dubbed them what they are. They go klunk in the middle of the night, and I should know: even a few of the old faithful Seagates I own are dear old klunkers. I have an 18-year-old PATA klunker that refuses to die. It’s a Samsung 80 GB drive. I have it in cold storage now. Perhaps I should fill it up with pictures. What a novel idea! They just don’t make ’em like they used to.
I refuse to repair any laptop for anyone unless they agree to use an SSD for their OS. IMO a mechanical drive in a laptop is just a recipe for disaster.
Well, that’s true; I’m sure most of us have at least a couple of failed HDDs, but HDDs have been pretty reliable for me compared to a stupid flash drive.
Some flash drives are stupider than others. My Kingston SATA SSDs are five years old and still running fast and strong. I have at least a dozen, and it sure would be nice if my newly purchased mechanical hard drives (numbering at least as many) failed as rarely as my Kingston SSDs do.
As for M.2 I can’t rightly say. I purchased an EVO stick a couple of years ago only to be told by Samsung that it was not compatible with my Asus X99 main board. The drivers installed, but the BIOS refused to see the onboard M.2 stick. Later I discovered that an NVMe RAID would be impractical for me because of the ports I would have to sacrifice to make it work, so I never got around to it. (Not that I really need it.)
So far the only practical use I can see for mechanical drives with my current system is in NAS applications using the heavy-duty (and slow) NAS drives. The more costly enterprise drives are not really economically viable: one drive is easily half the price of a decent main board, and two of them would buy my main board outright. I’m using cheap Western Digital SATA SSDs in RAID 10 on an LSI card and they have not failed once (this is year three). For RAID 10 the speed isn’t so shabby, all things considered. For gaming I have two HyperX SSDs in RAID 0 going strong (next year will be year three). RAID 1 presents different problems: I have tried it with klunk drives over a dozen times and it is practically always a guaranteed failure. I am considering an additional RAID card, as onboard Intel “RAID” is proving to be consistently unreliable.
My 11-year-old daughter has a Seagate BarraCuda 1TB drive in an old Dell Studio XPS 435T tower (previously owned by me). The thing sounds like a coffee percolator in bad need of a cleaning, but it refuses to die. It’s not like she doesn’t put it through its paces, either: she games, she renders, and she does animation. I’m not seeing that sort of durability in mechanical hard drives anymore.
My only problems with RAID have been due to using client HDDs in redundant RAID:
HDD has read error
⇒ client drive will keep trying to read it for an extended period of time
⇒ HDD doesn’t respond to RAID controller
⇒ RAID controller thinks HDD died and drops it from the array
Whereas if you use a RAID/NAS optimized drive:
HDD has read error
⇒ NAS drive will give up early, trusting there is a redundancy system above it
⇒ RAID will recover the data from another location or from parity
⇒ RAID chugs along