The Normie Dilemma: To RAID or not to RAID? That is the question

Hello everyone, good day.

I would like to get feedback from the community on how to have a more reliable NAS experience. My HDD failure rate is high compared to other users. Most of my drives are NAS drives (red label). I run FreeNAS with no RAID configuration; I just format each drive as a standalone ZFS disk.

Warranty is not an option either: I live outside the US, so making a warranty claim is really complicated. Saving money is also a concern; it is cheaper for me to buy a new HDD than to send a failed one abroad.

So, I was thinking about how to improve my setup:

  1. Use software RAID 1 with two disks.
  2. Keep a separate copy of the data on an HDD independent of the RAID as a backup, and leave it in the NAS.

I am asking the community because I have seen contradictory comments about RAID; it seems to be good and bad at the same time.

I am just a home normie looking for more reliability, ease of use and money savings. :no_good_woman:

Practices:
To keep my electric bill down, I turn my NAS on and off every day. Can this cause issues with the drives?

Hardware:
I have noticed that HDD failures are more frequent when I use my SAS card instead of a normal SATA connection. Do you think this has anything to do with my problem?

Background:

Recently I noticed that I have a high HDD failure rate. An 8TB IronWolf drive just died, or at least that is what I believe: the drive is not recognized by the FreeNAS PC I built, and when I tried plugging it into another NAS it was not recognized either. This is not the first time this has happened; the same thing occurred some years ago. Fortunately, I have disc backups of the data, but it is tiresome to put everything back together.

I am a normie who likes to watch his movies. The sole purpose of the NAS is to serve movies. Speed is not a factor for me; I just want more reliability. I understand there is no way to prevent failures 100%.

For a media library I recommend SnapRAID. It can add redundancy to existing drives without needing to reformat them, and you can add or remove drives whenever you like. Also, if you lose more disks than you have parity for, you only lose the data on those disks, not the whole array.
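If you go with SnapRAID, the setup is basically one small config file plus a couple of commands run on a schedule. A minimal sketch, assuming one parity drive and two data drives (all mount points below are made-up examples, adjust to your own):

    # /etc/snapraid.conf - minimal example
    parity  /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

    # then, after adding or changing files:
    snapraid sync    # recompute parity
    snapraid scrub   # periodically verify data against parity (catches bit rot)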

Traditional RAID is pretty much useless in a home environment; its only purpose is to maximise availability, which usually isn’t very important to a home user. Modern approaches like ZFS are better because they don’t just give you availability but data integrity as well (SnapRAID also protects against bit rot). Neither RAID nor ZFS (or anything similar) is a substitute for backups, but for a media library where you have physical copies, or can at least get another copy without too much trouble, a backup isn’t really critical; it’s just a time saver.

I’d use ZFS via FreeNAS and call it a day, but you’ll need 3-4 disks to get redundancy with something other than RAID1/mirroring; otherwise you’re giving up half your space.

If giving up 50% of your capacity is acceptable (it is for me, since I want slightly better speed for VM hosting and mirrors have better write performance), set up a RAID1 or ZFS mirror and move on with life.
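For reference, the FreeNAS GUI handles this for you, but a two-disk ZFS mirror boils down to something like the following on the command line (the pool name and device names here are just placeholders):

    # Create a pool called "tank" as a two-disk mirror
    zpool create tank mirror /dev/ada0 /dev/ada1

    # Check health, and scrub occasionally to catch silent corruption
    zpool status tank
    zpool scrub tank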

Bear in mind that spin-up is when most drives are likely to fail. Drives don’t use a heap of power, and if you’re using sensible NAS hardware, neither will the rest of the box. To keep things alive, I’d personally leave the NAS turned on 24/7.
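If you’re curious how much the daily power cycling has already added up, smartctl (from smartmontools, which FreeNAS includes) will show the counters; the device name below is just an example:

    # Print SMART attributes for one drive (replace ada0 with your device)
    smartctl -A /dev/ada0
    # Attributes worth comparing across your drives:
    #   Power_Cycle_Count / Start_Stop_Count - how many on/off cycles it has seen
    #   Power_On_Hours                       - total runtime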

Since 2012, I’ve had one drive failure in my FreeNAS box. It’s a 2x ZFS mirror, on its third set of drives at the moment, mostly due to capacity expansion - the original set of drives that went into it were from 2006-2007, so they were old even back in 2012 :smiley:

It stays turned on 24/7, as I also use it for Nextcloud, local Time Machine backups, etc. Turning it off and on would make it a pain for those things to happen automatically over WiFi, and the potential for hardware failure increases with power cycles.


Well, this might be a simple question, but it does not have a simple answer; for me, it depends on various factors. As I understand it, this is going to be a simple setup hardware-wise, using stuff you already have plus two new HDDs. You also say your drives failed rapidly / a lot; could you give some context for that? The drives were both 8TB IronWolfs, I suppose? Did you buy them new and together? How long did each of them last in total? You also mentioned a SAS card and SATA.

If those two (new) drives failed before you got 2-3 years of acceptable use out of them, then I consider that very odd, or you had a really bad batch of disks. High temperature, humidity, vibration, dust, an unstable power/data connection, etc. could all have an influence. About the SAS card: why do you have one? If at all possible, I would just stick with SATA, as it’s the standard for those drives.
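One cheap way to rule some of those factors in or out is to check the SMART data on the surviving drives; a quick sketch (the device name is a placeholder):

    # Full SMART report, including temperature and error counters
    smartctl -a /dev/ada1
    # Hints at environment or cabling rather than a genuinely bad drive:
    #   Temperature_Celsius   - sustained high temperatures
    #   UDMA_CRC_Error_Count  - usually a bad cable or controller link
    #   Reallocated_Sector_Ct - the drive itself is failing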

If you are sure that external factors could not have had an influence, or that they are negligible, I would pay extra attention to the power delivery and verify it. I personally have not had a single drive fail in the last 12 years that I didn’t replace myself first. I still have 10-year-old WD Blacks spinning occasionally. At work, our desktops and servers still run very old hardware (8 years old), and the systems are on 24/7 for 90% of the year; granted, there’s not a lot of data flowing between them, but still. Even a crappy 5-6-year-old HDD in my PlayStation 4 still runs, and I used it a lot in a warm environment, mostly leaving it on or in standby.

Unless you (ab)used them intensely every day and/or wrote a lot of changing data to them, I don’t see why those drives would fail that fast.

If you decide to keep your platform unchanged, I would say skip RAID and do DAS-style backups whenever you add more media, just as a precaution.
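For the DAS-style backup, plain rsync to an attached disk is enough; a sketch, assuming the library lives in /mnt/media and the backup drive is mounted at /mnt/backup (both paths are made up):

    # Mirror the media library onto the attached backup disk
    rsync -a --delete /mnt/media/ /mnt/backup/media/
    # -a preserves permissions and timestamps; --delete removes files that no
    # longer exist in the source, keeping the backup an exact copy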

Hello, thanks for the reply.

I added the drives to my NAS and for a few days they seemed to work correctly; file transfers and everything else were great. Then one day they started making weird noises and the PC no longer recognized them. In some cases they last about a month, but a recent one I purchased lasted just two days. I know I am a little vague with the info, but I do not know more about their failure mode.

Yes, I tried avoiding WD due to the SMR issue. To be honest, I don’t understand anything about that issue, but I told myself, let’s not look for trouble.

Correct, both new and from the same batch. I checked the production date and they have the same date: February 2020.

I tried that and it got worse. I made another post on the forum because it is a different machine I put together with old PC parts. Link. In short, two more drives failed after that test.

I have only used BTRFS, as a layman, in RAID 1 for at least 4 years. It has survived power outages, overclocking failures, MANY installs of the OS,
2 drive changes, and 1 full copy to a new BTRFS pool.

Well, for a NAS device I can recommend BTRFS, as it has been put through the wringer. Here is my current pool of makeshift crap after 4 years; it started out with up to 6 drives, from 1.5TB up to a maximum of 3TB. I should add more drives soon, but I’m lazy.

Label: ‘btrfs’  uuid: boo hoo
    Total devices 4  FS bytes used 7.43TiB
    devid 1 size 7.28TiB used 6.22TiB path /dev/sdc
    devid 2 size 3.64TiB used 2.58TiB path /dev/sdd
    devid 4 size 5.46TiB used 4.40TiB path /dev/sdf
    devid 5 size 2.73TiB used 1.67TiB path /dev/sde
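When I do get around to it, adding a drive to an existing BTRFS pool is only a couple of commands (the device name and mount point below are placeholders for my setup):

    # Add a new disk to the mounted pool, then spread existing data across it
    btrfs device add /dev/sdg /mnt/btrfs
    btrfs balance start /mnt/btrfs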

The billion-dollar ZFS may not be the droid you are looking for!
