RAID. Is it worth it for a home server?

No. It's a question of whether you like having uptime.

3 Likes

I don’t understand what you mean by that?

five nines

thanks for clarifying

1 Like

You're not wrong to be suspicious, but from reputable sellers, with a major third party backing the warranty claims, it hasn't been an issue.

Most "refurbished" drives come from server pulls. Basically, a whole array of drives is getting up in age and a few drives have failed. Rather than fuck around with individual drives, the company just replaces the whole set in one go. Depending on the security required, the old drives are then either destroyed or, in some cases, just wiped and sold on the cheap to a third company, which runs some tests, slaps a warranty on them, and sells them off. The warranty shouldn't be an issue if you get them from Amazon. Some sellers wipe the SMART data, which is an asshole thing to do, but not the end of the world.

There are also some cases where the circuit board of the drive fails, which is actually fairly easy and cheap to replace. Additionally, some drives just get stuck on a firmware bug and literally need to be (fully) turned off and on again, but a lot of enterprises didn't have the time or the awareness to tell the difference, which is why a lot of newer drives have a power-disable feature where voltage on the 3.3 V pin forces a complete shutoff.

I have several older 8TB HGST drives acquired through this method that still work flawlessly a few years later.

However, the price point on new Easystores and Elements far outclasses that of quality (HGST) refurbished drives, so those are what I go with now. Across brands, the 10TB "enterprise" and "consumer" drives are, as best I can tell, all running out of the same helium faucet (possibly subject to binning). 8TB drives may be helium or air.

1 Like

My sample size is small, but I have also had poor experiences with Seagate drives. I will never buy that brand ever again.

I'm a big proponent of refurbs for some components, but I would never purchase a refurbished magnetic drive or SSD. Magnetic drives have moving parts, and SSDs have limited writes by nature.

Shuckin' is fine. Just don't shuck a Seagate unless you're OK with your thingie falling off. By thingie I mean your data.

1 Like

I couldn't agree more. Out of all my drives, I only got 14 used: 4 are HP 2.5" SAS drives and 10 are HGST 3.5" SATA. I would never trust them with anything important.
(14 may sound like a lot, but that's about 10% of my drives.)

One other thing: not all RAID setups yield good performance; some are better than others.

As far as I'm aware, the better setups are as follows:
RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev
And then I personally add 1 hot spare, just for my peace of mind that the array will start rebuilding right away if a drive fails; a cold spare is also recommended. (Rough capacity sketch below.)
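
A rough back-of-the-envelope sketch of how those widths translate into usable space, assuming simple parity accounting and an example 4 TB drive size (real ZFS pools lose a bit more to padding, metadata, and reserved space):

```python
# Back-of-the-envelope usable capacity for the vdev widths above.
# Simple parity accounting only -- real ZFS pools lose extra space
# to padding, metadata, and slop. Drive size is just an example.

DRIVE_TB = 4  # example drive size in TB

RECOMMENDED_WIDTHS = {  # parity drives -> suggested vdev widths from the post above
    "raidz1": (1, [3, 5, 9]),
    "raidz2": (2, [4, 6, 10]),
    "raidz3": (3, [5, 7, 11]),
}

for level, (parity, widths) in RECOMMENDED_WIDTHS.items():
    for width in widths:
        usable = (width - parity) * DRIVE_TB
        print(f"{level}, {width} drives: ~{usable} TB usable, "
              f"{parity / width:.0%} of raw space spent on parity")
```

A hot spare sits outside the vdev, so it adds a full drive of raw space that contributes nothing to capacity until a resilver actually kicks in.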

1 Like

The only time RAID is worth it in a home server is if you have data that you really want to keep and don't want to spend time reacquiring. If no such data exists and you don't plan on having any such data, then no RAID is required. Otherwise, any mirrored array or mirrored striped array is nice; it just depends on how fast you want stuff to move and whether you have the connection to make it move. Mirrored is all most Johnny Homeowners need, with a hot spare or a drive waiting to be used.
It just depends on how much you want out of it. Worth the cost? Meh, kinda sorta maybe. But you decide.

1 Like

RAID1 wastes a ton of space: with 4 10TB drives you only get 20TB usable. With a 4-drive RAID5, only one drive's worth of space goes to parity, so you get 30TB. Big difference.
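
To put rough numbers on that gap across a few drive counts (10 TB drives assumed as an example; striped mirror pairs vs single parity, filesystem overhead ignored, and whether a wide single-parity array is actually wise is a separate question):

```python
# Usable space: striped mirror pairs vs single parity (RAID5/raidz1).
# 10 TB drives assumed; metadata and padding overhead ignored.

SIZE_TB = 10

for n in (4, 6, 8, 12):
    mirror_usable = (n // 2) * SIZE_TB  # half the drives hold duplicate copies
    parity_usable = (n - 1) * SIZE_TB   # one drive's worth of space is parity
    print(f"{n} drives: mirrors {mirror_usable} TB, single parity {parity_usable} TB")
```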

I've had my Synology NAS for 6 years now and it's still working great. In that time I've had three drives fail: 2 Seagates and 1 WD. Without any redundancy I would have lost all my data, so I certainly wouldn't be comfortable running JBOD or, even worse, a stripe.

2 Likes

Mirrors are the worst for space efficiency, for sure, but overall RAID is a space sacrifice for reliability. It still needs to be backed up at least once locally, for faster recovery and redundancy, and once somewhere safe in case your location gets flooded, catches fire, or god knows what else.

So if you do the math, with RAID1 your data ends up being less than 1/4 of the total space you need. (Rough sketch below.)
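
Spelling that math out (a rough sketch: one 2-way mirror plus the local and offsite backup copies described above, with a hypothetical data size and no compression or snapshot overhead counted):

```python
# Copy-count math: a 2-way mirror plus one local and one offsite backup.
# Hypothetical data size; assumes each backup target holds a full copy.

data_tb = 10

mirror_copies = 2    # RAID1 keeps two live copies of everything
local_backups = 1    # one local backup copy for quick restores
offsite_backups = 1  # one offsite copy in case of fire/flood/etc.

total_raw_tb = data_tb * (mirror_copies + local_backups + offsite_backups)
print(f"{data_tb} TB of data -> ~{total_raw_tb} TB of raw space across all copies")
print(f"the data itself is only {data_tb / total_raw_tb:.0%} of that")  # ~25%
```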

IMHO, drives are cheap enough now that for home use the complexity and expansion issues with RAIDZx are not worth it.

YMMV, but mirrors are so much simpler, and, as above, if you go with a 3-way mirror you get a built-in backup drive you can pop out of the array. Just mount it in a hot-swap bay.

Yeah well, you know, depends on where you're from and stuff.

I think I agree with the mirrors being simple.

They don’t recommend that anymore…as in they removed it from the site.

Am I the only one who uses mirrors for data drives in my desktop as well as my NAS?
I figure my OS partitions are basically disposable, but my data drives are all mirrored

1 Like

Whether they recommend it or not, the block size affects the throughput.

https://calomel.org/zfs_raid_speed_capacity.html

Spinning platter hard drive raids

The server is set up using an Avago LSI Host Bus Adapter (HBA) and not a RAID card. The HBA is connected to the SAS expander using a single multilane cable to control all 24 drives. LZ4 compression is disabled and all writes are synced in real time by the testing suite, Bonnie++. We wanted to test the raid configurations against the drives themselves without LZ4 compression or extra RAM to use for ZFS ARC. You can expect your speeds to easily increase by a factor of two (2) when storing compressible data with LZ4 enabled and when adding RAM at least twice as large as the data sets being read and written.

        ZFS Raid Speed Capacity and Performance Benchmarks
               (speeds in megabytes per second)

1x 4TB, single drive, 3.7 TB, w=108MB/s , rw=50MB/s , r=204MB/s
2x 4TB, mirror (raid1), 3.7 TB, w=106MB/s , rw=50MB/s , r=488MB/s
2x 4TB, stripe (raid0), 7.5 TB, w=237MB/s , rw=73MB/s , r=434MB/s
3x 4TB, mirror (raid1), 3.7 TB, w=106MB/s , rw=49MB/s , r=589MB/s
3x 4TB, stripe (raid0), 11.3 TB, w=392MB/s , rw=86MB/s , r=474MB/s
3x 4TB, raidz1 (raid5), 7.5 TB, w=225MB/s , rw=56MB/s , r=619MB/s
4x 4TB, 2 striped mirrors, 7.5 TB, w=226MB/s , rw=53MB/s , r=644MB/s
4x 4TB, raidz2 (raid6), 7.5 TB, w=204MB/s , rw=54MB/s , r=183MB/s
5x 4TB, raidz1 (raid5), 15.0 TB, w=469MB/s , rw=79MB/s , r=598MB/s
5x 4TB, raidz3 (raid7), 7.5 TB, w=116MB/s , rw=45MB/s , r=493MB/s
6x 4TB, 3 striped mirrors, 11.3 TB, w=389MB/s , rw=60MB/s , r=655MB/s
6x 4TB, raidz2 (raid6), 15.0 TB, w=429MB/s , rw=71MB/s , r=488MB/s
10x 4TB, 2 striped 5x raidz, 30.1 TB, w=675MB/s , rw=109MB/s , r=1012MB/s
11x 4TB, raidz3 (raid7), 30.2 TB, w=552MB/s , rw=103MB/s , r=963MB/s
12x 4TB, 6 striped mirrors, 22.6 TB, w=643MB/s , rw=83MB/s , r=962MB/s
12x 4TB, 2 striped 6x raidz2, 30.1 TB, w=638MB/s , rw=105MB/s , r=990MB/s
12x 4TB, raidz (raid5), 41.3 TB, w=689MB/s , rw=118MB/s , r=993MB/s
12x 4TB, raidz2 (raid6), 37.4 TB, w=317MB/s , rw=98MB/s , r=1065MB/s
12x 4TB, raidz3 (raid7), 33.6 TB, w=452MB/s , rw=105MB/s , r=840MB/s
22x 4TB, 2 striped 11x raidz3, 60.4 TB, w=567MB/s , rw=162MB/s , r=1139MB/s
23x 4TB, raidz3 (raid7), 74.9 TB, w=440MB/s , rw=157MB/s , r=1146MB/s
24x 4TB, 12 striped mirrors, 45.2 TB, w=696MB/s , rw=144MB/s , r=898MB/s
24x 4TB, raidz (raid5), 86.4 TB, w=567MB/s , rw=198MB/s , r=1304MB/s
24x 4TB, raidz2 (raid6), 82.0 TB, w=434MB/s , rw=189MB/s , r=1063MB/s
24x 4TB, raidz3 (raid7), 78.1 TB, w=405MB/s , rw=180MB/s , r=1117MB/s
24x 4TB, striped raid0, 90.4 TB, w=692MB/s , rw=260MB/s , r=1377MB/s

2 Likes

My personal opinion:
For private use I don't care for RAID. RAID is there to maintain uptime if a drive fails, not to save the data. I personally go with partitioning single disks; I rarely need more than 4-8 TB in a single partition. If you do, dynamic disks or JBOD can be options.
RAID can also be a hassle when your controller dies. RAID has failure points that can be hard to accommodate in a private setting.

For any data that I care about (anything that isn't movies, music, or scrap data) I do backups. If a drive fails, it gets replaced and I restore from backups what's needed.

Concerning home servers: I was dabbling in FreeNAS and such too, but in the end I came to the realization that it's just not worth it for me.
IF you plan on using it as a lab environment or such, that's great. For anything productive I'd go with a Synology or QNAP NAS. They are low hassle and can do anything you want (literally: with Docker on QNAP I ran Grafana and InfluxDB instances, Plex, MySQL, Apache, etc., including console-level access).
You get all the features without having to worry about keeping it running. After having spent the better part of 2 years working on a home server, I bought a Celeron 4-bay QNAP and haven't had to work on it for over a year now. It just works, and I throw a container or two on it if I want to test something else.

Could get some old hardware and do Xpenology.

Have you seen the Gamers Nexus NAS issues video? A pre-made solution is more of a problem if it fails. With FreeNAS you can just shove the drives into a new system in any order and everything works right away.

If it is not beaten to death with a blunt stick, YES. If you have the gear or the money to run a NAS at home, it's super handy. You can nuke and pave at will and all your stuff is local.

If you're a one-PC show, then the choice shrinks to the OSes you need. If it's only a laptop, then external HDDs only.