Yet another FreeNAS build...help!

Hehe… I was just about to touch on that topic. I was looking at this badblocks test as well: https://www.ixsystems.com/community/resources/hard-drive-burn-in-testing.92/

Will also run all the SMART tests per drive, plus your suggestion. Thanks!
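For anyone following along, the SMART side of that burn-in is roughly the following (a minimal sketch, assuming FreeBSD-style device names like /dev/ada0; run it once per drive):

```bash
# Short self-test: quick electrical/mechanical check (a couple of minutes)
smartctl -t short /dev/ada0

# Long self-test: full surface read scan (many hours on a 10TB drive)
smartctl -t long /dev/ada0

# Review attributes and self-test results once the tests have finished
smartctl -a /dev/ada0
```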

Again, thanks to both of you for your continued advice :smiley:

Interestingly, I’ve got a WD Red Pro dying on one of the Synology units. Bad sectors, fun.

Will RMA it, but I don’t really think RMA replacement drives have much life in them.

I’m not super familiar with burn-in tests and what’s “industry standard”.

Anything that puts some stress on the drives should be good.

2 Likes

That’s because WD Reds are consumer-type drives. You should use nearline/enterprise drives with ZFS.

1 Like

I would have gone with WD Ultrastar (rebranded HGST) but there’s no non-OEM supply on Amazon.

Regarding the 2 unconnected sockets, could you use 2 reverse breakout cables?

I don’t know how many SATA sockets are available on the mobo, but I use a couple of reverse breakouts. They do throttle the drives via the DMI link (or whatever the bottleneck is), but for spinning rust it’s okay?

Also, can you `zpool attach` a new drive to an existing raidz member, or is that for mirrors only?
Otherwise, might one attach a new drive and then remove the old one later on? Like a slower, two-step manual replace?

For mirroring only

:frowning: shame oh well

My understanding is that they’re working on RAIDZ expansion, but that’s a long way out and it’s likely to be a different vdev type entirely.

At least, that’s what I’m remembering.

1 Like

Oh I think I may have misread you before. You can attach to a raidz member and it will mirror it.
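Roughly, that attach-then-detach flow would look like this (a sketch with hypothetical device names; whether `zpool attach` accepts a raidz child at all depends on the ZFS version):

```bash
# Mirror the new disk onto an existing member of the pool "big"
zpool attach big gptid/OLD-MEMBER.eli gptid/NEW-DISK.eli

# Once resilvering completes, drop the old disk from the temporary mirror
zpool detach big gptid/OLD-MEMBER.eli
```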

RAIDZ expansion, though, is kind of stalled. It needs people to test and review the current WIP code and provide feedback. I have that on my list of things to do and just started looking at it again yesterday.

Pssst

I suspect it boots only on oldddd AGESA, since all my systems are on newer AGESA.

1 Like

More like there is a roughed-out PoC, but nobody is actively working on it atm. Actually, I’m in the process of rebasing the PR on the current master to see how much damage was done by the restructuring that’s been going on as of late. But basically the PR needs design review, testing, and then finishing.

1 Like

Ah, that makes sense. Thanks for the update!

2 Likes

Heads up, I’ve modified the above script to burn in my 10TB drives. As per @SgtAwesomesauce’s suggestion, I’ve added a run of dd to zero out the disks before running badblocks.
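Roughly, the added steps look like this (a sketch, assuming a drive at /dev/ada0; both commands destroy everything on the disk):

```bash
# Zero the whole disk first (destructive)
dd if=/dev/zero of=/dev/ada0 bs=1M

# Destructive badblocks write/read pattern test; -b 4096 keeps the block
# count in range on large drives, -s shows progress
badblocks -b 4096 -ws -o /root/badblocks-ada0.log /dev/ada0
```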

5 Likes

Having spent a week burning in just 4x 10TB drives, today I wanted to start the resilvering process. Booted my FreeNAS box that had been off for a while, and boom - 2x HDDs are dead :anguished:

Hobbling along on 6x drives, all my redundancy is gone. Wish me luck. Backups? What backups… :rofl:

  pool: big
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Nov  7 19:01:24 2019
	2.17T scanned at 1.51G/s, 61.9G issued at 157M/s, 23.0T total
	7.48G resilvered, 0.26% done, 1 days 18:35:07 to go
config:

	NAME                                                STATE     READ WRITE CKSUM
	big                                                 DEGRADED     0     0     0
	  raidz2-0                                          DEGRADED     0     0     0
	    gptid/294440f5-7426-11e7-9d9c-2c4d54456e1f.eli  ONLINE       0     0     0
	    gptid/2a0d7ca7-7426-11e7-9d9c-2c4d54456e1f.eli  ONLINE       0     0     0
	    gptid/d3f98b7d-0162-11ea-864f-b4969130e724.eli  ONLINE       0     0     0  (resilvering)
	    gptid/2ba2a597-7426-11e7-9d9c-2c4d54456e1f.eli  ONLINE       0     0     0
	    gptid/2c63c532-7426-11e7-9d9c-2c4d54456e1f.eli  ONLINE       0     0     0
	    15947505727906227010                            UNAVAIL      0     0     0  was /dev/gptid/2d2af08b-7426-11e7-9d9c-2c4d54456e1f.eli
	    gptid/2ded0a48-7426-11e7-9d9c-2c4d54456e1f.eli  ONLINE       0     0     0
	    gptid/2eb4f9d6-7426-11e7-9d9c-2c4d54456e1f.eli  ONLINE       0     0     0

errors: No known data errors
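For context, the CLI equivalent of what the GUI kicks off here is a `zpool replace` per failed member. A minimal sketch, with the replacement device name being hypothetical (on this encrypted pool the FreeNAS GUI also handles the geli setup first):

```bash
# Swap the UNAVAIL member (referenced by its guid) for the new disk
zpool replace big 15947505727906227010 /dev/da5

# Then watch resilver progress
zpool status big
```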
1 Like

Oooof. Good luck.

1 Like

It’s pretty dead, wow…

  pool: big
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Nov  7 19:01:24 2019
	2.46T scanned at 418M/s, 443G issued at 73.6M/s, 23.0T total
	44.0G resilvered, 1.88% done, 3 days 17:19:43 to go
config:

	NAME                                                STATE     READ WRITE CKSUM
	big                                                 DEGRADED     0     0  964K
	  raidz2-0                                          DEGRADED     0     0 2.45M
	    gptid/294440f5-7426-11e7-9d9c-2c4d54456e1f.eli  DEGRADED     0     0     0  too many errors  (resilvering)
	    gptid/2a0d7ca7-7426-11e7-9d9c-2c4d54456e1f.eli  DEGRADED     0     0     0  too many errors  (resilvering)
	    gptid/d3f98b7d-0162-11ea-864f-b4969130e724.eli  ONLINE       0     0     0  (resilvering)
	    gptid/2ba2a597-7426-11e7-9d9c-2c4d54456e1f.eli  FAULTED      2    78     0  too many errors
	    gptid/2c63c532-7426-11e7-9d9c-2c4d54456e1f.eli  DEGRADED     0     0     0  too many errors  (resilvering)
	    15947505727906227010                            UNAVAIL      0     0     0  was /dev/gptid/2d2af08b-7426-11e7-9d9c-2c4d54456e1f.eli
	    gptid/2ded0a48-7426-11e7-9d9c-2c4d54456e1f.eli  DEGRADED     0     0     0  too many errors  (resilvering)
	    gptid/2eb4f9d6-7426-11e7-9d9c-2c4d54456e1f.eli  DEGRADED     0     0     0  too many errors  (resilvering)

errors: 525903 data errors, use '-v' for a list

So, thankfully I have a Synology that did rsync backups of my critical stuff. Apart from that, this pool is pretty much dead.

I wonder if this is due to a backplane failure?
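One way to narrow that down (a suggestion, assuming smartmontools is available): check SMART on every member and look at the interface CRC counter. If the disks themselves report healthy but attribute 199 (UDMA_CRC_Error_Count) is climbing on many of them at once, that points more at the backplane, cabling, or controller than at six drives dying simultaneously.

```bash
# Hypothetical device list; adjust for the actual pool members
for d in /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5 /dev/da6 /dev/da7; do
    echo "== $d =="
    smartctl -a "$d" | egrep 'overall-health|UDMA_CRC_Error_Count'
done
```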