I’ve traced this to SATA port 0 on the Asus X99-E WS/USB 3.1 board - scbus0 should be SATA0:
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <WDC WD40EFRX-68N32N0 82.00A82> ACS-3 ATA SATA 3.x device
ada0: Serial Number WD-xxx
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 3815447MB (7814037168 512 byte sectors)
ada0: quirks=0x1<4K>
In any case, this doesn’t seem like an outright failure - I’ve googled and seen logs where the command retries a couple of times before exhausting with ‘retry failed’, and mine never got that far.
Also, a failed flush-cache operation may not be that dangerous, say, compared to a failed physical write to disk?
There have also been no warnings from the ZFS pool, and the SMART data looks good too. I run an extended SMART test once a week and a short test daily; the cron job output is mailed to me twice a day.
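For anyone wanting a similar schedule, it’s just cron driving smartctl - a minimal sketch for /etc/crontab (the device ada0 and the FreeBSD smartmontools path are assumptions; add one line per drive):

```
# daily short self-test at 02:00 (assumed device; repeat per drive)
0  2   *  *  *  root  /usr/local/sbin/smartctl -t short /dev/ada0 > /dev/null
# weekly extended self-test, Sundays at 03:00
0  3   *  *  0  root  /usr/local/sbin/smartctl -t long /dev/ada0 > /dev/null
# health status + self-test log twice daily; cron mails the output
0  8,20 * *  *  root  /usr/local/sbin/smartctl -H -l selftest /dev/ada0
```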
I had a very similar error on one of the drives in my own NAS. There was a 50/50 chance that I’d get such an error message the night after booting the NAS (it has this nasty habit of only mailing me at 3AM local time).
Unplugging and re-plugging the cables did the trick for me; I haven’t received a single warning since.
Thanks @SgtAwesomesauce - I’ll wait a bit longer and see if this drive gives further trouble, but between the earlier SATA error and this SMART report, I have a bad feeling lol.
That’s the thing. Something about it just rubs me the wrong way, so personally, I’d absolutely make sure you get it replaced within the warranty period.
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age  Offline      -       1

No Errors Logged

Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%       2174        -
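(The above is from something like `smartctl -a /dev/ada0` - `-a` prints the attribute table, error log, and self-test log in one go.)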
So here’s the question - this zpool is running on 8x WD Red 4TB NAS drives, and I’ve got 3x WD Red Pro 4TB drives doing nothing. Could I swap in a Red Pro? From what I’ve read, it’s recommended to use the same drive size across the vdev, but what about mixed speeds - would that be an issue? The Reds are 5400rpm, the Red Pros 7200rpm.
There are also minor firmware differences in how they handle errors, if I remember correctly, but that, again, is not something you’re going to notice.
Right, that was largely what I was pondering… Since I migrated off my 5-bay Synology DS1518+ (not using it right now), I’ve got 3x Red Pro 4TB left. I can keep one as a disaster-replacement spare and add the other two to the array.
It’s going to be a tight fit inside the Corsair Air 740, which already holds 8 drives held together by 3D-printed bits.
However, at RAIDz3, 8 drives only gave me 16TB usable (~55%), but by making it 10 drives I can hit 23TB (64.35%) - and since I’m after redundancy, I’ll go RAIDz3.
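(For a rough ceiling: raidz3 keeps N−3 drives’ worth of data, so 8x 4TB tops out at 5 × 4TB = 20TB raw and 10x 4TB at 7 × 4TB = 28TB raw; the lower 16TB/23TB figures are presumably what the capacity calculator leaves after ZFS padding/metadata overhead and the TB-to-TiB conversion.)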
Any downsides to RAIDz3 vs z2 apart from potentially less usable space? Triple parity does what the name says, so I’m just reading the label right off the tin, as it were.
EDIT - I got so excited that my 10 drives would let me do RAIDz3 with 64% available space that I forgot this means migrating my data off, destroying the vdev, recreating a fresh RAIDz3 pool across the 10 drives, and then migrating all the data back.
Ugh no, can’t do that - BUT I can replace the failing drive for now and RMA the sucker.
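For the record, the swap itself is short (a sketch assuming a pool named tank, failing disk ada0, and the replacement showing up as ada8 - substitute your own names):

```
# take the suspect disk out of service
zpool offline tank ada0
# resilver onto the replacement drive
zpool replace tank ada0 ada8
# keep an eye on the resilver
zpool status -v tank
```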
You’re using 4TB drives, so I’d recommend raidz2 as the baseline, and if you’re paranoid, go with raidz3. It’s going to have slightly more CPU overhead when calculating parity, and it’s also going to do more actual reads/writes per user-perceptible read/write. That’s about all I can say about z3 vs z2.
If you want to increase space, you’re going to want to replace the drives with larger ones. ZFS will allow you to upgrade a vdev, in place, with larger disks.
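Roughly like this, as a sketch (the pool name tank and the device names are made up; do one disk at a time and let each resilver finish):

```
# allow the vdev to grow once every member has been replaced
zpool set autoexpand=on tank
# replace one disk, wait for resilver to complete, then do the next
zpool replace tank ada0 ada8
zpool status tank
# if autoexpand was off during the swaps, expand a member explicitly
zpool online -e tank ada8
```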
I’ve heard that when you add a second vdev it has to be the same size; however, I’ve tested with virtual disks and there’s no issue adding vdevs of different sizes. I imagine there’s a performance hit in the way data is striped across vdevs, but you can add a smaller (or larger) vdev to a pool - see the throwaway test below.
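Something along these lines, using file-backed vdevs (paths and pool name are made up):

```
# sparse files standing in for two different drive sizes
truncate -s 4G /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
truncate -s 6G /tmp/e1 /tmp/e2 /tmp/e3 /tmp/e4
# pool with one raidz2 vdev, then add a differently-sized raidz2 vdev
zpool create testpool raidz2 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
zpool add testpool raidz2 /tmp/e1 /tmp/e2 /tmp/e3 /tmp/e4
zpool status testpool
# tear it down
zpool destroy testpool
rm /tmp/d? /tmp/e?
```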
Right, exactly - what I’ve also read online is to make sure the second vdev is the same size/capacity, i.e. 8x 4TB.
-OR- I would assume I could go through the existing vdev, replacing and resilvering each drive in turn (say to 6TB), growing the whole vdev to 8x 6TB. That would be an expensive affair given the cost of 6TB drives versus buying 5x 4TB Reds (the cheapest option at $137 a pop) and coupling them with the 3x WD Red Pro 4TBs I have left from my earlier migration.
Those 5x drives would set me back $685.25, which is the cheapest route right now. The other issue I face is that I’ve already used 8 of the 12 SATA ports on the X99-E WS/USB 3.1 board, so I’d have to use the LSI card I have to connect the additional vdev of 8 drives. You’ll notice the janky 3D-printed HDD mounting solution I’ve used in the Corsair Air 740 case.
I was hoping to do this later in 2018, once I have a chance to procure a decent 24-bay rack chassis with a GPIO backplane - we’ll see how it goes.