Help me figure this out

OK, I have a FreeNAS media server with 5 hard drives in it. Let's say the OS numbers them sda1, sda2, sda3, sda4, and sda5. FreeNAS is reporting errors on sda3 and I have a replacement drive. I want to pull the server from the rack, locate the drive, replace it, and let ZFS rebuild the pool. My problem is this......

Once I pull the server out of the rack and open it up, how do I determine which drive is sda3? All are identical 3TB WD Red drives. Is there an easy way to figure this out?

Sorry if this is a stupid question or the answer is obvious.....thanks in advance.


Have you tried renaming them to something more specific, then pulling the SATA plug (during shutdown) and seeing which one disappears when you reboot?


Hard drives have unique serial numbers printed on the drive. In Linux you can get them with tools like hdparm, and there are similar tools in BSD. A quick Google search suggests that getting the serial is already built into FreeNAS, as expected.
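A minimal sketch of the BSD side of this, assuming a FreeBSD-based FreeNAS box where disks show up as ada0, ada1, ... (the names on your system may differ; `geom` and `camcontrol` are stock FreeBSD tools, run as root):

```shell
# Sketch: list drive serials on FreeBSD/FreeNAS so they can be
# matched against the label printed on each physical drive.
# The device name "ada2" below is an assumption -- check what
# "camcontrol devlist" actually reports on your system.

# geom prints an "ident:" field per disk, which is the serial number:
geom disk list

# Or query a single drive; the grep pulls out just the serial line:
camcontrol identify ada2 | grep -i 'serial number'
```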


I thought about that, but renaming them would probably destroy the drive pool since it is set up with the names it has. I really can't risk that since there is over 5TB of data stored on the NAS.

Yep...that sounds like the ticket...thanks, I'll check it out.

Hey there...

So what you want to do is, from the FreeNAS GUI, click Storage, then highlight the top of the storage 'tree' (whatever line item is on top). Then click View Disks; on that page FreeNAS will list the serial numbers.

What error is FreeNAS reporting?

FreeNAS has been a little sketchy (in my experience, as of late) about reporting false SMART errors.

I'd suggest:

Enable SSH from the Services section
Download PuTTY
Connect to the same IP as your FreeNAS server in PuTTY (port 22 by default, I think)
Log in as "root"

Then type: "smartctl -a /dev/sda3" (or whatever drive is reporting the error)

Post the results.
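The steps above can be sketched as a single SSH session. The device names here are assumptions (FreeNAS runs on FreeBSD, which typically names whole disks ada0, ada1, ... rather than Linux-style sda*), so substitute whatever `smartctl --scan` reports on your box:

```shell
# Hypothetical session after SSHing in as root; adjust device
# names to match what "smartctl --scan" shows on your system.
smartctl --scan                          # list the disks smartctl can see

# Print each drive's serial so it can be matched to the printed label:
for disk in /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4; do
  echo "== $disk =="
  smartctl -i "$disk" | grep -i 'serial number'
done

# Full SMART report for the suspect drive:
smartctl -a /dev/ada2
```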

Here's the message I'm getting. It looks like two drives are failing and the pool is degraded.

Checking status of zfs pools:
freenas-boot  7.19G  1.91G  5.27G         -      -    26%  1.00x  ONLINE  -
zfs           13.6T  10.7T  2.93T         -    15%    78%  1.00x  DEGRADED  /mnt

  pool: zfs
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
  scan: scrub repaired 0 in 10h1m with 0 errors on Sun Nov 22 10:01:31 2015

        NAME                                            STATE     READ WRITE CKSUM
        zfs                                             DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/cdf947be-b3a6-11e4-8801-d05099478e38  ONLINE       0     0     0
            gptid/ce620209-b3a6-11e4-8801-d05099478e38  DEGRADED     0     0 32.3K  too many errors
            gptid/cecdb7a1-b3a6-11e4-8801-d05099478e38  DEGRADED     0     0 32.3K  too many errors
            gptid/cf3544cc-b3a6-11e4-8801-d05099478e38  ONLINE       0     0     0
            gptid/cfa2402b-b3a6-11e4-8801-d05099478e38  ONLINE       0     0     0

errors: No known data errors

-- End of daily output --

This is from the daily email I get....I'm at work so I can't check it directly. Looks like I need two drives.

Yeah, that doesn't look good.... but anyway, I listed above how to identify which drive is which by comparing serial numbers.

Good luck, resilver one at a time!
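A rough sketch of the one-at-a-time replacement, using the gptid values from the status output above. NEW_DISK and NEW_DISK2 are placeholders for however the replacement drives show up on your system; don't start the second replace until the first resilver finishes cleanly:

```shell
# Sketch only -- NEW_DISK / NEW_DISK2 are placeholders for the
# replacement drives; the gptid values are the two DEGRADED
# members from the zpool status output.
zpool status zfs | grep -E 'DEGRADED|resilver'   # confirm the failed members

# Replace the first degraded member and wait for the resilver:
zpool replace zfs gptid/ce620209-b3a6-11e4-8801-d05099478e38 NEW_DISK
zpool status zfs                                 # watch "resilver in progress"

# Only after the first resilver completes with 0 errors, do the second:
zpool replace zfs gptid/cecdb7a1-b3a6-11e4-8801-d05099478e38 NEW_DISK2
```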


Thank you....with you guys' help I figured it out. I just need to get another drive and replace both of them.....I did find the serial numbers, and again, thanks for the help. It's much appreciated.

Cool, feel free to PM if you need future help.
