Need help recovering a filesystem from a failed Seagate 2TB RAID1 NAS

Hi,

This awesome video brought me here. I have been working at recovering data from this failed drive, off and on, for about two weeks. I have tried many tools and techniques gathered from all over the interwebs, but nothing has proven successful, mostly due to my noobness with Linux. So here goes the skinny... I have done a ddrescue copy from the remaining good drive from the failed NAS (three times already). Running testdisk on a prior copy, I downgraded the ext4 to ext3, and eventually to an ext2 filesystem. I have been very careful about hitting enter on "Fix?" since then.

///////////////////////////////////////**Results from ddrescue:**
Press Ctrl-C to interrupt
Initial status (read from logfile)
rescued: 49823 MB, errsize: 0 B, errors: 0
Current status
rescued: 1996 GB, errsize: 1384 kB, current rate: 0 B/s
ipos: 967945 MB, errors: 38, average rate: 19051 kB/s
opos: 967945 MB, time from last successful read: 6 s
Finished
[email protected]:~$

///////////////////////////////////////////**Attempting to mount**
[email protected]:~$ sudo mount -o loop /media/508299B3520D7A95/vaultRescueV2.img /mnt/myRescue
[sudo] password for mike:
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

[email protected]:~$ dmesg | tail
[138829.128113] end_request: I/O error, dev sda, sector 1899023600
[138829.128128] Buffer I/O error on device dm-0, logical block 236314734
[138829.128161] ata1: EH complete
[183851.681788] EXT4-fs (loop0): bad block size 65536
[222442.182814] sky2 0000:02:00.0 eth0: Link is down
[222444.576789] sky2 0000:02:00.0 eth0: Link is up at 1000 Mbps, full duplex, flow control both
[222472.718725] usb 2-2: reset high-speed USB device number 5 using ehci-pci
[271971.406397] EXT4-fs (loop0): bad block size 65536
[272794.713068] EXT4-fs (loop0): bad block size 65536
[277896.589059] EXT4-fs (loop0): bad block size 65536
[email protected]:~$

///////////////////////////////////////////**Results of sudo fsck.ext4**

[email protected]:~$ sudo fsck.ext4 -v /media/508299B3520D7A95/vaultRescueV2.img
e2fsck 1.42 (29-Nov-2011)
/media/508299B3520D7A95/vaultRescueV2.img was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Inode 19923024 ref count is 14, should be 13. Fix? no

Inode 19923056 ref count is 10, should be 9. Fix? no
.........
lots of prompts to fix things
Inode ##### ref count is #, should be #. Fix? no
lots more prompts to fix things
.........
Free inodes count wrong (29846029, counted=29882242).
Fix? no

/media/508299B3520D7A95/vaultRescueV2.img: ********** WARNING: Filesystem still has errors **********

   509171 inodes used (1.68%)
   9119 non-contiguous files (1.8%)
   1 non-contiguous directory (0.0%)
   # of inodes with ind/dind/tind blocks: 0/0/0
   Extent depth histogram: 476536/423
   23526715 blocks used (77.25%)
   0 bad blocks
   28 large files

   440050 regular files
   36910 directories
   0 character device files
   0 block device files
   0 fifos
   0 links
   1 symbolic link (1 fast symbolic link)
   0 sockets
--------
   476961 files

//////////////////////////////////////
Any suggestions? Am I beating a dead horse?

There are some things I don't understand in what you describe; if your NAS has a good drive, you should be able to read from it.

It would help to know the full commands you used to make the images; it looks like you could be mixing up disk and partition images, and the commands that go with each.

Thanks for your reply. I get errors when attempting to mount the "good" drive...see below:

///////////////////////////////////////////**Attempting to mount**
[email protected]:~$ sudo mount -o loop /media/508299B3520D7A95/vaultRescueV2.img /mnt/myRescue
[sudo] password for mike:
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

[email protected]:~$ dmesg | tail
[138829.128113] end_request: I/O error, dev sda, sector 1899023600
[138829.128128] Buffer I/O error on device dm-0, logical block 236314734
[138829.128161] ata1: EH complete
[183851.681788] EXT4-fs (loop0): bad block size 65536
[222442.182814] sky2 0000:02:00.0 eth0: Link is down
[222444.576789] sky2 0000:02:00.0 eth0: Link is up at 1000 Mbps, full duplex, flow control both
[222472.718725] usb 2-2: reset high-speed USB device number 5 using ehci-pci
[271971.406397] EXT4-fs (loop0): bad block size 65536
[272794.713068] EXT4-fs (loop0): bad block size 65536
[277896.589059] EXT4-fs (loop0): bad block size 65536
[email protected]:~$

The command I used to copy the filesystem:

sudo ddrescue -b 8192 -f -n -S /media/508299B3520D7A95/vaultRescueV2.img logfile

The command I used to check/repair the filesystem:

sudo fsck.ext4 -v /media/508299B3520D7A95/vaultRescueV2.img

The command I ran to mount filesystem:

sudo mount -o loop /media/508299B3520D7A95/vaultRescueV2.img /mnt/myRescue
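Given the question above about disk vs. partition images, one quick check is to look for the ext superblock magic inside the image. This is only a sketch: the stand-in file below fakes the magic bytes purely to demonstrate the technique, so run the `dd` line against the real vaultRescueV2.img to get the actual answer.

```shell
# Sketch: a bare ext2/3/4 filesystem image carries the superblock magic
# bytes 0x53 0xEF at byte offset 1080 (1024-byte superblock start + 56).
# If they are not there, the image is likely a whole disk (or an md/LVM
# container) and needs an offset, or further unwrapping, before mounting.
# The stand-in image below fakes the magic purely for the demo; point the
# dd line at vaultRescueV2.img for the real check.
IMG=$(mktemp)
head -c 1080 /dev/zero > "$IMG"
printf '\123\357' >> "$IMG"     # octal for 0x53 0xEF: fake ext magic, demo only
MAGIC=$(dd if="$IMG" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
echo "$MAGIC"                   # 53ef = "looks like ext2/3/4 at offset 0"
```

If the real image prints something other than 53ef, that supports the "whole-disk image, not a partition image" theory discussed above.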

Okay... let's restart this little questionnaire...

You pulled an image file of the good drive (the working one of your RAID1 mirror) via ddrescue. When attempting to mount the "workingdrive.img" file you get:

^-- ???

Well... what is the /dev of the drive you assume, or assumed, is "working"? Could that be /dev/sda? Because...

The drive at /dev/sda is clearly reporting a fault. From experience I'd say the drive has bad sectors. That could be due to age, degradation of the physical platters, a head starting to fail, and so on.

Anyhow, since you have an image file for fsck'ing... why don't you simply make a "work copy" of that one and unleash e2fsck on it to see how it turns out? As long as you work with a copy, nothing can go wrong. If the repaired file turns out to be unmountable or to contain a totally messed-up filesystem, just delete it, create a new work copy, and proceed with the next shot at recovering the data.
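To make that "work copy" routine concrete, here is a minimal sketch. The stand-in files below only demonstrate the copy step; substitute the real image path, and note the e2fsck/mount lines are left commented because they need root and a real filesystem image.

```shell
# Sketch of the "work copy" routine: repair the copy, keep the rescue
# image pristine. Stand-in files here; substitute the real paths.
SRC=$(mktemp)                       # stands in for vaultRescueV2.img
head -c 1048576 /dev/zero > "$SRC"  # 1 MiB dummy "image" for the demo
WORK="$SRC.work"
cp --sparse=always "$SRC" "$WORK"   # scratch copy to experiment on
cmp -s "$SRC" "$WORK" && echo "work copy matches original"
# A real run would then continue on the COPY only:
#   sudo e2fsck -f -y "$WORK"
#   sudo mount -o loop "$WORK" /mnt/myRescue
```

The point of the discipline: if e2fsck -y mangles the copy, you delete it and start over from the untouched rescue image.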

Also, just for good practice: I would copy the image file onto a working computer with known-good hard drives and do the data rescue there. Trying to achieve something on a system that is half dead usually leads nowhere, especially since /dev/sda seems to have issues as well (assuming that is the drive you assumed was "working").

One thing that escapes me a bit, too: why did you actually tell ddrescue "-b 8192"? I don't see any good reason for setting a block size. And ddrescue is actually meant to run on a physical drive, not an image file: an image file shouldn't have read failures, because it is an image in which the read errors that happened while reading the physical drive are zeroed out. It can only have read failures if the image file itself can't be read because the drive it resides on is nearly dead.

There could be some problems with how you've started the recovery.

The outputs of

mount

and

ls -l /dev/disk/by-id/

will answer two important questions: what is mounted at /media/508299B3520D7A95, and which block device the failed disk is connected as.

The format for ddrescue should be something like:

sudo ddrescue -f -n -S /dev/ /recovery.img logfile

You will then need to understand how to mount partitions within that image:

https://help.ubuntu.com/community/DataRecovery#Extract_filesystem_from_recovered_image

You forgot the offset when attempting to mount from an .img file.
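For reference, the offset is the partition's start sector times the logical sector size. The sketch below uses 2048 as an example start sector (a common default, not necessarily this NAS's layout); read the real values from `fdisk -l vaultRescueV2.img`.

```shell
# Sketch: mount one partition out of a whole-disk image by byte offset.
# START_SECTOR=2048 is only a common example value; fdisk -l on the image
# prints the real start sector and logical sector size.
START_SECTOR=2048
SECTOR_SIZE=512
OFFSET=$((START_SECTOR * SECTOR_SIZE))
echo "$OFFSET"   # byte offset handed to mount
# sudo mount -o loop,offset=$OFFSET vaultRescueV2.img /mnt/myRescue
```

Mounting at offset 0 on a whole-disk image makes mount read the partition table (or whatever else lives in the first sectors) as if it were a superblock, which would explain the "bad superblock" and nonsense "bad block size 65536" errors.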


Yes and yes.

I am working on a good machine that has the "working" 2TB NAS drive plugged in. I am copying the image onto an external 3TB USB drive, since the HD in the computer I'm working on is not large enough. External 3TB drive location:

I was able to get the data to copy much faster using this flag: around 19051 kB/s vs. 1500 kB/s without it.

results:

[email protected]:~$ ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Jan 23 19:01 ata-Optiarc_DVD_RW_AD-7203S -> ../../sr0
lrwxrwxrwx 1 root root 9 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part1 -> ../../sda1
lrwxrwxrwx 1 root root 11 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part10 -> ../../sda10
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part7 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part8 -> ../../sda8
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-ST2000DM001-1CH164_S1E1GR55-part9 -> ../../sda9
lrwxrwxrwx 1 root root 9 Jan 23 19:18 ata-ST3000DM001-1ER166_W501HKAQ -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 23 19:18 ata-ST3000DM001-1ER166_W501HKAQ-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 23 19:18 ata-ST3000DM001-1ER166_W501HKAQ-part2 -> ../../sdc2
lrwxrwxrwx 1 root root 9 Jan 23 19:01 ata-WDC_WD6400AAKS-22A7B0_WD-WMASY1242988 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-WDC_WD6400AAKS-22A7B0_WD-WMASY1242988-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-WDC_WD6400AAKS-22A7B0_WD-WMASY1242988-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-WDC_WD6400AAKS-22A7B0_WD-WMASY1242988-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-WDC_WD6400AAKS-22A7B0_WD-WMASY1242988-part4 -> ../../sdb4
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-WDC_WD6400AAKS-22A7B0_WD-WMASY1242988-part5 -> ../../sdb5
lrwxrwxrwx 1 root root 10 Jan 23 19:01 ata-WDC_WD6400AAKS-22A7B0_WD-WMASY1242988-part6 -> ../../sdb6
lrwxrwxrwx 1 root root 10 Jan 23 19:01 dm-name-vg8-lv8 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 23 19:01 dm-uuid-LVM-ixfiHqFHoDWkMCMr3J4cJRJfyWUZWz5w33ylavd3duOU6LaBLVKzM60SUHxLmJMW -> ../../dm-0
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-BA-0010753833EC:8 -> ../../md8
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:0 -> ../../md0
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:1 -> ../../md1
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:2 -> ../../md2
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:3 -> ../../md3
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:4 -> ../../md4
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:5 -> ../../md5
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:6 -> ../../md6
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-name-none:7 -> ../../md7
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-156a12bd:b6d4ac6b:08cca1d7:4d5484ba -> ../../md7
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-234ccc79:d44cde4c:5396db79:48453a3f -> ../../md3
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-81051e63:47bc134a:8f504a55:00c4cae0 -> ../../md8
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-a480c77b:e6dc0018:5fe58e94:b68da1c4 -> ../../md1
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-aacb6f52:b4aac3bd:2f54c70e:99daea83 -> ../../md4
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-ad7a9a30:d164a302:aafeca18:b6842bd8 -> ../../md5
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-cd35c1af:2c2054f2:24b2e285:fc37691f -> ../../md6
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-d1f28a5b:43ced273:2fdbe513:bc1ca8df -> ../../md2
lrwxrwxrwx 1 root root 9 Jan 23 19:01 md-uuid-e29102ea:7d36b1f6:1fb06ff6:f66156d3 -> ../../md0
lrwxrwxrwx 1 root root 9 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part1 -> ../../sda1
lrwxrwxrwx 1 root root 11 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part10 -> ../../sda10
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part7 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part8 -> ../../sda8
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_ST2000DM001-1CH_S1E1GR55-part9 -> ../../sda9
lrwxrwxrwx 1 root root 9 Jan 23 19:01 scsi-SATA_WDC_WD6400AAKS-_WD-WMASY1242988 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_WDC_WD6400AAKS-_WD-WMASY1242988-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_WDC_WD6400AAKS-_WD-WMASY1242988-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_WDC_WD6400AAKS-_WD-WMASY1242988-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_WDC_WD6400AAKS-_WD-WMASY1242988-part4 -> ../../sdb4
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_WDC_WD6400AAKS-_WD-WMASY1242988-part5 -> ../../sdb5
lrwxrwxrwx 1 root root 10 Jan 23 19:01 scsi-SATA_WDC_WD6400AAKS-_WD-WMASY1242988-part6 -> ../../sdb6
lrwxrwxrwx 1 root root 9 Jan 23 19:18 scsi-SSeagate_Expansion+_DeskNA8B08M5 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 23 19:18 scsi-SSeagate_Expansion+_DeskNA8B08M5-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 23 19:18 scsi-SSeagate_Expansion+_DeskNA8B08M5-part2 -> ../../sdc2
lrwxrwxrwx 1 root root 9 Jan 23 19:01 usb-IOI_CF_MicroDrive_20060413092100000-0:0 -> ../../sdd
lrwxrwxrwx 1 root root 9 Jan 23 19:01 usb-IOI_MS_MsPro_20060413092100000-0:3 -> ../../sdg
lrwxrwxrwx 1 root root 9 Jan 23 19:01 usb-IOI_SD_MMC_20060413092100000-0:2 -> ../../sdf
lrwxrwxrwx 1 root root 9 Jan 23 19:01 usb-IOI_SM_xD-Picture_20060413092100000-0:1 -> ../../sde
lrwxrwxrwx 1 root root 9 Jan 23 19:01 wwn-0x5000c5005c381ed4 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part1 -> ../../sda1
lrwxrwxrwx 1 root root 11 Jan 23 19:01 wwn-0x5000c5005c381ed4-part10 -> ../../sda10
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part7 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part8 -> ../../sda8
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x5000c5005c381ed4-part9 -> ../../sda9
lrwxrwxrwx 1 root root 9 Jan 23 19:18 wwn-0x5000c50089ecef10 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 23 19:18 wwn-0x5000c50089ecef10-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 23 19:18 wwn-0x5000c50089ecef10-part2 -> ../../sdc2
lrwxrwxrwx 1 root root 9 Jan 23 19:01 wwn-0x50014ee0ab46fd27 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x50014ee0ab46fd27-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x50014ee0ab46fd27-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x50014ee0ab46fd27-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x50014ee0ab46fd27-part4 -> ../../sdb4
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x50014ee0ab46fd27-part5 -> ../../sdb5
lrwxrwxrwx 1 root root 10 Jan 23 19:01 wwn-0x50014ee0ab46fd27-part6 -> ../../sdb6

Great idea! Thank you all for looking into my problem.

Oh my god... /dev/sda is an ST2000DM001 (plus the ST3000DM001 you've got there at /dev/sdc isn't any better; expect that drive to potentially develop the same failure). They are famous for suddenly dying without warning; even Wendell slammed Seagate about it in some Tek episode.

I had three of these ST2000DM001 ("DM" as in "Death Master"?). Two died within 2-3 months, the third one curiously still works though doesn't hold any important or irreplaceable data. I was lucky enough to get the data off the dying drive right in time; at the next reboot the drive didn't even show up on the SATA bus anymore.

I would get rid of them and steer clear of Seagate in the future... WD or Hitachi or Toshiba spinning rust is clearly worth the extra few bucks they are asking for. (About a year ago I replaced a 1.5TB WD Green that had run almost continuously 24/7 for 5 years, once the "Reallocated sector count" went past the threshold... that's what I call a safe failure mode.)