Any mdadm experts?

Hi, I’ve found myself in possession of a QNAP that needs deleted files recovered from it. I am having trouble activating the MD array in Ubuntu so I can run ext4magic on it.

The array is an MD RAID10 of 14 disks with LVM2/EXT4 on top (+ an SSD cache). So far I have:

  1. Logged into the QNAP and ran:

cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md1 : active raid10 sdc3[0] sdo3[14] sdh3[12] sdg3[11] sdr3[10] sdq3[9] sdp3[8] sdn3[7] sdm3[6] sdl3[5] sdk3[4] sdf3[3] sde3[2] sdd3[1]
      40953970176 blocks super 1.0 512K chunks 2 near-copies [14/14] [UUUUUUUUUUUUUU]
      
md2 : active raid0 sdb3[0] sda3[1]
      917792768 blocks super 1.0 512k chunks
      
md280 : active raid1 sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md322 : active raid1 sdh5[13](S) sdg5[12](S) sdr5[11](S) sdq5[10](S) sdp5[9](S) sdo5[8](S) sdn5[7](S) sdm5[6](S) sdl5[5](S) sdk5[4](S) sdf5[3](S) sde5[2](S) sdd5[1] sdc5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdh2[13](S) sdg2[12](S) sdr2[11](S) sdq2[10](S) sdp2[9](S) sdo2[8](S) sdn2[7](S) sdm2[6](S) sdl2[5](S) sdk2[4](S) sdf2[3](S) sde2[2](S) sdd2[1] sdc2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[0] sdb4[25] sda4[24] sdh4[13] sdg4[12] sdr4[11] sdq4[10] sdp4[9] sdo4[26] sdn4[7] sdm4[6] sdl4[5] sdk4[4] sdf4[3] sde4[2] sdd4[1]
      458880 blocks super 1.0 [24/16] [UUUUUUUUUUUUUUUU________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdc1[0] sdb1[25] sda1[24] sdh1[13] sdg1[12] sdr1[11] sdq1[10] sdp1[9] sdo1[26] sdn1[7] sdm1[6] sdl1[5] sdk1[4] sdf1[3] sde1[2] sdd1[1]
      530048 blocks super 1.0 [24/16] [UUUUUUUUUUUUUUUU________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

mdadm --detail --scan >> mdadm.conf

ARRAY /dev/md9 metadata=1.0 name=9 UUID=7f90f491:4531ea85:12400e93:2ca7a64e
ARRAY /dev/md13 metadata=1.0 name=13 UUID=069cac39:2abdd5fa:6d311137:f038c0a8
ARRAY /dev/md256 metadata=1.0 spares=12 name=256 UUID=f3acdf4a:36c07753:edcc7483:1019269d
ARRAY /dev/md322 metadata=1.0 spares=12 name=322 UUID=e74fd628:67501e60:ad706e1b:6deb8e3e
ARRAY /dev/md280 metadata=1.0 name=280 UUID=1394648a:43db2bca:007cf60b:e1f5dd0d
ARRAY /dev/md2 metadata=1.0 name=2 UUID=ebdfb7b5:2ad0f946:a58c1f0e:c74527b3
ARRAY /dev/md1 metadata=1.0 name=1 UUID=350a0b59:1ea5af99:3af0e218:ffc25a18
  2. Booted into an Ubuntu live USB

  3. Installed mdadm

  4. Replaced /etc/mdadm/mdadm.conf with the one generated on the QNAP

  5. Ran sudo mdadm --assemble --scan (roughly as sketched below)

  6. No dice. It doesn’t activate all of the arrays. I also noticed that the list of Personalities was much shorter than on the QNAP (sorry, no paste; I wasn’t remoted in, so I couldn’t copy the output).
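
For reference, the conf swap and assemble on the live system were roughly this (paths from memory; adding --verbose is worth it to see why members get skipped):

sudo apt install mdadm
sudo cp mdadm.conf /etc/mdadm/mdadm.conf     # the conf generated on the QNAP above
sudo mdadm --assemble --scan --verbose       # --verbose reports which devices/arrays get skipped and why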

Any pointers?

I may try to compile ext4magic in QTS instead of using Ubuntu…

2 Likes

No expert, but here’s a wiki on it if it helps:
https://raid.wiki.kernel.org/index.php/RAID_setup

You might give the Viking ISO or Puppy a go:
https://bjoernvold.com/forum/viewtopic.php?f=11&t=3827

1 Like

Maybe I’m missing something, but it sure does look like /dev/md1 has the array you’re looking for, and it’s showing active.

1 Like

That’s in the QNAP OS. I didn’t paste the Ubuntu stuff because it was a live USB and I wasn’t remoted in.

I’m currently trying to get either ext4magic or extundelete installed/compiled on the QNAP using opkg.

This is just a crapshoot from ancient experience, but the last time I remember messing with mdadm, a scan would build the conf from the RAID metadata on the disks themselves.
So it should reassemble an existing array. I don’t know how that fares if a drive is missing.

I was using a mirrored array with two disks.
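
If memory serves, something like this pulls the ARRAY definitions straight from the on-disk superblocks:

sudo mdadm --examine --scan                                       # print ARRAY lines from the RAID superblocks
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf   # or append them to the conf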

The fact that he nabbed the config off the QNAP box should keep it from rebuilding from disk metadata.

I’ve scratched my head for a solid 20 minutes, and the best I can come up with is “I need more information from the Ubuntu box”. If you get around to having an SSH session that you can copy and paste from, feel free to share more details.

The only thing that immediately comes to mind would be, “Is it possible the array did assemble, and the volume group wasn’t activated?”
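
If that turns out to be it, the standard LVM commands should bring it up (just a sketch, untested against this particular stack):

sudo vgscan          # rescan for volume groups on the newly assembled md devices
sudo vgchange -ay    # activate any volume groups that were found
sudo lvs             # the logical volumes should now show up under /dev/mapper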

1 Like

This was the first thing I tried and it just errored out.

I think the issue might be the “Personalities”. It appeared that Ubuntu’s live environment didn’t have RAID 10 support loaded, which is what this config uses.

The Personalities line describes the different RAID levels and configurations that the kernel currently supports.

1 Like

Progress…

Each “Personality” is just a kernel module. You can modprobe them by name (with the exception of raid456, which covers RAID 4, 5 and 6 in one module).
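
Roughly what that looked like, for anyone following along (module names from memory):

sudo modprobe raid0
sudo modprobe raid1
sudo modprobe raid10     # the personality the 14-disk array needs
sudo modprobe raid456    # covers RAID 4, 5 and 6
sudo mdadm --assemble --scan
cat /proc/mdstat         # the Personalities line should now include raid10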

The main VG still didn’t show up though, so working on that now.

1 Like

fml

lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sda2 8:2 0 517.7M 0 part
│ └─md280 9:280 0 517.7M 0 raid1 [SWAP]
├─sda3 8:3 0 437.7G 0 part
│ └─md2 9:2 0 875.3G 0 raid0
│ └─vg2-lv256 252:11 0 866.5G 0 lvm
├─sda4 8:4 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sda5 8:5 0 8G 0 part
sdb 8:16 0 447.1G 0 disk
├─sdb1 8:17 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdb2 8:18 0 517.7M 0 part
│ └─md280 9:280 0 517.7M 0 raid1 [SWAP]
├─sdb3 8:19 0 437.7G 0 part
│ └─md2 9:2 0 875.3G 0 raid0
│ └─vg2-lv256 252:11 0 866.5G 0 lvm
├─sdb4 8:20 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdb5 8:21 0 8G 0 part
sdc 8:32 0 5.5T 0 disk
├─sdc1 8:33 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdc2 8:34 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdc3 8:35 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdc4 8:36 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdc5 8:37 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdd 8:48 0 5.5T 0 disk
├─sdd1 8:49 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdd2 8:50 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdd3 8:51 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdd4 8:52 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdd5 8:53 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sde 8:64 0 5.5T 0 disk
├─sde1 8:65 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sde2 8:66 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sde3 8:67 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sde4 8:68 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sde5 8:69 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdf 8:80 0 5.5T 0 disk
├─sdf1 8:81 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdf2 8:82 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdf3 8:83 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdf4 8:84 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdf5 8:85 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdg 8:96 0 5.5T 0 disk
├─sdg1 8:97 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdg2 8:98 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdg3 8:99 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdg4 8:100 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdg5 8:101 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdh 8:112 0 5.5T 0 disk
├─sdh1 8:113 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdh2 8:114 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdh3 8:115 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdh4 8:116 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdh5 8:117 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdi 8:128 0 5.5T 0 disk
├─sdi1 8:129 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdi2 8:130 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdi3 8:131 0 5.5T 0 part
│ └─md3 9:3 0 10.9T 0 raid0
├─sdi4 8:132 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdi5 8:133 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdj 8:144 0 5.5T 0 disk
├─sdj1 8:145 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdj2 8:146 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdj3 8:147 0 5.5T 0 part
│ └─md3 9:3 0 10.9T 0 raid0
├─sdj4 8:148 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdj5 8:149 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdk 8:160 0 5.5T 0 disk
├─sdk1 8:161 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdk2 8:162 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdk3 8:163 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdk4 8:164 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdk5 8:165 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdl 8:176 0 5.5T 0 disk
├─sdl1 8:177 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdl2 8:178 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdl3 8:179 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdl4 8:180 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdl5 8:181 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdm 8:192 0 5.5T 0 disk
├─sdm1 8:193 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdm2 8:194 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdm3 8:195 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdm4 8:196 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdm5 8:197 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdn 8:208 0 5.5T 0 disk
├─sdn1 8:209 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdn2 8:210 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdn3 8:211 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdn4 8:212 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdn5 8:213 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdo 8:224 0 5.5T 0 disk
├─sdo1 8:225 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdo2 8:226 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdo3 8:227 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdo4 8:228 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdo5 8:229 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdp 8:240 0 5.5T 0 disk
├─sdp1 8:241 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdp2 8:242 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdp3 8:243 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdp4 8:244 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdp5 8:245 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdq 65:0 0 5.5T 0 disk
├─sdq1 65:1 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdq2 65:2 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdq3 65:3 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdq4 65:4 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdq5 65:5 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sdr 65:16 0 5.5T 0 disk
├─sdr1 65:17 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1 /mnt/HDA_ROOT
├─sdr2 65:18 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1 [SWAP]
├─sdr3 65:19 0 5.5T 0 part
│ └─md1 9:1 0 38.1T 0 raid10
├─sdr4 65:20 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1 /mnt/ext
└─sdr5 65:21 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1 [SWAP]
sds 65:32 1 492M 0 disk
├─sds1 65:33 1 2.1M 0 part
├─sds2 65:34 1 236.6M 0 part
├─sds3 65:35 1 236.6M 0 part
├─sds4 65:36 1 1K 0 part
├─sds5 65:37 1 8.1M 0 part
└─sds6 65:38 1 8.5M 0 part
sdt 65:48 0 5.5T 0 disk
└─sdt1 65:49 0 5.5T 0 part /share/external/DEV3306_1
sdu 65:64 1 29.3G 0 disk /share/external/DEV3304_-1
├─sdu1 65:65 1 1.8G 0 part
└─sdu2 65:66 1 2.3M 0 part
drbd1 147:1 0 38.1T 0 disk
├─vg1-tp1_tmeta 252:0 0 64G 0 lvm
│ └─vg1-tp1-tpool 252:5 0 38T 0 lvm
│ ├─vg1-tp1 252:6 0 38T 0 lvm
│ └─vg1-lv1 252:7 0 28T 0 lvm
│ └─cachedev1 252:9 0 28T 0 dbb87555 /share/CACHEDEV1_DATA
├─vg1-tp1_tierdata_2_fcorig 252:3 0 38T 0 lvm
│ └─vg1-tp1_tierdata_2 252:4 0 38T 0 lvm
│ └─vg1-tp1-tpool 252:5 0 38T 0 lvm
│ ├─vg1-tp1 252:6 0 38T 0 lvm
│ └─vg1-lv1 252:7 0 28T 0 lvm
│ └─cachedev1 252:9 0 28T 0 dbb87555 /share/CACHEDEV1_DATA
└─vg1-lv1312 252:8 0 3.9G 0 lvm
drbd3 147:3 0 10.9T 0 disk
├─vg3-tp3_tmeta 252:13 0 64G 0 lvm
│ └─vg3-tp3-tpool 252:18 0 10.7T 0 lvm
│ ├─vg3-tp3 252:19 0 10.7T 0 lvm
│ └─vg3-lv2 252:20 0 10.7T 0 lvm
│ └─cachedev2 252:10 0 10.7T 0 c198f52f /share/CACHEDEV2_DATA
└─vg3-tp3_tierdata_2_fcorig 252:16 0 10.7T 0 lvm
└─vg3-tp3_tierdata_2 252:17 0 10.7T 0 lvm
└─vg3-tp3-tpool 252:18 0 10.7T 0 lvm
├─vg3-tp3 252:19 0 10.7T 0 lvm
└─vg3-lv2 252:20 0 10.7T 0 lvm
└─cachedev2 252:10 0 10.7T 0 c198f52f /share/CACHEDEV2_DATA
vg1-tp1_tierdata_0 252:1 0 4M 0 lvm
└─vg1-tp1-tpool 252:5 0 38T 0 lvm
├─vg1-tp1 252:6 0 38T 0 lvm
└─vg1-lv1 252:7 0 28T 0 lvm
└─cachedev1 252:9 0 28T 0 dbb87555 /share/CACHEDEV1_DATA
vg1-tp1_tierdata_1 252:2 0 4M 0 lvm
└─vg1-tp1-tpool 252:5 0 38T 0 lvm
├─vg1-tp1 252:6 0 38T 0 lvm
└─vg1-lv1 252:7 0 28T 0 lvm
└─cachedev1 252:9 0 28T 0 dbb87555 /share/CACHEDEV1_DATA
vg3-tp3_tierdata_0 252:14 0 4M 0 lvm
└─vg3-tp3-tpool 252:18 0 10.7T 0 lvm
├─vg3-tp3 252:19 0 10.7T 0 lvm
└─vg3-lv2 252:20 0 10.7T 0 lvm
└─cachedev2 252:10 0 10.7T 0 c198f52f /share/CACHEDEV2_DATA
vg3-tp3_tierdata_1 252:15 0 4M 0 lvm
└─vg3-tp3-tpool 252:18 0 10.7T 0 lvm
├─vg3-tp3 252:19 0 10.7T 0 lvm
└─vg3-lv2 252:20 0 10.7T 0 lvm
└─cachedev2 252:10 0 10.7T 0 c198f52f /share/CACHEDEV2_DATA

better you than me.

1 Like

Ok, back to not all of the md arrays coming up. For md256, Ubuntu thinks it’s made up of 12 spares and 0 active disks… idk what to do with that. On the QNAP that array assembles as a 14-member RAID 1 (two active disks plus twelve spares).

Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md3 : active raid0 sdi3[0] sdj3[1]
      11701135360 blocks super 1.0 512k chunks
      
md2 : active raid0 sdb3[0] sda3[1]
      917792768 blocks super 1.0 512k chunks
      
md1 : active raid10 sdc3[0] sdo3[14] sdh3[12] sdg3[11] sdr3[10] sdq3[9] sdp3[8] sdn3[7] sdm3[6] sdl3[5] sdk3[4] sdf3[3] sde3[2] sdd3[1]
      40953970176 blocks super 1.0 512K chunks 2 near-copies [14/14] [UUUUUUUUUUUUUU]
      
md256 : inactive sdf2[3](S) sdo2[8](S) sdk2[4](S) sdh2[13](S) sdr2[11](S) sdg2[12](S) sdl2[5](S) sdq2[10](S) sde2[2](S) sdm2[6](S) sdn2[7](S) sdp2[9](S)
      6361488 blocks super 1.0
       
md13 : active raid1 sdc4[0] sdj4[27] sdi4[14] sdb4[25] sda4[24] sdh4[13] sdg4[12] sdr4[11] sdq4[10] sdp4[9] sdo4[26] sdn4[7] sdm4[6] sdl4[5] sdk4[4] sdf4[3] sde4[2] sdd4[1]
      458880 blocks super 1.0 [24/18] [UUUUUUUUUUUUUUUUUU______]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdc1[0] sdj1[27] sdi1[14] sdb1[25] sda1[24] sdh1[13] sdg1[12] sdr1[11] sdq1[10] sdp1[9] sdo1[26] sdn1[7] sdm1[6] sdl1[5] sdk1[4] sdf1[3] sde1[2] sdd1[1]
      530048 blocks super 1.0 [24/18] [UUUUUUUUUUUUUUUUUU______]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

Hmmm…

1 Like

Oh god, wtf is drbd?

What is the total usable space of the md device?

Do you have spare drives to use for testing?

If it were me, I’d dd each disk partition that you’re trying to recover somewhere else as a backup.

Then try using only one disk from each mirror pair of the RAID 10, and see whether it will assemble and run in a degraded state.
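
Per member partition, something like this (destination path is just an example):

sudo dd if=/dev/sdc3 of=/mnt/backup/sdc3.img bs=64M conv=noerror,sync status=progress   # repeat for each member you keep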

I would absolutely do that if it were viable. The array is about 30TB. I can’t imagine how long that would take to dd, and there’s no hardware for it anyway.

Total?? … so is that 14 x 2TB drives, or 14 x 4TB drives?

You should only need half that for scratch disks, since it’s RAID 10.

It’s a RAID 10 of 14 x 6TB drives, which works out to roughly 38 TiB usable after mirroring; the LV is 30TB, so we left room for some overhead. QNAP uses a snapshot for rsync backups, so you want some play there, and it’s always a good idea to have some extra space…

Update: Apparently, reconstructing QNAP’s md/LVM stack is legitimately difficult. I just got a $65k quote with a 4-6 week timeline, only because they couldn’t sort out the md/LVM stack, which is completely functional and fine in the QTS OS.

At this point, it looks like dd’ing the whole EXT4 fs to an image file on another NAS is the most viable option… that sucks though.
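
Roughly what that would look like, using the cachedev1 device the data volume is mounted from (name taken from the lsblk above; the destination mount point is just an example):

sudo dd if=/dev/mapper/cachedev1 of=/mnt/othernas/cachedev1.img bs=64M conv=noerror,sync status=progress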

Anyone know any good recovery services?

1 Like

I think you’re almost there.

Have you tried checking the event count on each of the partitions (mdadm -E /dev/sdf2 /dev/sdo2 ...) and then trying an assemble with just the disks that have matching event counts?
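
Something along these lines, as a sketch (device list abbreviated; use whichever members report matching Events):

for d in /dev/sd[c-r]2; do echo "$d: $(sudo mdadm -E "$d" | grep -i events)"; done   # compare the Events field across members
sudo mdadm --stop /dev/md256                               # stop the half-assembled array first
sudo mdadm --assemble /dev/md256 /dev/sdc2 /dev/sdd2 ...   # then reassemble from only the members whose counts agree
# if the counts are close but not identical, adding --force can work, at some risk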

1 Like


ETA 20 hours…

Additionally, I am here waiting for a DHL courier delivery of 8 drives from Newegg that is supposed to arrive by midnight. These drives need to be added to the destination NAS before the transfer reaches 10TB.

…aaaaaaand the drives are a no-show.

Heads up, “local express” next day shipping from Newegg is bullshit. I’m fucked.

Sorry to hear that, man.

1 Like