Have you run “mdadm --stop /dev/md0” yet?
Yes, to be able to read the drives and look for backup superblocks.
sudo mdadm -A /dev/md0 /dev/sd[bcde]1
mdadm: /dev/sdb1 is busy - skipping
mdadm: Merging with already-assembled /dev/md/0
mdadm: /dev/md/0 assembled from 2 drives and 1 rebuilding - not enough to start the array
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : inactive sdc1[6](S) sde1[4](S) sdd1[7](S) sdb1[5](S)
11720544030 blocks super 1.2
unused devices: <none>
sdb1 looks like your problem, with its "Events : 27374" count, which is why I thought you might be able to build it with cde, but you ran bcde.
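If you want to see exactly how far behind sdb1 is, comparing the per-member event counters is quick. A sketch, assuming the same /dev/sd[bcde]1 device names:

```shell
# Print each member's event counter and role; a member whose Events
# value lags the others was dropped from the array at some point.
for d in /dev/sd[bcde]1; do
    echo "== $d =="
    sudo mdadm --examine "$d" | grep -E 'Events|Device Role|Array State'
done
```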
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=1a63917c:d8c53d78:90f31fa3:3ef66b20 name=wrath:0
# This configuration was auto-generated on Sat, 22 Apr 2017 10:42:46 -0500 by mkconf
I added the "ARRAY /dev/md/0…" line myself.
oh. doing that now
well, I don’t think that would explain the busy error messages.
It did not complain about the missing drive:
sudo mdadm -A /dev/md0 /dev/sd[bcd]1
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.
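For what it's worth, when the members' event counts disagree, a forced assembly is usually attempted before anything destructive; it tells mdadm to accept the stale superblock. A sketch, same device names assumed:

```shell
# Stop any half-assembled array first, then force assembly even though
# the event counters disagree; --run starts it degraded if necessary.
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force --run /dev/md0 /dev/sd[bcde]1
cat /proc/mdstat
```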
sudo mdadm --misc --detail /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 3
Persistence : Superblock is persistent
State : inactive
Working Devices : 3
Name : wrath:0 (local to host wrath)
UUID : 1a63917c:d8c53d78:90f31fa3:3ef66b20
Events : 28169
Number Major Minor RaidDevice
- 8 49 - /dev/sdd1
- 8 33 - /dev/sdc1
- 8 17 - /dev/sdb1
ummm… that says RAID level 0… shouldn't that be a 5…
yep sure should
Might be time for the more drastic steps… I'll leave it here for when all other options have been exhausted:
mdadm --manage --stop /dev/md0
mdadm --create --assume-clean --level=5 --chunk 512 --raid-devices=4 /dev/md1 /dev/sd[bcde]1
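Before running that, it would be worth saving each member's current superblock details; for --create to leave the data intact, the chunk size, data offset, and device order all have to match the original layout. A sketch, assuming the same device names:

```shell
# Record every member's superblock before recreating; the Data Offset,
# Chunk Size, and Device Role lines are what the new --create has to match.
for d in /dev/sd[bcde]1; do
    sudo mdadm --examine "$d"
done | tee ~/md0-superblocks-backup.txt
```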
will it wipe or try to work with my data?
would adding my drives to /etc/mdadm/mdadm.conf help?
So it's the drastic option in that it'll try to rebuild the array as a new device. It SHOULD NOT mess with your data with the --assume-clean flag, but I have had to do a partition recovery after that command before. It's definitely one of the scarier options.
I'll try it; the bulk of the RAID is downloads, I only have a few gigs otherwise.
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md1 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0]
8790398976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/22 pages [0KB], 65536KB chunk
unused devices: <none>
Looks like you're back in business. Make sure to update your fstab.
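For the fstab update, the entry should use the filesystem UUID of the new md1 (not the array UUID shown by mdadm); the fs type and mount options below are assumptions:

```shell
# Get the filesystem UUID to reference in /etc/fstab.
sudo blkid /dev/md1
# Example /etc/fstab line (UUID, fs type, and options are placeholders):
# UUID=<fs-uuid-from-blkid>  /media/raid  ext4  defaults,nofail  0  2
```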
sudo mount /dev/md1 /media/raid/
mount: /media/raid: wrong fs type, bad option, bad superblock on /dev/md1, missing codepage or helper program, or other error.
It will not mount md1.
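A first diagnostic step might be to check what the kernel logged and whether a filesystem signature is still visible (if --create used a different chunk size, data offset, or device order than the original array, the filesystem won't be found where mount expects it):

```shell
# Kernel's reason for the failed mount.
dmesg | tail -n 20
# Check whether a filesystem signature survives on the array.
sudo file -s /dev/md1
```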