Btrfs questions

I have been toying with the idea of making a RAID1 pool of storage on my machine, so before I do it on my real system I played around with btrfs in a virtual machine. The setup was 4 x 16G drives set up with raid1 via:

sudo mkfs.btrfs -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

Now, testing it by adding and removing a working drive was no problem. When I removed one drive from the system, however, it fails totally for me. You're meant to be able to mount the btrfs pool in degraded mode, but it fails for me. The only way to mount is with the read-only option, but then you cannot add or remove drives, and the data in read-only mode is corrupted. Does it take time for the system to set up raid1 or something? Some of the terminal output is below.

$ sudo btrfs filesystem show
Label: none uuid: 71a81fb7-7057-4b89-a5db-55419e0ea116
Total devices 1 FS bytes used 4.36GiB
devid 1 size 64.00GiB used 7.04GiB path /dev/sda1

warning, device 4 is missing
warning devid 4 not found already
Label: none uuid: 8d3cd7df-0c37-4e58-9bee-81e8b6b61f53
Total devices 4 FS bytes used 4.36GiB
devid 1 size 16.00GiB used 3.01GiB path /dev/sdb
devid 2 size 16.00GiB used 3.00GiB path /dev/sdc
devid 3 size 16.00GiB used 2.01GiB path /dev/sdd
*** Some devices missing

$ sudo mount -o degraded /dev/sdb /mnt/pool
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

$ sudo mount -o degraded,recovery /dev/sdb /mnt/pool
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

$ sudo mount -o ro,degraded,recovery /dev/sdb /mnt/pool
$

mount: wrong fs type, bad option, bad superblock on /dev/sdb,

Seems like "mount" is trying to tell you that you have to provide the filesystem type because it can't be determined. Example:

sudo mount /dev/sdb -t btrfs -o degraded /mnt/pool

Since you are dealing with a MIRROR ... usually you cannot mount a broken mirror in "degraded" mode, because if either one of the mirrors breaks, it is dead and needs to be rebuilt. You can only mount the working copy of the mirror (i.e. sda+sdb or sdc+sdd ... assuming it is split between the drives this way) and then rebuild once the failed drive (in your case an emulated VHD) on the other side has been fixed/replaced.

You can only mount a partially failed RAID5/6/... as degraded: for as long as the failed drive isn't replaced - or the replacement hasn't finished rebuilding - the missing data will be calculated on the fly from the XOR/parity sets.

Also ... looking at the btrfs manpage ... shouldn't you rather use "-m raid1" and "-d raid1" to mirror metadata and data, or is "-d raid1" automatically assumed when "-m raid1" is given?
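One way to check which profiles actually got applied is to ask btrfs itself (a sketch; the mount point /mnt/pool is taken from the example above):

```shell
# Show which RAID profiles the data and metadata chunks use.
# With only "-m raid1", data typically ends up unmirrored,
# which would explain why a missing device breaks the mount.
sudo btrfs filesystem df /mnt/pool
# Look for lines like:
#   Data, single: ...      <- data NOT mirrored
#   Metadata, RAID1: ...   <- metadata mirrored
```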

Wow, now I feel stupid. I'm sure I read that -m raid1 did both, but it does not. Thanks heaps, I can continue messing around with it more. I can recover from a missing drive now :) and the data is intact with a missing disk.
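For anyone following along, the recovery sequence looks roughly like this (a sketch, not a definitive procedure; /dev/sdf as the replacement disk is an assumption):

```shell
# Mount the surviving devices read-write in degraded mode
sudo mount -t btrfs -o degraded /dev/sdb /mnt/pool

# Add a replacement disk (here /dev/sdf, hypothetical), then remove
# the missing one so btrfs re-mirrors onto the new device
sudo btrfs device add /dev/sdf /mnt/pool
sudo btrfs device remove missing /mnt/pool
```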

Don't. We're dealing with Linux - where the commandline is mightier than the mouse. ;)
It's quite easy to miss out on some option, especially when you just started toying around with it.

Great you got it going. Keep on tinkering and enjoy the knowledge that comes with it.


Amen !

Btrfs is pretty confusing in the way it handles RAID. I think it's not setting up a true RAID1 in the sense we all understand, but instead is just mirroring the metadata. Metadata mirroring is actually enabled by default on HDDs.

@bjay actually hit the nail on the head

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde etc... would be the way to mirror metadata and data on a series of devices
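If the filesystem already exists, it should also be possible to convert the data profile in place with a balance rather than reformatting (a sketch, assuming the pool is mounted at /mnt/pool):

```shell
# Convert existing data and metadata chunks to the raid1 profile.
# This rewrites every chunk, so it can take a while on large pools.
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```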



^--- This.

Exactly the reason why I prefer md(adm) for software-RAID solutions for the time being.
Great documentation, easy to use, and true to the concepts of RAID-ism. ;)
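For comparison, a classic two-disk mirror with mdadm (a sketch; the device names /dev/sdb, /dev/sdc and the array name /dev/md0 are placeholders):

```shell
# Create a two-device RAID1 array from sdb and sdc
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial sync and check array health
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```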

I only use btrfs for its COW, subvolume and snapshotting features. The documentation about the built-in RAID functionality is as confusing as trying to read Mayan hieroglyphs.

[RANT] I mean ... by definition a RAID1 (mirror) is a defined set of hard drives being mirrored to a defined set of hard drives in (almost) real time (i.e. /dev/sda --> /dev/sdb or /dev/sd[ab] --> /dev/sd[cd]). In case either side breaks, the system continues operation with just the working copy while throwing an error about the failure - which means you have to "break" the mirror, replace the failed drive(s) and then rebuild the mirror. btrfs somehow leads this concept into a dense cloud of smoke where you can't really tell how it is actually mirroring the drives...

I think once Ubuntu (and all the other distros) ship with ZFS... btrfs is done. Not that I really need a Zettabyte filesystem just yet, but ZFS is better documented and several lightyears ahead of btrfs.

But as always... your viewpoint and/or mileage may vary.[/RANT]
