Migrate btrfs drives to NAS

I have 4 (3TB) drives used for backups inside my PC case that I want to migrate to a Synology NAS bay. I was wondering if the drives could be migrated over one at a time by disabling RAID. Here is the output from btrfs filesystem usage:

Overall:
Device size:          10.92TiB
Device allocated:          8.44TiB
Device unallocated:        2.47TiB
Device missing:          0.00B
Used:              8.43TiB
Free (estimated):          1.24TiB  (min: 1.24TiB)
Free (statfs, df):         1.24TiB
Data ratio:               2.00
Metadata ratio:           2.00
Global reserve:      512.00MiB  (used: 0.00B)
Multiple profiles:              no

Data,RAID6: Size:4.21TiB, Used:4.21TiB (99.84%)
/dev/mapper/sdb_crypt      2.11TiB
/dev/mapper/sdc_crypt      2.11TiB
/dev/mapper/sdd_crypt      2.11TiB
/dev/mapper/sde_crypt      2.11TiB

Metadata,RAID6: Size:6.00GiB, Used:5.33GiB (88.87%)
/dev/mapper/sdb_crypt      3.00GiB
/dev/mapper/sdc_crypt      3.00GiB
/dev/mapper/sdd_crypt      3.00GiB
/dev/mapper/sde_crypt      3.00GiB

System,RAID6: Size:64.00MiB, Used:448.00KiB (0.68%)
/dev/mapper/sdb_crypt     32.00MiB
/dev/mapper/sdc_crypt     32.00MiB
/dev/mapper/sdd_crypt     32.00MiB
/dev/mapper/sde_crypt     32.00MiB

Unallocated:
/dev/mapper/sdb_crypt    633.49GiB
/dev/mapper/sdc_crypt    633.49GiB
/dev/mapper/sdd_crypt    633.49GiB
/dev/mapper/sde_crypt    633.49GiB

Can anyone advise on how to tackle this? I would like to know if it’s possible to move the drives over. Obviously, this is easily solved by buying more drives to fill out the NAS bay, but this is a homelab setup, so I’m doing things on the cheap. :slight_smile:

They are all linked together. You can only remove a disk with btrfs device delete, which copies that disk’s data over to the remaining disks, so it only works as long as there is enough free space. Even a conversion to JBOD without RAID, which is possible with btrfs, won’t let you simply pull a single disk out.
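
For reference, a single removal would look roughly like this, assuming the filesystem is mounted at /mnt and picking one of the /dev/mapper/*_crypt members from your output as an example:

btrfs device delete /dev/mapper/sde_crypt /mnt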

Or you make a backup of the array first; then breaking the array is safe and lets you do whatever you want with the 4 disks.

You can probably move the whole array over to your NAS, and the volume should then be visible and mountable. Why is migrating one disk at a time necessary? That’s very unusual.

Afaik Synology NAS don’t accept already formatted drives internally, only over USB.
The NAS will want to format the drives when you insert them and intend to use them.

Yes.

You have 4 disks in raid6 and about 1.5 disks’ worth of data.

You could:

btrfs balance start -f -sconvert=raid1 -mconvert=raid1 -mconvert=raid5 /mnt
btrfs device delete /dev/sdd /mnt

Both of these will take a while.
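
If you want to keep an eye on them, these stock read-only commands should do it (with /mnt standing in for your mount point): the first reports balance progress, the second shows each device’s allocation draining while the delete runs.

btrfs balance status /mnt
btrfs device usage /mnt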

Then you can take /dev/sdd to the Synology and let it format it however it wants.


https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices

Thanks for the helpful response. I took a look at the link you posted, but have a few questions:

  • Why are there 2 -mconvert flags? Should the second be -dconvert?
  • What should I do when removing the second drive? Perhaps:
btrfs balance start -f -sconvert=raid1 -mconvert=raid1 -dconvert=raid1 /mnt

Sorry, think-o. Yes, dconvert.

In btrfs you have system chunks, metadata chunks and data chunks, and you can set different filters to instruct btrfs how to balance each of them.

-dconvert=raid5 will apply a filter to all data chunks and rewrite them as raid5.

-dconvert=raid0 will remove the redundancy.

You can’t do raid1 for data with two disks: raid1 keeps two copies of everything, and two copies of your roughly 4TiB of data won’t fit on two 3TB drives. But you can still do raid1 for system and metadata, and it’s worth doing IMHO.
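
Once a conversion finishes, you can check which profile each chunk type ended up with; it’s just a read-only query, with /mnt as a placeholder for your mount point:

btrfs filesystem df /mnt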

Whatever you do, back up your important stuff first.

Playing with taking disks out of a raid as a migration plan is always fraught with danger, in the event that you

  • screw up one of the commands
  • have a drive failure during the process
  • mis-identify a drive during removal, etc.

Thinking “oh, of course I won’t, I’ll be careful” is how you lose data. Ask me how I know :smiley:

So maybe the commands should be:

btrfs balance start -f -sconvert=raid1 -mconvert=raid1 -dconvert=raid0 /mnt
btrfs device delete /dev/sde /mnt
btrfs device delete /dev/sdd /mnt
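
And presumably I should verify between steps that a device is really gone from the filesystem before I touch any hardware; if I read the docs right, this lists the remaining members:

btrfs filesystem show /mnt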

I think everyone has accidentally lost data before, and this situation is no different, which is why I am extra careful about the exact commands I run. I appreciate everyone’s helpfulness.

A second noob question in this thread: After I have removed the disks from the filesystem, how do I figure out which drives to unplug in my case?

I was just going to go by trial and error until I found the two disks I can pull with the filesystem still mounting, but I’m open to ideas! :smiley:

Most drives have a narrow barcode sticker opposite the connector; what you read there will match one of the serials in /dev/disk/… or one of the serials returned by hdparm.
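
If memory serves, hdparm can print the serial directly (sdX here is a placeholder for the drive in question):

hdparm -I /dev/sdX | grep -i serial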

On that topic, you can send the disks to sleep and/or run a short smartctl self-test while touching the drive with a finger to feel the change in vibration.
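
Something along these lines, again with sdX as a placeholder; -Y sends a drive to sleep so it stops vibrating, while the short self-test does the opposite and makes it seek for a couple of minutes:

hdparm -Y /dev/sdX
smartctl -t short /dev/sdX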

There are other ways too; some older drives had small LEDs on them, and your 3TB disks might have them.


Are you powering down the system or hot-(un)plugging?

(It gets easier if you’re powering it down… there are more options that way.)

To get the devices in the btrfs filesystem:

btrfs filesystem show /mnt/mount

Then one of the following to get the serial number:

smartctl -a /dev/sdX | grep Serial
ls -l /dev/disk/by-id/* | grep sdX

The “/dev/disk/by-path” directory can also be useful if you’re using something with hot swap bays, as the path to each bay is fixed.
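
For example, a plain listing shows which sdX device sits behind which controller port or bay:

ls -l /dev/disk/by-path/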

I prefer /dev/disk/by-id for all my drives. I also have labels on my bays so I don’t need to pull them out to see their IDs.

But I guess with at least one disk of redundancy, you can probably just use trial and error with single disks and check the btrfs output to see which device is missing.
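
Something like this after each unplug attempt (with the box powered down for the swap, and the remaining dm-crypt devices unlocked) would show whether btrfs considers a member missing:

btrfs filesystem show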
