I have 4 (3 TB) drives used for backups inside my PC case that I want to migrate to a Synology NAS. I was wondering if the drives could be migrated over one at a time by disabling RAID. Here is the output from btrfs filesystem usage:
Can anyone advise on how to tackle this? I would like to know if it’s possible to move the drives over. Obviously, this is easily solved by buying more drives to fill out the NAS bays, but this is a homelab setup, so I’m doing things on the cheap.
They are all linked together. You can only remove a disk by using btrfs device delete, which copies its data over to the remaining disks and obviously only works as long as there is enough free space left on them. Even a conversion to JBOD (the single profile) without RAID, which btrfs supports, won’t allow you to simply pull a single disk.
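For reference, a rough sketch of what that removal looks like, assuming the array is mounted at /mnt/backup and /dev/sdd is the disk you want to pull (both names are placeholders for your setup):

sudo btrfs device delete /dev/sdd /mnt/backup   # copies sdd's data onto the remaining disks; fails if they lack free space
sudo btrfs filesystem usage /mnt/backup         # check how the data is spread afterwards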
Or you make a backup of the array first, so breaking the array is safe and lets you do whatever you want with the 4 disks.
You can probably move the whole array to your NAS and the volume should be visible and mountable then. Why is migrating one disk at a time necessary? That’s very unusual.
AFAIK Synology NAS units don’t accept already-formatted drives internally, only over USB.
The NAS will want to format the drives when you insert them and try to use them internally.
In btrfs you have system chunks, metadata chunks and data chunks, and you can set different filters to instruct btrfs balance what to rewrite.
-dconvert=raid5 applies a filter to all data chunks and rewrites them as raid5.
-dconvert=raid0 removes the redundancy.
You can’t do raid1 for data with only two disks, since you have 4 TB of data and your disks are 3 TB each, so a two-disk raid1 only gives you 3 TB of usable space. But you can still do raid1 for system and metadata, and it’s worth doing IMHO.
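Something like this should do the conversion, assuming the array is mounted at /mnt/backup (placeholder path); treat it as a sketch, not a tested recipe:

sudo btrfs balance start -f -dconvert=raid0 -mconvert=raid1 -sconvert=raid1 /mnt/backup
# -dconvert=raid0 drops redundancy for data, -mconvert/-sconvert keep metadata and system chunks in raid1;
# -f is needed because explicitly operating on system chunks with -s requires force
sudo btrfs balance status /mnt/backup           # run in another shell to watch progress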
I think everyone has accidentally lost data before, and this situation is no different, which is why I am extra careful about the exact commands I run. I appreciate everyone’s helpfulness.
Most drives would have a narrow barcode sticker opposite the connector - what you read there would match one of the serials in /dev/disk/… or one of the serials returned by hdparm.
On that topic, you can send the disks to sleep and/or run a short smartctl test while keeping a finger on the drive to feel for changes in vibration.
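If you want concrete commands, something along these lines (with /dev/sdb as a stand-in for whichever disk you’re testing):

ls -l /dev/disk/by-id/                       # symlinks include model and serial and point at the sdX names
sudo hdparm -I /dev/sdb | grep -i serial     # read the serial straight from the drive
sudo smartctl -i /dev/sdb                    # same identity info via SMART
sudo hdparm -Y /dev/sdb                      # put the drive to sleep so it stops vibrating
sudo smartctl -t short /dev/sdb              # short self-test makes the drive seek, which you can feel or hear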
There are other ways too; some older drives had small LEDs on them, and your 3 TB disks might have them.
Are you powering down the system or hot-(un)plugging?
(It gets easier if you’re powering it down; there are more options that way.)
I prefer /dev/disk/by-id for all my drives. I also have labels on my bays so I don’t need to pull them out to see their id.
But I guess with at least one disk of redundancy, you can probably just do trial and error with single disks and check with btrfs to see what’s missing.
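Roughly, with the filesystem mounted at /mnt/backup (placeholder path):

sudo mount -o degraded /dev/sdb /mnt/backup  # if a member disk is absent, btrfs only mounts with the degraded option
sudo btrfs filesystem show                   # lists member devices and flags any that are missing
sudo btrfs device stats /mnt/backup          # per-device read/write/corruption error counters
sudo btrfs scrub start /mnt/backup           # verify checksums on the remaining disks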