[Help] Best practices for mdadm, mdraid, LVM, and Btrfs? On Linux

So, on my journey to build a new file server, I wanted something with the ability to add new disks with ease, like Synology/Unraid.
However, unlike Unraid, I want more IOPS.
FreeNAS is great, but I don't want to add a whole set of drives just because I need a few extra TB of space.
I currently have a Synology box that has been great for 4 years. However, I'm moving over to 10GbE and I really can't justify the cost of their rackmount solutions.

I would like to know if anyone can create a good guide, or has any links to good guides, for using mdadm/mdraid with LVM and Btrfs.
Something like how Synology has implemented this would be awesome!

If you're wanting software-defined storage (no RAID accelerators or HBAs), I recommend looking into ZFS over Btrfs.
I haven't gone down the ZFS rabbit hole yet, as I don't have enough CPU horsepower to compete with my current hardware RAID 5 setup, but it looks to be very easy to have ZFS just add /dev/device to an existing pool and lose NO data at all.

ZFS is very powerful, but adding individual devices isn't really easy. Once you have a vdev, you can't add or remove individual disks from it - you'd need to recreate it entirely. You can add new vdevs, but in this situation you'd be adding one vdev per disk, and that limits your flexibility, as by default ZFS stripes across vdevs. I don't really know of a good solution where you can just add a disk here and there easily. LVM, I think, can kind of do this, but of course you need a filesystem on top of it.
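To make that concrete, here's a sketch (pool name `tank` and the device names are hypothetical) of how adding a disk to a ZFS pool creates a new top-level vdev rather than growing an existing one:

```shell
# Create a pool whose single vdev is a two-disk mirror
zpool create tank mirror /dev/sdb /dev/sdc

# This adds /dev/sdd as a SECOND, single-disk vdev; it does not join
# the mirror. Data is then striped across the mirror vdev and the lone
# disk, which has no redundancy (-f is needed to override the
# mismatched-replication warning).
zpool add -f tank /dev/sdd

# Attaching (not adding) is how you grow a mirror vdev itself:
# this turns the two-way mirror into a three-way mirror.
zpool attach tank /dev/sdb /dev/sde
```

These commands need root and real (or loopback) block devices, so treat them as illustration rather than a copy-paste recipe.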


OP should just stick with LVM then. You can pop new disks in, create a new physical volume, attach it to a volume group, and then expand logical volumes as needed.

If they use Cockpit then they can actually have a nice GUI to do this in with the cockpit-storaged package.

This only works for adding though, not removing. Pretty much no filesystem does removal of disks.
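A minimal sketch of that add-a-disk workflow, assuming the new disk shows up as /dev/sdX, the volume group is called vg0, and the logical volume data carries an ext4 filesystem (all hypothetical names):

```shell
# Initialize the new disk as an LVM physical volume
pvcreate /dev/sdX

# Add it to the existing volume group
vgextend vg0 /dev/sdX

# Grow the logical volume into the new free space
lvextend -l +100%FREE /dev/vg0/data

# Grow the filesystem to match (resize2fs for ext4; online resize works)
resize2fs /dev/vg0/data
```

Note this just spans the volume across the disks linearly - no redundancy, and losing any one disk takes the whole volume with it.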


You suggest using LVM, but how would you configure the disks under it for such a setup?
You wouldn't do a typical RAID for this?

You don't have to configure RAID to use multiple disks with LVM.

That said, if you want greater redundancy, then what a lot of people do is something like RAID 1 or RAID 10 under the hood; at install time that block storage is presented to the OS, which runs LVM on top of it.
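A rough sketch of that layering, assuming four disks /dev/sd[b-e] (hypothetical names) in an mdadm RAID 10 array with LVM on top:

```shell
# Build a RAID 10 array from four disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Layer LVM on top of the md device
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -n data -l 100%FREE vg0
mkfs.ext4 /dev/vg0/data

# Persist the array definition so it assembles at boot
# (the config path varies by distro: /etc/mdadm/mdadm.conf on
# Debian/Ubuntu, /etc/mdadm.conf on others)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

The trade-off the thread is circling: this gives you redundancy and striped IOPS, but growing it later generally means adding disks in matched sets rather than one at a time.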

That RAID-based approach doesn't scale horizontally, though.

So I guess the question is: what would you prefer, flexibility or redundancy?


Hardware RAID has seriously fallen out of favor.

LVM will do software RAID for you, as well as allowing you to extend across more drives later.
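For example, a sketch of LVM's built-in RAID support (assuming two spare disks /dev/sdb and /dev/sdc and a volume group vg0, all hypothetical names):

```shell
# Create a mirrored (RAID 1) logical volume directly with LVM,
# no separate mdadm array needed
pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc
lvcreate --type raid1 -m 1 -n data -l 100%FREE vg0
```

lvcreate also accepts --type raid5, raid6, and raid10 with enough physical volumes; under the hood it uses the same kernel md code as mdadm.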

But note that the flexibility means you won't be getting extra IOPS on your existing volume when you add a couple more drives. Whereas if you wiped the drives and striped them all together as one big new volume, you would. Only pointing this out because your first post mentioned IOPS.
