BCache, SoftRaid Linux Media Server?

I have a Dell PowerEdge T320 server with a PERC H310 flashed to IT mode, plus a stock H710 (not installed), 32GB ECC RAM, 8x 4TB WD Reds, 2x 256GB ADATA SX8200 Pro NVMe SSDs (for caching) and an Intel quad-port gigabit NIC.

I want read and write caches in front of the spinning rust, so that media gets cached when it's read back for streaming.

I've been giving FreeNAS a go but I'm rather dissatisfied, and I'm interested in bcache + some form of software RAID. Does anyone have experience with this sort of thing? Is it also possible to set up something similar to how FreeNAS does jail IP multiplexing? I'd like to do something like that with Docker.

ZFS is the best option for your use case. What about FreeNAS were you dissatisfied with? ZFS works great on Ubuntu, btw.

I don’t know if you can do it in the same way that Jails do, but you can assign IPs to docker containers, yes.
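One common way is a macvlan network, which gives each container its own address on the LAN. A minimal sketch, where the subnet, gateway, parent NIC and container image are all placeholders for whatever your network actually uses:

```
# Placeholder subnet/gateway/NIC; swap in your LAN's values.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan

# Each container can then get its own LAN IP, a bit like a jail with its own address.
docker run -d --name sonarr --network lan --ip 192.168.1.50 linuxserver/sonarr
```

One quirk to be aware of: with macvlan, containers can't talk to the host's own IP directly, which may or may not matter for your setup.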

FreeNAS jails keep going offline, and the permissions system is messed up compared to Linux. Jails have users and groups that don't cross over, and you have to do some magic to allow different jails to read/write/execute the same datasets. For example, my HDHomeRun records an episode, then Sonarr should pull it in, rename it, add cover art, etc., and sort it into the TV Shows dataset. That doesn't happen, or rarely happens, on FreeNAS.

I’m not a huge fan of ZFS. For all of its benefits, I find it overly complicated.

My home “server” uses mdadm RAID10 with the far2 layout on SATA rust, with a bcache SSD as cache.

I use KVM to spin up virtual machines and keep things isolated. I prefer the simpler approach of a bunch of smaller, commodity “systems” to a monolithic NAS that manages access controls and the like.


Do you have a guide I can follow for mdadm and bcache?
I have 8x 4TB WD Reds, 2x ADATA SX8200 Pro NVMe SSDs (for caching) and a Kingston A400 SSD for the OS.

Not in one concise place, but the fact that it’s kinda simple is the point.

For creating the mdadm arrays:

https://wiki.archlinux.org/index.php/RAID#Example_3._RAID10,far2
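The whole mdadm step for one mirrored pair looks roughly like this; device names are examples only, so check lsblk against your own drives first:

```
# Example devices only: one pair of the WD Reds, partitioned as sdb1/sdc1.
mdadm --create --verbose /dev/md/media0 --level=10 --layout=f2 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the initial sync, then record the array so it assembles at boot
# (on Debian/Ubuntu the file lives at /etc/mdadm/mdadm.conf instead).
cat /proc/mdstat
mdadm --detail --scan >> /etc/mdadm.conf
```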

For creating the bcache:

https://wiki.archlinux.org/index.php/Bcache#Installation_to_a_bcache_device
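And bcache is about the same amount of typing. A rough sketch, assuming the md array from above is the backing device and one of your NVMe drives is the cache (names are placeholders again):

```
# Needs bcache-tools installed; device names are examples only.
make-bcache -B /dev/md/media0        # backing device (the slow array)
make-bcache -C /dev/nvme0n1          # cache device (the fast SSD)

# Attach the cache set to the backing device using its cset UUID.
bcache-super-show /dev/nvme0n1 | grep cset.uuid
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Optional: cache writes as well as reads, then format the resulting device.
echo writeback > /sys/block/bcache0/bcache/cache_mode
mkfs.ext4 /dev/bcache0
```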

It doesn’t use any special Arch magic, so those commands work fine on any distro.

Each step is basically one command. Figuring out how you want to organize your storage space is the hard part.

How do you recommend organizing space?

That depends on your needs and your physical storage.

I don’t trust parity raid, especially on larger disks. I like mirrors.

I’d take each pair of rust drives and mirror them using RAID10, far2. Then make those arrays LVM physical volumes and split out whatever you need as LVM logical volumes.

Use the bcache cache on whatever logical volumes you’d want to benefit from it.
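A rough sketch of that layering with made-up names, same idea as the commands earlier, just with a logical volume as the bcache backing device instead of the raw array: the mirrored pairs become LVM physical volumes, you carve out logical volumes, and only the volumes that need it get the SSD cache in front of them.

```
# Example array, VG and LV names; sizes are arbitrary.
pvcreate /dev/md/media0 /dev/md/media1
vgcreate tank /dev/md/media0 /dev/md/media1

lvcreate -L 6T -n media tank
lvcreate -L 500G -n backups tank

# Only the media LV gets the SSD cache; backups stays uncached.
make-bcache -B /dev/tank/media -C /dev/nvme0n1
mkfs.xfs /dev/bcache0
mkfs.xfs /dev/tank/backups
```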

For me, that varies over time.

What is RAID10 far2? Especially since, by any standard I can find, RAID 10 requires at least 4 drives.

And with 8 drives, how many could fail without losing the array?

mdadm's raid10 is its own implementation rather than nested RAID 1+0, so it works with as few as two drives; the far2 layout just controls where the two copies of each block live. With the two-disk pairs suggested above, each pair holds both copies, so any single drive in a pair can fail without losing that array. This covers the near/far/offset layouts and their performance:

http://www.ilsistemista.net/index.php/linux-a-unix/35-linux-software-raid-10-layouts-performance-near-far-and-offset-benchmark-analysis.html?start=5

There are many ways you might do this. I like running Proxmox, as my machine is mainly VM-focused, then using ZFS to export some NFS shares for storing my files and backing up to. Bcache is an attractive technology, especially given its scalability compared to something like L2ARC and ZIL (the ZIL is technically just journalling, but let's ignore that and focus on the impact on write performance).
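The ZFS-to-NFS part is pleasantly short, for what it's worth; pool and dataset names here are just examples:

```
# Example pool/dataset; sharenfs hands the export to the host's NFS server.
zfs create tank/media
zfs set sharenfs=on tank/media

# Or scope the export to a specific subnet:
zfs set sharenfs="rw=@192.168.1.0/24" tank/media
```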

For this sort of application I have an SSD pool that I use for OS images, and then a separate hard disk pool for cold storage. For Windows clients, I have a VM running OpenMediaVault that mounts NFS shares from the host and then re-shares them over SMB. It sounds complex, but storing backups on the pool rather than inside an image is much more comfortable, I think.
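Concretely, inside the OMV VM it's just an NFS mount pointed back at the host (address and paths are placeholders), with the SMB share layered on top of that mount:

```
# Inside the OpenMediaVault VM; host IP and paths are examples.
mount -t nfs 192.168.1.10:/tank/media /srv/media

# Then point an SMB share at /srv/media (via the OMV web UI or smb.conf)
# so Windows clients see an ordinary network share.
```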

I’ve also heard good things about UnRAID, but I don’t want to pay for software.

On my main machine I will probably experiment with bcache, as I'm not bottlenecked by GbE there, and I want to use software that's in the kernel where possible instead of being tied to the ZFS kernel module.

I find FreeNAS to be pretty average and inflexible, but I quite like ZFS. It provides good integration between the filesystem and volume management, especially with thin provisioning, snapshots and checksumming. On my server I run Proxmox on ZFS, then run small VMs for different tasks (SMB, etc.). The Proxmox server hosts NFS shares so I get bare-metal performance.
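Snapshots are a big part of why; a quick example of the sort of thing I mean, with made-up dataset names:

```
# Cheap, instant snapshots you can roll back to (names are examples).
zfs snapshot tank/vmdata@pre-upgrade
zfs rollback tank/vmdata@pre-upgrade

# A sparse (thin-provisioned) zvol for a VM disk: space is only used as it's written.
zfs create -s -V 100G tank/win10-disk
```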

But you don’t have to use ZFS. On my main machine I use mdadm + XFS. I think bcache has a lot of potential, and I plan to use it in a future rebuild (probably soon). I am curious how bcache holds up when you are using a big (~1TB) cache drive in front of a slower array of SMR disks.

Anyway, the fun with this stuff is building it and finding out. It might not work great the first time, but that just means you can rethink it and make it better next time.