Extending my FreeNAS backup solution, adding a 3rd server?

Hi all,

If you’ve been following along, you’ve probably seen my last post, where I added a dedicated backup (Supermicro) server to my FreeNAS storage solution.

Here’s the current config:

  • Primary: FreeNAS host in a 24-bay Norco case.
  • Backup (1): FreeNAS host in a 24-bay Supermicro chassis.

At the moment I am running 8x 12TB WD UltraStar drives in the big-backup pool. I also threw all of my legacy WD Red (CMR), Red Pro, and RMA’d Red/Red Pro drives into a 2-vdev, 16-drive pool called big-deadpool.

My plan is to swap out those 16 drives for a new batch of 8x 12TB WD drives, bought as 2x 4-packs from B&H (WD 12TB Ultrastar 7200 rpm SATA 3.5" Internal Data Center HDD).

Should I also get another Supermicro build (pretty much identical to the existing backup: 2nd-gen Xeon Scalable, ECC RAM, the same mainboard and chassis, a matching LSI HBA card, etc.), or just stick with the locally replicated secondary “backup” array?

FYI - I’m also backing up critical data to Backblaze, but given the volumes I’m dealing with, none of my media/Plex content is backed up to the cloud.

Thoughts?

Thanks!

My main TrueNAS machine is both a backup for the Windows machines in my house and a backup Plex server for my primary cloud-based Plex server. It has a tertiary function as a Pi-hole VM host and UniFi controller. This box has a single Z2 array of 8 disks plus a hot spare, housed in a pair of 3x 5.25"-bay to 5x 3.5"-bay converter cages.

When I decided to build a second TrueNAS machine to “back up” this backup box via a 10Gb link to my detached garage for “off site” backups, I never once considered making the second box as complex as the first. It’s a used Dell business tower PC with an i5 CPU, and its pool is a striped pair of non-redundant 8TB HDDs, which adds up to the same size as the entire array on the main TrueNAS server.

So I guess my question is: why are you planning a 3rd redundant TrueNAS server, and why is your 2nd backup so complex? My opinion is that some simplification is called for here.

Unless, of course, this is simply your hobby and you enjoy it, so you’re making it this complicated in order to “have fun”.

edit - my critical data is on Backblaze as well.

If you don’t need an additional machine for compute, then I think you’re good with two. You could have a separate offline backup in case of ransomware or something, but if Something Very Bad happens (fire, flood, hit by meteor) that’s bad enough to take out both machines at once, then it’s probably bad enough to take out three as well. That’s what your Backblaze backups are for.

Great point, and to address @Jkay: this is something I’ve been pondering. If I have upstream intentional corruption of data, the primary will replicate that to the existing backup, and my recovery window will be the configured lifespan of the replication snapshots.

“backup1” can of course be configured with a longer retention lifespan, and I can still do that within the existing 2nd chassis.
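
Normally that’s just the snapshot lifetime on the periodic snapshot/replication tasks, but as a rough illustration of what “longer retention on backup1” means in practice, here’s a minimal sketch of pruning replicated snapshots beyond a longer window on the backup host. The dataset name and 90-day window are placeholders, and it only prints what it would destroy by default:

```python
#!/usr/bin/env python3
"""Prune replicated snapshots on the backup host beyond a retention window.

Minimal sketch only: the dataset name and retention period are placeholders,
and DRY_RUN is on by default so it just prints what it would destroy.
"""
import subprocess
import time

DATASET = "big-backup/media"   # placeholder dataset on the backup pool
RETENTION_DAYS = 90            # longer window than the primary's replication tasks
DRY_RUN = True                 # flip to False only after sanity-checking the output


def list_snapshots(dataset):
    # -H drops the header, -p prints the creation time as a unix timestamp
    out = subprocess.run(
        ["zfs", "list", "-H", "-p", "-t", "snapshot",
         "-o", "name,creation", "-r", dataset],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        name, creation = line.split("\t")
        yield name, int(creation)


def prune(dataset, retention_days):
    cutoff = time.time() - retention_days * 86400
    for name, creation in list_snapshots(dataset):
        # Newer snapshots are kept, so incremental replication still has a
        # common snapshot to work from.
        if creation < cutoff:
            if DRY_RUN:
                print(f"would destroy {name}")
            else:
                subprocess.run(["zfs", "destroy", name], check=True)


if __name__ == "__main__":
    prune(DATASET, RETENTION_DAYS)
```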

@Jkay makes a valid point that there isn’t an immediate need to make this more complex than it already is, so I’ve gone ahead and backordered 8x 12TB WD UltraStar 7200 rpm drives to replace the current “deadpool” of 16x 4TB WD drives.

I’ll comment on Backblaze in a follow-up post.

Right, I am not protected against electrical death, that’s for sure. Right now I have 3x APC 3kVA UPSes in front of everything, and I’m pretty much relying on those to keep my equipment safe.

The backup Supermicro chassis has server-style dual redundant PSUs.

Regarding Backblaze - if I were to push all my data across:

  1. There’s no way my upload speed/data cap would make this feasible.
  2. Even if that were possible, the cost to host 20-30TB of data in Backblaze is eye-watering.

So I’d rather offset the cost of cloud hosting and maintain a 2nd backup locally.
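
Just to put rough numbers on that (the per-TB rate here is an assumption on my part, not a quote from Backblaze’s current price list, so check their B2 pricing yourself):

```python
# Back-of-the-envelope only; the $/TB-month figure is an assumption, not quoted pricing.
ASSUMED_PRICE_PER_TB_MONTH = 6.0  # USD, roughly what B2-class object storage runs

for tb in (20, 30):
    monthly = tb * ASSUMED_PRICE_PER_TB_MONTH
    print(f"{tb} TB ≈ ${monthly:.0f}/month, ${monthly * 12:.0f}/year")
```

Either way it’s a recurring bill that a second local box doesn’t have.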

Just to clarify some terminology used by @Jkay - I do not consider my primary node a “backup”. The only backup is the second FreeNAS host (the Supermicro box).

Nope. Managing two boxes alone is enough work. I maintain:

  • Detailed postmortem reports in Notion for every drive that “faults” in FreeNAS
  • Each drive that “dies” is put through a complete burn-in process (rough sketch of this step below)
  • Logs from each burn-in run are pushed into a git repo
  • Chassis locations of the current drives are maintained in Notion
  • Disk reports from FreeNAS are reviewed twice daily, at 8am and 8pm
  • Drives are replaced, and all of the notes above updated, as they fault/die
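
For what it’s worth, the burn-in/logging step boils down to something like the sketch below. This is just an outline under my own assumptions (smartmontools and badblocks installed, placeholder device and repo paths), not the exact script I run - and badblocks -w is destructive, so it only ever touches a drive that is already out of the pool:

```python
#!/usr/bin/env python3
"""Run a drive burn-in and commit the report to a git repo.

Outline only: the device path, repo path, and the smartctl/badblocks
combination are assumptions. badblocks -w overwrites the whole disk, so this
must only be pointed at a drive that is already out of the pool.
"""
import datetime
import pathlib
import subprocess

DEVICE = "/dev/ada5"                         # placeholder: the faulted drive
REPO = pathlib.Path.home() / "burnin-logs"   # placeholder: an existing git repo


def run(cmd):
    # Capture stdout and stderr; don't raise, since errors from a dying drive
    # are exactly what we want in the report.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return f"$ {' '.join(cmd)}\n{result.stdout}{result.stderr}\n"


def burn_in(device):
    report = run(["smartctl", "-a", device])        # SMART state before the test
    report += run(["badblocks", "-wsv", device])    # destructive write/read pass
    report += run(["smartctl", "-a", device])       # SMART state after the test
    return report


def commit_report(device, report):
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    path = REPO / f"{pathlib.Path(device).name}-{stamp}.txt"
    path.write_text(report)
    subprocess.run(["git", "-C", str(REPO), "add", path.name], check=True)
    subprocess.run(["git", "-C", str(REPO), "commit", "-m",
                    f"Burn-in report for {device}"], check=True)


if __name__ == "__main__":
    commit_report(DEVICE, burn_in(DEVICE))
```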