I have a question about the best way to migrate and back up my data. I’ve been looking around and actually found a lot of answers, but they differ from (and even contradict) one another, and I can’t say 100% that they match my use case.
So, here’s what I’m doing:
- PC1 (Ubuntu Mate 20) has pool A (1 raidz1 vdev). It’s smb-shared to my LAN.
- PC2 (Ubuntu Mate 22), just built, about to create pool B (1 raidz2 vdev, much bigger).
- I want to copy everything from pool A to pool B, so that having access to either PC1 or PC2 lets network users read the data (the same way they currently read pool A). At that point, PC2 will replace PC1 as my always-on home server.
- After that’s done, I want to sporadically update pool A from pool B. Remember, PC2 will be the new main server, while PC1 will be turned on and off at my discretion. All activity will happen on PC2/pool B; PC1/pool A will just be a backup, but I’d want it to stay just as readable (so that when PC1 is on it can save the network trip, for instance, but also in case PC2 goes down in some emergency).
So my questions:
- What’s the best way to copy all data from pool A to pool B so as to replace PC1 with PC2 as server afterwards?
- What’s the best way to do the sporadic updates on the copy that’s left on pool A?
- Bonus: considering PC1 will now be mostly switched off, is there a way to automatically start the computer, run the update, and shut it down again once it’s finished, so I can move from sporadic to periodic updates? (I sketch my only rough idea further down.)
For (1), I’ve mostly found the answer is zfs snap then zfs send/receive, but it wasn’t clear to me whether this will just back up a snapshot to be restored later or whether it will create an identical, “live” dataset in pool B that I can immediately share via smb and use the way I was using pool A.
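For reference, this is roughly what I’ve pieced together from those answers for the initial copy (pool/dataset names and the snapshot name are just placeholders, and I haven’t actually run this yet):

```
# On PC1: one recursive snapshot of everything in pool A
sudo zfs snapshot -r poolA@migrate

# Stream the whole pool to PC2 over ssh.
#   send -R  = replication stream: child datasets, snapshots and properties
#   recv -u  = don't mount the received datasets yet (mountpoint properties
#              come along with -R, so I'd sort mountpoints out afterwards)
sudo zfs send -R poolA@migrate | ssh user@pc2 sudo zfs receive -u poolB/data
```

If I understand it right, what lands on pool B should be normal, writable datasets (not some opaque archive), so in theory I could point Samba at them straight away, but that’s exactly the part I’d like confirmed.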
For (2), I found multiple suggestions and third-party tools (more ZFS snap+send, rsync, syncoid, unison…), with people describing all sorts of successes and failures. Again, I’m confused about which tool best serves my purpose here.
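If it matters, the variant of the snap+send approach I’ve seen for the updates is an incremental send from pool B back to pool A, something along these lines (again just a sketch with placeholder names, and I’m not sure I’ve got the dataset paths right for my layout):

```
# On PC2: new recursive snapshot of the live data
sudo zfs snapshot -r poolB/data@backup-new

# Send only the changes since the last snapshot both pools still share.
#   -i @backup-old = incremental from the previous common snapshot
#   -R             = keep it recursive (child datasets, properties)
#   recv -F        = roll pool A's copy back to the common snapshot first,
#                    so stray local writes on PC1 don't break the receive
sudo zfs send -R -i @backup-old poolB/data@backup-new \
  | ssh user@pc1 sudo zfs receive -F poolA/data
```

As far as I can tell, syncoid basically automates most of this bookkeeping (finding the newest common snapshot, taking the sync snapshots for you), which is why I’m wondering whether it’s the saner choice over doing it by hand.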
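For (3), the only rough idea I’ve come up with so far is a cron job on PC2 that wakes PC1 over the LAN, waits for it, runs the update, and powers it off again. Something like this (MAC address, hostnames and dataset names are all placeholders; it assumes key-based ssh, passwordless sudo for the poweroff, and Wake-on-LAN enabled in PC1’s firmware):

```
#!/bin/bash
# Sketch of a periodic backup job to run from cron on PC2
set -e

PC1_MAC="aa:bb:cc:dd:ee:ff"   # MAC of PC1's NIC
PC1_HOST="pc1.lan"

# 1. Wake PC1 (needs the 'wakeonlan' package; etherwake would also work)
wakeonlan "$PC1_MAC"

# 2. Wait until PC1 answers over ssh
until ssh -o ConnectTimeout=5 "$PC1_HOST" true; do sleep 10; done

# 3. Run the incremental update (syncoid here, or the zfs send above)
syncoid --recursive poolB/data root@"$PC1_HOST":poolA/data

# 4. Power PC1 off again
ssh "$PC1_HOST" sudo poweroff
```

Then something like `0 3 * * 0 /usr/local/bin/backup-to-pc1.sh` in root’s crontab would make it weekly. No idea whether this is a sane way to do it, though, hence the bonus question.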
Thanks in advance for any answers!