TrueNAS replication

Hi All,

I have a question relating to ZFS replication.

My friend and I each have our own independent TrueNAS server.

We would like to set up a replication / sync task between them, where their server is the backup target for my storage pool and mine is the replication target for theirs, essentially giving us an off-site, geographically separate data backup.

I have my pool constructed like below:

Pool
 - Dataset 1
 - Dataset 2
   - Sub-Dataset 1
   - Sub-Dataset 2
 - Dataset 3
 - Dataset 4
 - Friend's Dataset storage <- This is where their data will back up to on my server

And my friend has their pool like below:

Pool
 - Dataset A
 - Dataset B
   - Sub-Dataset A
   - Sub-Dataset B
 - Dataset C
 - Dataset D
 - My Dataset storage <- This is where my data will back up to on their server

I would like to know the best advised approach to this. We have separate snapshot tasks configured for each dataset as needed, as well as encryption on the pools at the top level. Ideally, each of our backups would have its own separate encryption, so that for security neither side can access the other's data.

Ideally, we want to do a complete backup of the pool (sans the other party's backup dataset) to the backup dataset on the other host.
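
For what it's worth, my rough (and untested) understanding is that a raw send would keep each side's data locked with its owner's keys. A sketch of what I imagine the initial copy of one of my datasets looking like, where the dataset, host, and account names are just placeholders:

    # Recursive snapshot of one of my top-level datasets
    zfs snapshot -r Pool/Dataset2@offsite-initial

    # A raw (-w) replication send keeps the data encrypted with my key, so the
    # other side stores it but cannot unlock it; -R includes the child
    # datasets and their snapshots.
    zfs send -Rw Pool/Dataset2@offsite-initial | ssh repl-user@friend-nas.example zfs receive -u Pool/My-Dataset-storage/Dataset2

    # Repeating this per top-level dataset would also keep my friend's backup
    # dataset out of the stream.

I haven't been able to verify any of this, hence the question.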

As these are both production units, and we don't have the resources to sandbox this for me to test it out, I am reaching out for some assistance on doing it correctly from the start.

It would also be good if we can have the sync run through a non-root user, and we can create specific named accounts on each end for this. We have tried creating them and loading in the SSH key pairs, etc. However, it never seems to accept the keys for the user and always asks for a password, so perhaps I'm doing something wrong there.
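
For reference, this is roughly the shape of what we tried for the dedicated account, in case someone can spot the mistake (user, host, and path names are made up):

    # On the sending server: key pair for the dedicated replication account
    ssh-keygen -t ed25519 -f ~/.ssh/replication_key

    # The public key was pasted into the matching user's SSH public key field
    # in the TrueNAS UI on the other server, so it ends up in that account's
    # ~/.ssh/authorized_keys.

    # Permissions on the receiving account, since overly open permissions are
    # a common reason key logins get refused:
    chmod 700 ~replication/.ssh
    chmod 600 ~replication/.ssh/authorized_keys

    # A test from the sending side still prompts for a password:
    ssh -i ~/.ssh/replication_key replication@friend-nas.example echo ok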

Thanks for reading all. Looking forward to reading your replies.

Jason.

Easy way:
Export a block device to the other party.

On receiving the block device, put a file system on it with any encryption you like.

The file system belongs to the receiving party so they have root and all keys.

On the host computer it is a single large file owned by a regular user.
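
On TrueNAS that could look something like the following, where the sizes, paths, and names are only an example; the backing device is then shared out as an iSCSI extent from the UI:

    # Host side: one large sparse file under the dataset reserved for the
    # other party; this file backs the exported block device.
    truncate -s 4T /mnt/Pool/Friends-Dataset-storage/friend-backup.img

    # A sparse zvol would work as the backing device instead:
    zfs create -s -V 4T Pool/Friends-Dataset-storage/friend-backup

    # Either one gets exported as an iSCSI extent, and the other party puts
    # their own encrypted file system on top of it, holding all the keys.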

To make life easier for each of you, make a new pool with new vdevs with local redundancy. In the case of a drive failure, a resilver will run much faster locally than over the internet.
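
For example, on the exporting host the backing storage could live on a pool with its own raidz redundancy (pool and disk names are placeholders):

    # Host-side pool with local redundancy; a failed disk resilvers here,
    # with no traffic to the other party.
    zpool create exports raidz2 da2 da3 da4 da5 da6 da7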

On the client side it looks like a JBOD, as all of the redundancy is transparent; block devices can be added to the client pool to expand their space.
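
Roughly, on the importing side, with the exported devices showing up as ordinary disks (device names below are again placeholders):

    # First imported block device becomes a simple single-disk pool
    zpool create offsite-backup da10

    # A later exported device is just striped in to grow the pool;
    # redundancy stays on the exporting host, not here.
    zpool add offsite-backup da11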

Also look into zfs send and receive. Note that this only works well if your virtual memory (swap) partitions are not in the pool with your data.
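
After the first full copy, later runs only need to send the changes between two snapshots. A minimal sketch, with made-up snapshot, dataset, and host names:

    # First run: full send of the oldest snapshot
    zfs send Pool/Dataset1@snap1 | ssh repl-user@remote zfs receive -u Pool/backup/Dataset1

    # Subsequent runs: incremental from the last snapshot both sides already have
    zfs send -i @snap1 Pool/Dataset1@snap2 | ssh repl-user@remote zfs receive -u Pool/backup/Dataset1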