Backup/Restore to create a ZFS volume suggestions?

Hopefully this is in the correct section, but it’s hardware related so …

I have an archive server with (now) 5x8TB drives in it that are not RAID in any way. I would like to move all of the data into a ZFS volume, but I can’t figure out a cheap way to clone the data so that I can create a volume from the disks. Current total data usage: 23TB, disk #5 is empty at the moment.

The chassis will only hold 3 more drives, so just buying duplicate drives isn’t really a simple option. I also can’t afford to just buy more drives of sufficient capacity.

So my question is: Anyone have any suggestions on how to back up 23TB of data temporarily until the volume can be created?

  • Internet speed availability: 10Mbps Up, 50 Mbps Down (Cloud is not really an option)
  • Network card: 1 Gbps (so a network copy is going to be slow; it's a long-term archive server, so I've never needed high speeds, and I don't care if cloning takes days, it's on a big UPS)
  • I will need to do something to eventually increase the volume size as I archive more and move data, so this process may need to be repeated every year or so.
  • “live” server has ~10TB @60% full, so no real help there

Am I just screwed because I don’t have the $$$ to do it correctly?


Are they separate filesystems? There is a way using your existing disks, but it’s risky, messy and prone to mistakes, and an entire disk will be unavailable for a period of time while it is in progress.

  1. So let's say you have those 5 disks, one large partition on each, each with its own filesystem, and 6TB of files on each of disks 1/2/3/4 with 2TB free on each (you might need to rebalance the files across the disks to make this happen). Disk 5 has one empty partition spanning the whole disk.

  2. You shrink the filesystems on disks 1-4 as far as possible, and create a new partition in the freed space at the end of each disk. Create a ZFS pool with a RAIDZ1 vdev using the 4 new partitions plus the existing (not yet used) partition on disk 5. The usable capacity of the vdev will be 4x the smallest member partition, so with a perfectly balanced 2TB freed per disk, about 8TB.

  3. Move files from the existing filesystems on disks 1-4 into the pool, keeping the free space on disks 1-4 as balanced as possible (this reduces the total number of rounds).

  4. Shrink the filesystems on disks 1-4 again, and shrink the partitions for those filesystems to match.

  5. Export the ZFS pool, and move the vdev partitions on disks 1-4 so that each one starts immediately after the end of its shrunk filesystem partition.

  6. Import the ZFS pool and grow the vdev (zpool set autoexpand=on pool, then zpool online -e pool partition for each moved partition). Now you have more space in the pool and can repeat steps 3-6 until all the files are moved, after which the original partitions can be deleted.
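To make the rounds concrete, here's a rough sketch of steps 2/3/6 as shell commands. Everything in it is an assumption: device names (/dev/sda-/dev/sde), partition numbers, ext4 as the existing filesystem, and the pool name "archive". It defaults to DRY_RUN=1 so it only prints what it would do; don't run it for real until every device name is verified and you've accepted the risk.

```shell
#!/bin/sh
# Illustrative sketch only -- device names, sizes, and ext4 are assumptions.
# DRY_RUN=1 just prints each command instead of running it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Step 2: shrink each existing filesystem and partition, then add a new
# partition in the freed space at the end of the disk.
for d in sda sdb sdc sdd; do
  run e2fsck -f "/dev/${d}1"                  # required before shrinking ext4
  run resize2fs "/dev/${d}1" 6T               # shrink the filesystem
  run parted "/dev/$d" resizepart 1 6TiB      # shrink the partition to match
  run parted "/dev/$d" mkpart zfs 6TiB 100%   # new partition -> /dev/${d}2
done

# RAIDZ1 vdev from the four new partitions plus the empty disk 5.
run zpool create archive raidz1 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde1

# Step 3: move a batch of files into the pool, keeping disks 1-4 balanced.
run rsync -a --remove-source-files /mnt/disk1/stuff/ /archive/stuff/

# Steps 4-6: after shrinking again and sliding the vdev partitions left
# (parted/sfdisk -- the risky part), re-import and expand each member.
run zpool export archive
run zpool import archive
run zpool set autoexpand=on archive
run zpool online -e archive /dev/sda2   # repeat for each moved partition
```

Each pass through steps 3-6 frees more space on the original filesystems and grows the pool by the same amount, so the number of rounds depends on how evenly you can keep the four disks drained.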

If you’re using LVM2 on those disks already, you can create the ZFS pool on LVs, then you don’t need to move the ZFS partitions during the process. It’d result in a less “clean” setup, as you’d then have ZFS-on-LVM2 forever, but performance shouldn’t be impacted much.
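If you do have LVM2 underneath, the per-round shuffle collapses to LV resizes. A sketch, with made-up VG/LV names (vg1/data1 etc., one VG per disk) and the same dry-run guard as above:

```shell
#!/bin/sh
# LVM2 variant sketch -- VG/LV names and sizes are illustrative assumptions.
# DRY_RUN=1 just prints each command instead of running it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Shrink the existing filesystem and its LV in one step (fsadm handles the
# filesystem resize for ext4), then carve a ZFS LV out of the freed space.
# Repeat per disk/VG:
run lvreduce --resizefs -L 6T vg1/data1
run lvcreate -l 100%FREE -n zfs1 vg1

# Pool on the four new LVs plus the empty disk 5:
run zpool create archive raidz1 \
    /dev/vg1/zfs1 /dev/vg2/zfs1 /dev/vg3/zfs1 /dev/vg4/zfs1 /dev/sde1

# Later rounds: shrink the data LVs further, grow the ZFS LVs in place, and
# expand -- no pool export or partition moves needed:
run lvextend -l +100%FREE vg1/zfs1
run zpool online -e archive /dev/vg1/zfs1
```

The trade-off is as stated above: ZFS-on-LVM2 forever, in exchange for skipping the export/move-partition step on every round.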