Migrating an existing ZFS pool to dual actuator Exos drives?

I was looking to upgrade my home NAS from 4TB to 12TB IronWolf drives, but realized some of the refreshed dual-actuator Exos drives are cheaper for more capacity, which is pretty wild! I need to look at power requirements, but I think it's a good option. I watched the vids and read the article here (How to ZFS on Dual-Actuator Mach2 drives from Seagate without Worry), but I have an existing zpool whose drives I want to swap out and over to the Exos drives.

I have an existing RAID10 of 4x 4TB drives:

  pool: storage
 state: ONLINE
  scan: resilvered 2.60T in 06:33:10 with 0 errors on Fri Mar 22 01:14:50 2024
config:

	NAME                        STATE     READ WRITE CKSUM
	storage                     ONLINE       0     0     0
	  mirror-0                  ONLINE       0     0     0
	    wwn-0x5000c5009c706d11  ONLINE       0     0     0
	    wwn-0x5000c500799d5553  ONLINE       0     0     0
	  mirror-1                  ONLINE       0     0     0
	    wwn-0x5000c500799d3e43  ONLINE       0     0     0
	    wwn-0x5000c500799d0a7b  ONLINE       0     0     0
	spares
	  sdf                     

In the two-drive scenario, I think I would add one of the Exos drives, partition it, swap my existing spare (sdf) for one of the partitions, then manually fail a drive on a mirror. Once it's resilvered, I would do the other mirror, then follow the same procedure for the second Exos drive.
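Here's roughly what I'm picturing for the partitioning step, assuming (per the linked article) that the first half of the LBA range maps to the first actuator on the SATA Mach.2 models; the device name /dev/sdg and the partition names are hypothetical:

# split the new dual-actuator drive at the 50% LBA mark so each
# GPT partition lines up with one actuator (device name is made up)
parted -s /dev/sdg mklabel gpt
parted -s /dev/sdg mkpart actuator1 0% 50%
parted -s /dev/sdg mkpart actuator2 50% 100%

One thing I've since realized: zpool replace resilvers onto the new device while the old one stays attached, so I may not need to manually fail anything, and the mirror keeps its redundancy during the swap.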

But what if I wanted to add all 4? I would guess I'd add 2 more mirrors? Once I do, can I rebalance existing data so I get the striping benefits? The original goal was to do this one by one, but I'm also looking at getting a SAS card. I already have one, but it's external-only (for my used LTO3 tape drive, because I <3 tapes). I see LSI makes a combo card with 4 external and 4 internal ports, which would let me add drives while juggling the whole thing, though there would be no room for a hot spare if I went with 4 active drives as in my current setup.
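For the four-drive route, I think the adds would look something like this (the wwn-NEW* names are placeholders). The catch, as I understand it, is that ZFS only stripes new writes across added vdevs; existing data spreads out only if you rewrite it:

# add two more mirror vdevs; new writes will stripe across all four
zpool add storage mirror /dev/disk/by-id/wwn-NEW1 /dev/disk/by-id/wwn-NEW2
zpool add storage mirror /dev/disk/by-id/wwn-NEW3 /dev/disk/by-id/wwn-NEW4

# one way to rebalance existing data is to rewrite it, e.g. send a
# dataset to a new name and swap afterwards (dataset name hypothetical)
zfs snapshot storage/data@rebalance
zfs send storage/data@rebalance | zfs receive storage/data-new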

Secondarily, would it be wise to consider some sort of raidz? I've always preferred RAID10 / mirror-stripe solutions, as I still have battle scars from trying to support RDBMSs on horrible RAID5 arrays.

I had to bust out a spreadsheet to figure this out, but I think I was on the right track for a 4-drive (dual-actuator) RAID10 equivalent, and came up with (numbering is drive-actuator):

Stripe 1: 1-1, 1-2, 2-1, 2-2
Stripe 2: 3-1, 3-2, 4-1, 4-2

Then mirroring stripes 1 and 2. In ZFS I can only find cases where you stripe mirrors (not mirror stripes), so I think it would actually be more like the layout below; see the zpool sketch after the list:

Mirror 1: 1-1, 2-1
Mirror 2: 1-2, 2-2
Mirror 3: 3-1, 4-1
Mirror 4: 3-2, 4-2
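In zpool terms that's just four mirror vdevs in one pool. A hypothetical creation command, using dN-aM as a stand-in for drive N, actuator M (a partition on SATA, or a LUN on SAS):

# hypothetical layout: no mirror contains two halves of the same drive
zpool create storage \
  mirror d1-a1 d2-a1 \
  mirror d1-a2 d2-a2 \
  mirror d3-a1 d4-a1 \
  mirror d3-a2 d4-a2

The key property is that no mirror ever contains both actuators of the same physical drive, so a single drive failure degrades two mirrors but breaks neither.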

So if I lost drive 1, mirrors 1 and 2 would be affected, which brings up one potential risk: the latest Seagate Exos generation doesn't have a dual-actuator version. I assume they're coming, but replacing a failed dual-actuator drive with a non-dual-actuator one would require partitioning it (or adding 2 drives) to work around the mismatch.

Where are you sourcing your drives? I need to add some storage to my TrueNAS ZFS install, and these dual-actuator things look nice!

They have a number of refurbished ones on Newegg for a nice price. When I last looked, they had the larger capacities in SATA and 12TB SAS ones. I opted for the 12TB since it was a good opportunity to level up my SAS game. The SAS versions seem to make things a bit easier, as the single drive is presented as two devices (no need to partition as you apparently have to with the SATA ones).

As far as the ZFS migration, mission accomplished! Here’s where things are now:

  pool: storage
 state: ONLINE
  scan: resilvered 2.75T in 10:02:30 with 0 errors on Thu Apr  4 08:05:08 2024
config:

	NAME                                        STATE     READ WRITE CKSUM
	storage                                     ONLINE       0     0     0
	  mirror-0                                  ONLINE       0     0     0
	    wwn-0x6000c500da92f6d30000000000000000  ONLINE       0     0     0
	    wwn-0x6000c500da939d970000000000000000  ONLINE       0     0     0
	  mirror-1                                  ONLINE       0     0     0
	    wwn-0x6000c500da92f6d30001000000000000  ONLINE       0     0     0
	    wwn-0x6000c500da939d970001000000000000  ONLINE       0     0     0

One nice thing I hadn't thought of: because the WWNs have that single 1 for the second actuator, it stands out and helps avoid oopsies, and it makes things a bit easier to organize. mirror-0 ended up being the first actuators of both drives and mirror-1 the second. If/when I add another 2 drives, I'll likely follow the same approach. The WWNs will also help when I need to swap a failed drive, noting that a single drive failure can affect both mirrors.
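One quick way to see that pattern at a glance (the grep string just matches the vendor prefix from the status output above):

# list both actuator LUNs per drive; the 1 in the middle of the WWN
# marks the second actuator's LUN
ls -l /dev/disk/by-id/ | grep wwn-0x6000c500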

To convert over to these, I did a number of zpool replace operations, one at a time. Since I already had 4 drives, that made things easy: from ZFS's view, it still sees 4 devices (two LUNs per physical drive).
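For anyone following along, each swap is a one-for-one replace, and you wait out the resilver before starting the next. Roughly like this, using WWNs from the two status outputs above (the exact old-to-new pairing shown is illustrative):

# swap one old 4TB member for the first-actuator LUN of a new Exos
zpool replace storage wwn-0x5000c5009c706d11 /dev/disk/by-id/wwn-0x6000c500da92f6d30000000000000000
# watch the resilver; repeat for the remaining three devices
zpool status storage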

The last resilver finished sometime this morning, so I haven't had a chance to test things beyond normal use, but so far so good! I've got more space and 2 fewer drives, so a net power savings. Or, if I get 2 more, I can have way more space as well.
