Hey everyone, I built out a TrueNAS system based off a Supermicro rackmount server over a year ago. Things have been going well and I have been using it daily since.
Specs:
Dual Xeon E5-2650 v2
64GB of memory
1× 6TB vdev/pool for bulk storage and files
1× 36TB vdev/pool for media
1× 250GB vdev/pool for apps / VMs
Recently I have found myself in need of more storage, so I did what any rational person would do and purchased six additional 12TB drives. Now I would like to note that I figured those 36TBs would last me for years and by that point I would be looking at a different solution, but here we are, needing more.
So with those six additional drives, that would bring me to about 108TB. I figured at that point I may as well have one or two redundant drives. So I received the drives, SMART-tested them, and then zeroed them out to be sure. Now I am at the point of adding a new vdev and realized that even if I add the new vdev to the pool, the original vdev will not have redundancy. So the pool will be larger, but if I lose any of the original drives, I will be S.O.L.
My question to the community is what, if anything can I do with the resources I have now? Ideally I would like to not lose any data and be able to extend the vdev, but I understand that is not possible.
also, major kudos to the web/forum developers. I accidentally closed the tab while I was writing this and the draft was saved. You are absolute legends.
My long-term migration plan for these kinds of issues is to build a new pool with those new disks using at least RAIDZ redundancy, as you mentioned. From there you can use TrueNAS Scale's Replication Task to push a full copy of your existing 36TB pool to the new one and verify it. Once your backups are in place, you could then wipe your original pool and use those disks however you like. I'm not a ZFS pro, but this is my general plan for capacity upgrades when I'm adding not just capacity, but also new redundancy.
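If you prefer the command line over the Replication Task UI, the same full copy can be sketched with `zfs send`/`zfs recv`. The pool names `media` and `newmedia` below are assumptions; substitute your own:

```shell
# Snapshot the whole old pool recursively, then send it to the new pool.
# -R preserves child datasets, properties, and snapshots.
zfs snapshot -r media@migrate
zfs send -R media@migrate | zfs recv -F newmedia/media

# Verify before destroying anything: compare used space and run a scrub.
zfs list -r newmedia
zpool scrub newmedia
```

This is just a sketch of what the Replication Task does underneath; the UI version also handles scheduling and incremental sends for you.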
I’m sure there are others here with more experience that might have more insight, but that’s my $0.02. Best of luck!
You don't need a second NAS to do that. You can just set up a second pool with redundancy and move the data there. After that, destroy the first pool and attach the old drives to the new pool.
I was aware when I set this up that I would have no redundancy, and was okay with that until I actually got to the point of using all that storage. Now that I am here, I am paranoid and am no longer okay with losing the data.
I don’t know if that is a good or bad “hwhoa”
Admittedly, the data is not $10,000-important to me.
I can make that happen, my buddy has a server similar to mine still in box that I could buy off him.
As of right now, 26.92 TiB / 32.38 TiB
If I do that, I may as well do a full rebuild and just max out all 12 bays with 12TB drives. Definitely an option, just didn’t really want to spend that money right now.
@MikeGrok Mike - Thanks for your input. I guess my understanding of vdevs and ZFS in general is all wrong. Assuming there are nine drives at play here, three in use and six new, how would I go about this?
I guess I do not understand. If I create another vdev or two, wouldn’t they only add redundancy for the data in that pool?
Yeah, so why not move the data from Media to the new pool, then put the 3 media drives in a new vdev? The move might take a bit, but that's just free real estate.
Although one thing to keep in mind is that when an entire vdev fails, the pool is dead.
Given those are 3 drives, your only option with redundancy is RAID-Z1, i.e. one drive's worth of capacity for parity. If one drive fails you can replace it; if a second drive fails during the resilver, the vdev and therefore the pool are dead.
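For reference, replacing a failed disk in a RAID-Z vdev looks roughly like this from the shell (pool and device names are placeholders; on TrueNAS you would normally do this through the UI):

```shell
# See which disk is faulted.
zpool status media

# Swap the bad disk for the new one; the resilver starts automatically.
zpool replace media /dev/sdc /dev/sdj

# Watch resilver progress -- the vdev is vulnerable until it finishes.
zpool status -v media
```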
Hmm, so while it would work, it would not be ideal, right?
Perhaps I should go with @TryTwiceMedia's and @FrenziedManbeast's recommendations and get a new server and build new vdevs. At that point I could purchase three more 12TB drives and have a really decent chunk of storage!
If it were me, I’d add another three drive vdev to the media pool, and then use the other three drives to make a new share pool. Sell the 6TB drive, and at some point buy two more 12TB drives as hot spares for the share and media pools. That gives you more capacity, more redundancy, and still leaves one bay empty for swapping out replacements.
Or, since you’re not short on space in the share pool, use two 12TB drives in a mirror to make a new share pool, and use the third as your hot spare for media.
I think we need to clarify what vdevs and pools are. As was said earlier, redundancy in ZFS happens at the vdev level. A vdev can be a single disk, a mirror of disks, or raidz1, -z2, or -z3.
Pools are built from vdevs. You can have multiple pools on the same machine (so no need to build a new server!). A pool consists of one or more vdev(s). The vdevs in a pool are striped together, so if any vdev in a pool fails, you lose the pool.
So, for example, if you are okay with a single drive of redundancy, you could use your new drives to create a new pool that consists of two 3-disk raidz1 vdevs. This would give you 4 drives worth of storage (48 TB) and you could lose up to one drive from each vdev without data loss. Lose two drives from the same vdev and the entire pool is gone.
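As a sketch, creating such a pool from the shell would look like this (pool name and device names are hypothetical; the TrueNAS Pool Manager does the same thing underneath):

```shell
# Two 3-disk raidz1 vdevs, striped together into one pool.
# Substitute your six new 12TB disks for the placeholder device names.
zpool create newmedia \
    raidz1 /dev/sdd /dev/sde /dev/sdf \
    raidz1 /dev/sdg /dev/sdh /dev/sdi
```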
Once you have your new 48 TB pool, you could migrate the data from your Media pool to the new pool. Once that is done (and you have scrubbed the new pool and given it some time to make sure all is right) you can destroy your old media pool and use its drives to create another 3-drive raidz1 vdev, and add that to the new pool. Now you have all nine 12 TB drives in the same new pool and six drives worth of storage space (72 TB), with all your old data on it!
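The whole sequence, sketched on the command line with assumed pool and device names (the destroy step is irreversible, so verify the copy first):

```shell
# 1. Copy everything from the old pool: recursive snapshot + full send.
zfs snapshot -r media@final
zfs send -R media@final | zfs recv -F newmedia/media

# 2. Scrub the new pool and check the results before trusting it.
zpool scrub newmedia
zpool status newmedia

# 3. Only after verification: destroy the old pool and reuse its three
#    drives as a third raidz1 vdev in the new pool.
zpool destroy media
zpool add newmedia raidz1 /dev/sda /dev/sdb /dev/sdc
```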
Edit: Another, perhaps better option is to create a new pool using your six new drives consisting of a single 6-drive raidz2 vdev. Less performance, but more redundancy. Migrate your data over from the old Media pool, buy three more 12 TB drives, destroy the old Media pool, and use the six now free drives to create another 6-drive raidz2 vdev to add to the new pool. This gives you 8 drives worth of storage (96 TB) and bandwidth, tolerance for losing any two drives at the same time, and the IOPS of two drives.
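Sketched the same way (again with hypothetical device names), the two-phase raidz2 layout would be:

```shell
# Phase 1: a single 6-disk raidz2 vdev from the six new drives.
zpool create newmedia raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

# Phase 2: after migrating the data, destroying the old Media pool, and
# buying three more 12TB drives, add a second 6-disk raidz2 vdev.
zpool add newmedia raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdj /dev/sdk /dev/sdl
```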
I should also mention the third level in the ZFS hierarchy: datasets. You have vdevs, pools, and datasets. You create a pool from one or more vdev(s), and on this pool you then store your data in datasets. One default dataset (with the same name as the pool) is created automatically on pool creation. Data migration and snapshotting are done per dataset.
So it’s very possible to migrate the data from your Share pool to your new redundant pool as well, once the latter is created. In general it’s a good idea to not store data in the default “root” dataset on a pool, since that makes it more complicated to handle snapshots and migrations separately per dataset. You can also set different options for each dataset depending on the kind of data stored (which I won’t go into now since it’s a huge topic).
Edit: The data migrated from your two current pools to the new redundant one will end up as separate datasets.
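A minimal sketch of setting up per-purpose datasets, with per-dataset options (names and option choices here are just examples, not recommendations for your workload):

```shell
# One dataset per kind of data instead of dumping into the pool root.
zfs create newmedia/media
zfs create newmedia/share

# Options can differ per dataset, for example:
zfs set compression=zstd newmedia/share   # heavier compression for files
zfs set recordsize=1M newmedia/media      # large records suit big media files
```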
For best practice, all vdevs in a pool should have the same level of redundancy. You should never add a vdev without redundancy to a pool that has redundancy; once a pool contains a vdev without redundancy, the entire pool is not redundant.
I did not recommend getting a new server - re-read my post a few times. I said you should build a new pool with the new disks, then push a replication to it from your current pool that lacks redundancy in its vdev(s). Once the newly replicated data in the newly built pool is also backed up, you could then wipe the original disks and add them to whatever pool(s) you desire as one or more new vdevs.
You could do mirrored vdevs with the 12TB disks, and once everything is copied into the appropriate dataset, add a pair of 12TB drives from the old media pool as another vdev. It's also usually easier to add 2 disks at a time.
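Growing a pool of mirrors two disks at a time would look roughly like this (pool and device names are placeholders):

```shell
# Start with one mirrored pair...
zpool create newshare mirror /dev/sdd /dev/sde

# ...and grow the pool a pair at a time once data is migrated over.
zpool add newshare mirror /dev/sdf /dev/sdg
```

Each added mirror vdev stripes with the existing ones, so capacity and IOPS grow with every pair.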
Like everyone else said, you should not do single-disk vdevs/pools, and datasets are probably a better way to organize your data.
Also, I would use one SSD for the boot drive, and make sure you back it up somewhere. Also make sure that you move the logging directory somewhere on your data drive, as that is the only thing that will fill it up. You can do this with a mount point, so that if the data drive is unavailable you can still read your logs.
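On TrueNAS SCALE (Debian-based), relocating logs with a mount point could be sketched with a bind mount like this; the dataset name `tank/syslog` is hypothetical, and on CORE (FreeBSD) you would use nullfs instead:

```shell
# Create a dataset on the data pool to hold the logs.
zfs create tank/syslog

# Bind-mount it over /var/log so logs land on the data pool.
# To make it persistent, the equivalent /etc/fstab line would be:
#   /mnt/tank/syslog  /var/log  none  rw,bind  0  0
mount --bind /mnt/tank/syslog /var/log
```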
One SSD as a combined SLOG and L2ARC, with 16GB partitioned for the SLOG.
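Attaching the two partitions of such an SSD could be sketched like this, assuming you have already partitioned it (16GB for the SLOG, the rest for L2ARC) with your partitioning tool of choice; the device names are placeholders:

```shell
# 16GB partition as the SLOG (separate ZFS intent log).
zpool add media log /dev/nvme0n1p1

# Remaining space as L2ARC read cache.
zpool add media cache /dev/nvme0n1p2
```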
what you do with your other drives really depends on what you need, and you didn’t tell us that.
One of the problems with RAID-Z1 arrays is that when a drive dies and you swap it, the array starts rebuilding, and often a second drive dies during the resilver; now you have lost your whole pool. With RAID-Z2 you have more wiggle room, but you lose more space to redundancy. Sometimes a third drive dies, though I have only seen that twice in the last 30 years. In the first case it took a week to restore the data off DAT tapes. In the second case it was a 17-drive RAID-Z3 that I was migrating to a different machine; instead of making it whole, I completed the data migration while running it in its degraded state.
Now that you know some of the options, it is up to you how safe you want to be.
Oh yeah, make sure you set up a backup task to back up the boot drive config to a dataset on the newMedia volume. It is OK for the boot drive to die; you should be able to restore it in half an hour.
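A minimal sketch of such a backup job, assuming the standard TrueNAS config database path and a hypothetical backup dataset (the dataset path is an assumption; the TrueNAS UI can also export the config for you):

```shell
# Copy the TrueNAS configuration database to a dataset on the data pool,
# keeping one dated copy per run. Suitable as a nightly cron job.
cp /data/freenas-v1.db "/mnt/newMedia/backups/config-$(date +%F).db"
```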