Best RAID type for random drive sizes in FreeNAS

What would be the best RAID configuration to get the best redundancy-to-capacity ratio when using odd-sized hard drives in a home-built NAS?

This could apply as a general answer for anyone in a similar situation, but I will give the details of my specific setup.

I am building a home media NAS using FreeNAS as the OS, and to start out I will be using a 2 TB, a 1.5 TB, and a 640 GB hard drive. For my situation, what would be the best configuration to allow for expanding the array, as well as for replacing the older drives with NAS-optimized drives later down the line? I would like to have as much storage space as possible while still keeping some redundancy, to account for the old drives potentially failing.

Have you gone to the FreeNAS site and read the 286-page manual? Some of the experts there have written really helpful guides. If I remember correctly, it's ill-advised to use a bunch of different-sized/different-brand drives with FreeNAS, but there is another system I saw on Tekzilla that does support that and is best suited for it. Sorry, I'm a newbie and can't remember the name, but search for "NAS" on Tekzilla on YouTube and you will find it. Hope that helps! I just got my own FreeNAS up and running today, and now I'm trying to learn Mythbuntu.


If you have to have redundancy, I would mirror the 2 TB and the 1.5 TB drives and forget about the 640 GB one. That gives you about 1.5 TB to begin with. When you go to upgrade, replace the 1.5 TB with a 2 TB first, and you'll be up to a 2 TB pool. At that point you can mirror the 640 GB and the 1.5 TB to add 640 GB to your pool, and data will stripe across the 2 TB mirror and the 640 GB mirror. From then on you can replace the drives in the smaller mirror one at a time to expand the available storage. Rinse, repeat.
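In ZFS terms, that plan might look roughly like the sketch below. The pool name `tank` and the `adaX` device names are just placeholders, and on FreeNAS you'd normally drive these steps from the web GUI; the commands are only meant to show the underlying operations.

```sh
# start with the 2 TB (ada0) and 1.5 TB (ada1) disks in a mirror (~1.5 TB usable)
zpool create tank mirror ada0 ada1

# let the pool grow automatically once every disk in a vdev has been upsized
zpool set autoexpand=on tank

# upgrade step: swap the 1.5 TB disk for a new 2 TB disk (ada3) and resilver;
# the mirror grows to ~2 TB once both members are 2 TB
zpool replace tank ada1 ada3

# reuse the freed 1.5 TB disk with the 640 GB disk (ada2) as a second mirror vdev;
# the pool now stripes across both mirrors, adding ~640 GB
zpool add tank mirror ada1 ada2
```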

With raidz configurations you would need to be adding at least 3 drives at a time, versus mirrors, which only require 2 drives to create a vdev. Pools stripe across vdevs, and vdevs can be added at any time to expand the pool, but a raidz vdev cannot be expanded by adding more drives to it. You would have to follow a complicated procedure to grow a raidz vdev, and it involves copying all the data from the existing vdev to a new one, which is not fun. You sacrifice more drive space using mirrors, but they are much less painful to expand.
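To make the contrast concrete, here is a rough sketch with raw zpool commands (pool and device names are made up):

```sh
# pool of mirrors: grow it two disks at a time by adding another mirror vdev
zpool add tank mirror ada4 ada5

# pool of raidz vdevs: the only way to grow it is to add another whole raidz
# vdev (3+ disks for raidz1); you can't bolt extra disks onto an existing one
zpool add bigpool raidz da0 da1 da2
```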

Once you run out of slots in your chassis, it may be possible to switch over to a single raidz2, depending on the number of drives you have in use. For example, say you have 4 drives in 2 mirrors and then get the last 2 drives that will fit in the chassis. You could split the mirrors and build a 6-drive raidz2 vdev in a new pool using the 2 new drives, the 2 drives split from the mirrors, and 2 temporary file-backed devices. Then you delete the 2 files; the pool stays online because raidz2 can tolerate 2 drive failures. Then you send the old pool's data to the new pool, destroy the old pool, and replace the 2 file-backed devices with the 2 drives from the old pool once the transfer is complete. Optionally, you can then rename the new pool to the name the old pool had. For simplicity this example assumes all the drives are the same size.
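A rough sketch of that migration with plain ZFS commands follows. All pool names, device names, and sizes are placeholders for this example, and you'd want a verified backup before attempting anything like it.

```sh
# two sparse files standing in for the missing disks (same size as the real ones)
truncate -s 2T /tmp/fake0 /tmp/fake1

# new 6-wide raidz2 from the 2 new disks, the 2 disks split out of the old
# mirrors, and the 2 placeholder files (-f because ZFS warns when mixing
# whole disks and files in one vdev)
zpool create -f newpool raidz2 ada4 ada5 ada2 ada3 /tmp/fake0 /tmp/fake1

# take the placeholders offline before any data lands on them, then delete them;
# the pool stays online because raidz2 tolerates two missing members
zpool offline newpool /tmp/fake0
zpool offline newpool /tmp/fake1
rm /tmp/fake0 /tmp/fake1

# replicate everything from the old pool, then retire it
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool
zpool destroy oldpool

# replace the placeholders with the two disks freed from the old pool
zpool replace newpool /tmp/fake0 ada0
zpool replace newpool /tmp/fake1 ada1

# optional: rename the new pool back to the old pool's name
zpool export newpool
zpool import newpool oldpool
```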

On the other hand, if you had 6 drives in 3 mirrors and got 2 more to fill the chassis, you couldn't pull that off. You'd have 3 drives still in use after splitting the mirrors, but you can only have 2 placeholder files for the redundant drives in raidz2, so you'd be short one drive. You could use raidz3, or you could just have a leftover drive doing nothing in this scenario.

So again, I recommend striping across mirrors, because, as you can see, expanding raidz vdevs is complicated. You do run a higher risk of losing the pool if both drives in a mirror fail, so it's ultimately a tradeoff you have to decide for yourself. But as far as ease of expansion goes, striping across mirrors is the way to go.

Well, the only array type you can really use is JBOD, which has no redundancy at all, and that is not ideal. If the disks are of different interface types (IDE/SATA), you will have to use SPAN (the same as JBOD, it just allows different types of disks).

If you were to run these in a RAID of any type, the usable size of each disk would be limited to the size of the smallest.
E.g. if you wanted all 3 disks in the array, they'd all be partitioned down to 640 GB to work (or 1.5 TB if you used only the two largest in a RAID 1).

BeyondRAID is about your only option for getting some redundancy out of a JBOD-style mix of drives (but I think a lot of capacity is still lost).

As deathadderpaul says, it is ill-advised to mix drives of different ages, capacities, and interfaces in an array, as recovering data can be complicated, costly, or impossible.

Personally, I'd run the disks independently and manually make sure there are 2 copies of the data across the drives. Move to RAID when you can actually get a controller and matching-size disks. (Dell 6i cards are bloody cheap and good, although they need a heatsink modification for non-server chassis.)