Frankenstein NAS

It is going to be loaded with a mishmash of hardware and drives, and it's going to be headless. Do I go with FreeNAS (no ZFS) or Ubuntu Server?

Well, I don’t think you have an option of not using ZFS in modern versions of FreeNAS. I’d still recommend FreeNAS. My FrankeNAS is doing just fine with hand-me-down hardware.

If you want something quick and dirty that will run on almost anything, then I would recommend

It's for the most part headless besides an initial setup, IIRC. If you want anything more advanced, then I would recommend going with FreeNAS like @Levitance said.


It kind of depends on what hardware you are going to mash into it… But that said if you got the time, here’s another vote for FreeNAS. It’s damned good and the new UI is due to drop soon. :grinning:

I have an Ivy Bridge i3 with 6 SATA3 ports and 8 GB of (non-ECC) RAM.

Just a file host/torrent box/Windows backup target.

Can you run FreeNAS with basic volumes (not ZFS)?

Probably if you want to get the storage setup at the command line. But at that point, there’s no real reason to have an appliance. Your hardware specs are up to snuff to run ZFS though. It’s not like it requires a whole lot of horsepower.

If you have a bunch of different-sized disks that won't work well with ZFS but you still want redundancy and pooling, then you might try btrfs. A btrfs mirror will just keep two copies of the data on different physical disks, so it doesn't matter if all the disks are different sizes like it would in a traditional RAID 1 mirror.
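As a rough sketch of that kind of mixed-size btrfs mirror (the device names here are placeholders, not drives from this thread):

```shell
# Hypothetical sketch: btrfs "raid1" across three different-sized disks.
# btrfs keeps two copies of each chunk on two different devices, so
# mismatched capacities are usable, unlike a traditional RAID 1 mirror.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Mount the pooled filesystem via any member device.
mount /dev/sdb /mnt/pool

# See how data and metadata are spread across the devices.
btrfs filesystem usage /mnt/pool
```

Note these are destructive admin commands run as root, shown only to illustrate the layout.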

You don't have to use ZFS on FreeNAS; you can use UFS or whatever the standard BSD file system is. But at that point you're probably better off using something like OpenMediaVault, which is just standard Debian with a NAS-focused web UI.

I have two paired 2 TB drives and 4 other mishmash drives. Redundancy is not that much of a priority. From what I understand, ZFS needs ECC RAM.

It doesn't need it any more than any other file system does. Which is to say, you don't need ECC memory for ZFS.


Then I'd still argue that a lot of the time it is better to pair the drives you have where possible and stick with ZFS.

My thought was to have a few different pools, one being the two 2 TB drives in RAID 0. Is it even worth ZFS'ing those?

My advice with ZFS: don't bother with the special modes, as they are a pain to try to add to or expand later. General consensus (as far as I've found) is to create paired RAID 1s, and then if you need more redundancy, RAID 1 the sets, or for performance, RAID 0 the sets. So for instance I have a 6-drive setup: 3 RAID 1 sets, RAID 0'd together. That way if you need to expand you can just drop in another set, or add to the sets as needed. You still get all the goodies of ZFS no matter what RAID mode you use, as it's all in the file system.

Also, ECC is not necessary, though it is nice for extra data "assurance". The main "gotcha" of ZFS is that it's very memory hungry, as all writes go to memory first and are then drained to the drives, so in high-IO cases you can burn through all your memory fast. Given your use cases, though, it should be fine.
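That 3×mirror layout might look something like this (device names are placeholders, not my actual drives):

```shell
# Hypothetical sketch: three mirrored pairs in one pool.
# ZFS stripes across all vdevs, so this behaves like RAID 10.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf

# To expand later, drop in another pair as a fourth mirror vdev.
zpool add tank mirror /dev/sdg /dev/sdh

zpool status tank
```

Again, destructive root-level commands, shown just for the shape of the layout.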

To get a few things straight.

ZFS stripes all VDEVs in a pool. Your VDEV contains the redundancy. If you add mirrored disks to the pool you will have striped mirrors.

ZFS can’t do mirrored stripes and I would not recommend these ever. A single disk failure will take out the full stripe, and you will have to rebuild it completely. Also it is bad for IO queuing.

If you create a zpool of single disks, it will be a de facto stripe without redundancy. You’ll get error checking and reporting. A single block error will corrupt a file, a disk failure probably means losing the pool. You can still snapshot the pool and send/receive the data, or benefit from transparent compression or clones.
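Even on a redundancy-free pool of single disks, those features still work; a sketch (pool, dataset, and host names are made up for illustration):

```shell
# Hypothetical sketch: a striped pool of single disks -- no redundancy,
# but you still get checksumming, compression, and snapshots.
zpool create scratch /dev/sdb /dev/sdc /dev/sdd
zfs create -o compression=lz4 scratch/media

# Snapshot the dataset and replicate it elsewhere with send/receive.
zfs snapshot scratch/media@nightly
zfs send scratch/media@nightly | ssh backuphost zfs receive backup/media
```

One bad disk still probably means losing the pool, so send/receive to another machine is what actually protects the data here.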
