NAS, what do you think?

So this is my first post, so please excuse any mishaps. In this post I will go over everything I have done so far; the specs for the system are at the bottom, and my goals for the system are there as well.
This system is meant to be a NAS for a few web servers plus my ZFS storage. I'd love to have all of my VMs and Docker containers stored safely on the ZFS pool. The biggest thing I want out of this system: with my current setup I don't think my data is very safe, so data safety is number one.

What I have done-
So when I bought the server I loaded ESXi onto it to start and made a few VMs, but I also had a second server that was literally just a RAID card and 2 cores holding all of my data. Recently I decided to combine both servers, buying another 2 TB drive and going ZFS through a VM on ESXi…
That method is working fine, but my issues are as follows. ESXi is highly volatile: if I lose power on this server I'll usually end up with a pink screen of death. ESXi also doesn't allow direct access to the drives; the closest you can get is setting them up as RDM drives and then passing them to the VM. My biggest issue with the system in its current configuration is that if the SSD with the operating system dies, the VM definitions and the drive mapping information die with it. I have heard that without that info you can still get access to the drives.
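(For what it's worth, ZFS pools are self-describing: the pool metadata lives on the data disks themselves, so losing the OS drive shouldn't lose the pool. A minimal sketch on a fresh install - the pool name `tank` here is hypothetical:)

```shell
# Scan attached disks for importable pools
zpool import

# Import the pool found on the disks ("tank" is a hypothetical pool name);
# -f forces the import if the pool was last used on the old host
zpool import -f tank

# Verify the pool and its datasets came back
zpool status tank
zfs list -r tank
```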
I then tried FreeNAS, and I have so many complaints about the software not working on my particular system.
So for now, until I find a better solution, I am on ESXi with Ubuntu Server and all my other VMs.

What I think might work-
So I was looking at Ubuntu Server because it has been very good to me so far, but then I realized that I would need a graphical interface for VMs (to be able to configure them). So I was looking at desktop Ubuntu, and maybe I'd go through and remove all the bloatware I don't want (though that would be a lot of work). Then I'd go through and just install ZFS and my Docker programs. For VMs I was looking at Xen for Linux VMs and Oracle VirtualBox for Windows. This is just what I have thought of because of my 'What I have done' section.
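(For reference, ZFS on Ubuntu Server is just a package install. A rough sketch of the storage side, assuming three hypothetical disks `/dev/sdb` through `/dev/sdd` and an example pool name of `tank`:)

```shell
# Install ZFS support (Ubuntu ships it in zfsutils-linux)
sudo apt update
sudo apt install -y zfsutils-linux

# Create a raidz1 pool from the three 2 TB drives
# (device names and pool name are examples only)
sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# Carve out a dataset for Docker data
sudo zfs create -o mountpoint=/var/lib/docker tank/docker
```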

My goals-
Be able to run VMs (Windows and Linux), Docker containers for caching Steam downloads, a ZFS storage pool, machine-learning capability (CUDA, etc.), a GitHub setup, and linking multiple ZFS pools into one giant pool.
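(One note on that last goal: as far as I know, ZFS doesn't link separate pools into one namespace; the usual approach is to grow a single pool by adding vdevs to it. A hedged sketch, with hypothetical device names:)

```shell
# Instead of two pools, extend an existing pool with another vdev;
# capacity grows and new data is striped across the vdevs
sudo zpool add tank mirror /dev/sde /dev/sdf

# Shows both vdevs sitting under the one pool
sudo zpool list -v tank
```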

3x 2 TB Seagate drives
1x 120 GB SSD
1x 230 GB general-purpose drive (might use as a Docker drive or something)
2x 4-core Xeon X5450s
32 GB of DDR2
2x GTX 980s (for machine learning)

Please give me any insights or potential spots for failure. Thank You!

Give Unraid a try. A single license is cheap, and worth it in my opinion.

I had used FreeNAS for quite a while, but I kept running into hiccups, particularly around the time when I’d update the server.

I switched over to Unraid and haven’t looked back - it’s simpler, clean, does everything FreeNAS does, and I haven’t had a single problem with the server since - just runs (knock on wood haha!).

You say RAID card but want to use ZFS. What RAID card are you talking about, specifically? The reason I ask is that ZFS wants direct access to the drives, meaning RAID cards are a no-no. HBAs are what you want for ZFS, and that could be why FreeNAS was giving you issues.

I’ve run FreeNAS as a VM, passing through a Dell PERC H200 flashed to IT mode to act as an HBA, and had no issues with drive performance. Once FreeNAS was set up, I backed up the config, so that if the FreeNAS VM needed to be recovered, I could just reinstall FreeNAS on my local VMFS datastore, reload the config, and it’s as good as new again.
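(You can grab the config through the web UI, or pull the config database directly over SSH - a one-liner sketch; the hostname is a placeholder, and the path matches FreeNAS 9.x/11.x:)

```shell
# Copy the FreeNAS configuration database off the box,
# stamped with today's date
scp root@freenas.local:/data/freenas-v1.db ./freenas-config-$(date +%F).db
```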

Sorry, to clarify: that was part of an old Ivy Bridge board and a RAID card I had lying around, running RAID 1; there was no ZFS part to it. I then went to FreeNAS with the 3 drives and that did not work; it gave me headaches, and I didn’t get the impression iXsystems (whoever owns it) knew what they were doing. Now I am on ESXi trying to find a better solution. Hope that helps.

I have tried Unraid a bit. I didn’t like the fact that I had to pay based on drive count, but it did work as advertised. I might look into that. Does it allow you to link multiple storage pools so they show up as one big one?

You know, I’m honestly not sure - I only have 4 HDDs in there in a single pool… someone else might be able to answer that for you…

As for payment, I just bought one license - I don’t think you need more than that no matter how many drives you have, but I could be wrong, it’s been years now since I set this up.

Ah, I’ll have a new server coming to me in December, a 16-core beast, but my old server still works, so buying 2 keys wouldn’t be horrible - though at $200 in total, that could be a new drive. Plus I need them to be able to link the 2 ZFS pools.

Ah, I see - the base license is what I got, and it was ~$60, so not as big a deal for me since I don’t run a ton of drives.

Yea, I wish they didn’t have that; that’s like the one part that makes me iffy on them. But when I was trying to do an all-in-one gaming/NAS box, it did work pretty well. I also don’t know how open source they are - not a requirement, but it would be nice.

If you want ZFS and virtualization and FreeNAS doesn’t work for you, then maybe look at Proxmox? There are some good tutorials on it that I used when trying it out.

Given that you are worried about losing the system drive, it seems like you should be mirroring it too, or at least have a working backup system.

Yea, that worry only stems from ESXi though. From my knowledge, the only way to do RAID on ESXi is to do hardware RAID. And thanks, I’ll check it out!

I’m still confused about how your 3 drives were being connected and how it didn’t work.

While it’s janky, it is possible to run FreeNAS in a VM: pass through the hard drives with Raw Device Mappings (as long as it’s an HBA, or a SATA chipset set to AHCI and not RAID mode on the motherboard), create a zvol, share it through iSCSI, and then add it back to ESXi. This makes reboots a pain in the ass, though, as you need to start the FreeNAS VM, rescan the iSCSI settings, and rescan for the datastore to show back up. I’m sure there’s a way to set this up with a startup script, but I just wasn’t at a point where I could deep dive into it.
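(The reboot dance could in principle be scripted from the ESXi shell - a rough sketch only; the VM ID and the sleep time are hypothetical and would need tuning:)

```shell
# Power on the FreeNAS VM
# (find its ID with: vim-cmd vmsvc/getallvms)
vim-cmd vmsvc/power.on 12

# Give FreeNAS time to boot and export the iSCSI zvol
sleep 180

# Rescan all storage adapters so the iSCSI datastore reappears
esxcli storage core adapter rescan --all
```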

I could be mistaken, but doesn’t ZFS not like being in a VM?

It does not like not having direct access to the disks.

So with a standard VM using virtual disk(s), don’t use ZFS.

It is OK to use ZFS in a VM if you pass through an entire SATA/SAS controller (along with its attached disks) or pass through individual disks (called raw device mapping in ESXi).
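(For reference, a physical-mode RDM in ESXi is created with vmkfstools; a sketch where the disk identifier and datastore paths are placeholders:)

```shell
# List physical disks to find the device identifier
ls /vmfs/devices/disks/

# Create a physical-compatibility RDM pointer on a datastore
# (-z = physical mode; naa.xxxx is a placeholder for the real ID)
vmkfstools -z /vmfs/devices/disks/naa.xxxx \
  /vmfs/volumes/datastore1/freenas/freenas_rdm.vmdk
```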


Exactly this


May want to check your link. Did you mean ?

Thanks for the correction @Tavor, spelling glitch on my part. Web search “proxmox” - probably the best way to find the tutorial, but we seem to be trending off-topic a little.