Anyone else think FreeNAS is going downhill? Alternatives?

I’ve been a big fan of FreeNAS since I learned about how cool ZFS is back in the TekSyndicate days. I’ve used it for about 2 years now, and it was pretty good at first. The Corral version seemed nice but that turned into a nightmare pretty quick. Now (it seems to me at least) it’s just a cobbled-together unstable mess. The old UI is very buggy for me (randomly it will just refuse to load any pages, and I can’t do anything till I restart the whole NAS) and the new UI seems like it had potential, but right now feels like it was tossed together in 2 days.

Am I the only one that’s having these sorts of issues? Should I migrate to a more unstable build of FreeNAS 11? Is there a better alternative (looked at NAS4Free and Rockstor but haven’t tried either)? Other than the Samba integration, it’s been more pleasant to deal with ZFS purely from the command line on my Proxmox machine than it has been to try to wrestle with FreeNAS. But maybe it’s just me; I’m curious to hear everyone else’s thoughts on it.

1 Like

X2, not too thrilled with FreeNAS. That UI they ditched was actually nice.
Linux forums and community help have come a long way with regard to helping newbs, but FreeBSD-based builds like FreeNAS and pfSense remind me of the more elitist, toxic forums of Linux past. They're not insanely toxic, but enough so that I wish there were more mature alternatives.

I tried Rockstor before FreeNAS and really liked the GUI, but IMO at that time Btrfs was not ready and my whole test RAID 5 shat the bed. ZFS is robust, and I ran a headless Ubuntu with ZFS datasets, but I wanted a GUI, so I imported them into a FreeNAS build. I regret it. I wish there were a GUI for Linux to make builds that rival FreeNAS and pfSense, simply for documentation's and forum support's sake.

2 Likes

I’ve only used ZFS on Ubuntu Server, and other than missing some of the features the BSD version has, it’s been pretty good. There is a NAS web-UI frontend type thing for illumos which might be worth looking at; I can’t remember what it’s called, but it runs on top of OmniOS.

But for me I prefer to just configure it through the command line.

I’ve been using NAS4Free, a fork of FreeNAS. Very stable; the web interface has no glitches or other drama.

1 Like

I was on the fence between FreeNAS and Xpenology when I shrunk my servers. I ended up going with Xpenology, and no problems so far.

1 Like

I have been running FreeNAS for a few years now and am pretty happy with it.
I fell into the trap of getting the ASRock board with umpteen SATA ports, ECC RAM, and all that. Admittedly, it seems ECC RAM is not necessary to maintain data integrity, and I only run 5 HDDs + an SSD. But it has worked well and done pretty much everything I needed it to do.

I don’t have too many issues with the interface. I only do a few things with it, such as managing jails and data pools. I don’t use the plugins much, if at all; I found it easier to keep things up to date by installing software manually. It doesn’t look like a modern metro interface, but it works.

I have had issues trying to run VMs in FreeNAS 11. That doesn’t seem too well streamlined.

As for whether FreeNAS is going downhill, I think they fell down the hill when Corral went wheels up. Those wouldn’t have been good times. But they seem to be back on track, fixing issues and working out new releases.

I don’t know if there is anything nicer out there, and I don’t want to uproot the system to try new stuff. I could run some VMs on my laptop and try them out, perhaps, but those aren’t real-world testing conditions, and different software normally brings a different set of problems. I am happy to wait until FreeNAS with Docker support comes out, and then consider upgrading my hardware.

1 Like

I upgraded from 9.3 to 9.10 right around the time 11 came out. Zero issues so far but then again zero updates since I installed it, which does make me worried about security.

I’d consider alternatives, but I really like/want ZFS.

1 Like

I have used FreeNAS in the past, but for a while I also went with Xpenology because of the Hybrid RAID that Synology uses. More recently the full Synology OS has been patched to run on non-Synology hardware, so I installed that.

Just bought a QNAP though as in the end I just wanted something that worked without much tinkering and was available 24/7 but mostly in a very low power state.

1 Like

Running FreeNAS for a few years as well. I’m currently on 9.3 and have no issues.

I’m no power user either; my setup is fairly simple.

I’ve had more trouble with Plex recently.

1 Like

I believe that you can manage ZFS in the proxmox web gui… after you’ve created your zpool.

Yeah, I tinkered with ZFS pretty early on with Proxmox, so I got decent at the command line, and personally I find the FreeNAS web UI more cumbersome than the command line most of the time.

You can do some management tasks with it, but you have to do the initial pool setup from the shell. And AFAIK there’s no easy way to automagically schedule scrubs like in FreeNAS, but that’s as simple as a little shell script in cron.
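For anyone doing the same on Proxmox, here’s a sketch of both steps. The pool name `tank` and the disk IDs are hypothetical placeholders; substitute your own device paths:

```shell
# Initial pool setup from the shell: a mirrored pool named "tank"
# out of two disks (hypothetical disk IDs -- use your own from /dev/disk/by-id)
zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# Schedule a scrub via cron: run at 02:00 on the 1st of every month.
# Add this line with `crontab -e` as root:
#   0 2 1 * * /sbin/zpool scrub tank
```

You can check scrub progress and results afterwards with `zpool status tank`.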

2 years of solid performance sounds good to me. Have you checked the logs to see if any errors are being thrown?

Something is wrong there. That UI is very mature. You shouldn’t be experiencing problems like that.

As I understand it, ECC eliminates a very small possibility of data corruption. ZFS trusts the RAM, so if a solar flare flips a bit or whatever, it could cause problems. But you could run multiple ZFS rigs for years on non-ECC memory and be fine.

This is deviating slightly from the topic, but advice seems to change every time I search the topic.
My understanding, from the last time I looked, is that ZFS checksums blocks in memory the same way it does on disk, so you can only really get corruption if the memory bits change in such a way as to cause a checksum collision (fletcher4 by default, or SHA-256 if you enable it). I am happy enough to take that risk, since it could also occur on disk or anywhere, really.
This article was linked in other posts on the same topic: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
All that said, the system itself will reap the benefits of running with ECC and could potentially crash less and straight up halt when things get really bad, rather than go into “Kill all Humans” mode.
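For reference, the checksum algorithm ZFS uses is a per-dataset property you can inspect and change from the shell. A quick sketch, using a hypothetical dataset named `tank/data`:

```shell
# Show which checksum algorithm a dataset uses (fletcher4 is the default)
zfs get checksum tank/data

# Switch to SHA-256 checksums for this dataset
# (only newly written blocks use the new algorithm; existing
#  blocks keep the checksum they were written with)
zfs set checksum=sha256 tank/data
```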

2 Likes

That was very informative, thank you.

I still wonder about how an “evil” stick of RAM might affect incoming information as it’s initially written to the disk, or if it would send corrupt data out of ARC.

It’s interesting to know that the mods at FreeNAS have apparently been fearmongering about ECC and scrubs… that whole 1 GB RAM per 1 TB storage rule is also much looser than they would have you believe. I wonder if iXsystems is just trying to sell more, pricier DIMMs… ha

ECC RAM probably helps with system stability. It might lead to fewer support issues from random, non-reproducible errors.
Before information is blocked up and checksummed, I am sure it is subject to errors. I do not know the system well enough to comment on this, but it would be as susceptible as any computer anywhere.

There is also a risk the data could be corrupted slightly while copying it into the system. I guess you should always checksum the data after loading it in to alleviate this risk, if you feel the need. There is also a non-zero chance both sets of data could be different and produce the same checksum value.
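Checksumming after a copy only takes standard tools. A minimal sketch, using hypothetical temp directories to stand in for the source machine and the NAS share:

```shell
# Hypothetical source and destination, standing in for a PC and a NAS share
src=$(mktemp -d) && dst=$(mktemp -d)
echo "important family photo" > "$src/photo.jpg"

# 1. Record checksums of the originals before copying
(cd "$src" && sha256sum photo.jpg > "$src/manifest.sha256")

# 2. Copy the data (and the manifest) over
cp "$src/photo.jpg" "$src/manifest.sha256" "$dst/"

# 3. Verify the copies against the recorded checksums
(cd "$dst" && sha256sum -c manifest.sha256)
```

`sha256sum -c` prints one `OK` line per verified file and exits non-zero if anything mismatches, so it drops straight into a backup script.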

Like most things, you find the amount of risk you are willing to accept for the work you have put in. ZFS allows me to keep data accurately. There is a non-zero chance that without ECC ram, a block could get trashed. 1/Graham’s number is also a non-zero number. I cannot comprehend how small it is, so I am happy to wear that kind of risk.

So, I see a few people are still chugging along with FreeNAS 9.3. That is pretty old now. I guess there is the “if it ain’t broke, don’t fix it” idea. What is holding you back from updating to 11?

1 Like

I’m staying on 9.10 until 11.1u1 based on the FreeNAS bug tracker.

I have heard anecdotal accounts of the 10GbE NIC support in FreeBSD 11 being much better*, so I’m looking forward to it.

* 6th reply down by c32767a

1 Like

Here’s a short story:

In my old job, our backup procedure was to tar+gzip files into backup archives, and then to re-read, uncompress, and compare the files.
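That read-back-and-compare step can be sketched with GNU tar; the paths here are hypothetical stand-ins, not the original job’s actual scripts:

```shell
# Hypothetical data to back up
mkdir -p /tmp/demo_data
echo "record 1" > /tmp/demo_data/a.txt

# 1. tar+gzip the files into a backup archive
tar -czf /tmp/demo_backup.tar.gz -C /tmp demo_data

# 2. Re-read, uncompress, and compare against the live files.
#    GNU tar's --compare/-d exits non-zero on any mismatch.
tar -dzf /tmp/demo_backup.tar.gz -C /tmp && echo "backup verified"
```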

At one point, we had an issue with flash storage and the ext4 filesystem that caused data corruption (4 KB blocks zeroed out) in our MySQL InnoDB files. Those files happen to be organized into 16 KB blocks that happen to contain checksums at the beginning and end of every block.

Thanks to our backup procedure and multiple levels of checksums, we had enough confidence to hot-patch the 4 KB of data back.

All servers and workstations involved had ECC ram.


The main takeaway from the story is that the easiest way to get reliability for your important family photos, important documents, and other important data, is to handle your data in an organized and simple to understand way, and to make backups you can trust.

Neither ECC, nor backups you put into Amazon Glacier and probably never read, nor ZFS, can 100% guarantee that the files will contain the data you want them to contain, but they help and are a good idea.

What I’ve observed at work is that the biggest enemy of data integrity is not flaky hardware but actual humans, who launch new things, do upgrades all the time, and so on. Having backups you can trust (more than one backup) is key to mitigating that risk.

3 Likes

Exactly this. I may try 11 on a separate USB stick and see how things go.