FreeNAS Corral Released - formerly FreeNAS 10

Are you planning on using it as a ZIL or L2ARC? Did you evaluate if there are any benefits at all for your setup? Unless you have an enterprise setup, you will most likely not feel a difference.

This might be worth reading before you think about adding an SSD to your ZFS storage:

It was a DNS issue from the start. Luckily pfSense listed my DNS servers in the dashboard and I was able to connect with those using a bridged static IP.


Yeah, I didn't expect the jails would migrate in any sort of fashion, so it is OK.

If FreeNAS ends up using swap again, I will see if throwing in an SSD helps. I recently upgraded the RAM to 16GB and have 5x 2TB drives. But I tend to run a lot of applications, up to 12 Docker containers so far.

I was wondering if anybody has managed to set up FreeNAS 10 with an Ubuntu VM and successfully mounted a dataset from the host to it?

I have an Ubuntu 16.04 VM inside my FreeNAS install to run Nextcloud. The Ubuntu VM template has a very small OS drive (16G or so), so I need to mount another dataset/volume to actually hold all my files. The problem is that I cannot figure out how to simply share a dataset from the host to my Ubuntu VM. Sharing datasets to jails in FreeNAS 9 was much easier than this. Every time I add a volume (vt9p) I see it gets connected when I launch the VM, but I have no idea where it gets mounted. I do see it creating the dataset on my host volume, however...

Anybody have luck getting host datasets to mount to a VM in corral?

What the volume option means I have not figured out yet, but if you add a disk (device) it will show up as /dev/sdb, since it will be the second disk.

That is how I managed to add enough storage for my NVR VM.


@Th3Z0ne

I was afraid this would be the only way around it at this point in time. I don't like this solution too much because it requires me to set a fixed disk size. That, and I have to deal with all of my data being on another drive inside of the VM (ugh, I guess I can symlink). I appreciate the advice, and I may end up going this route just for the time being.

I sadly did not find any other working solution - even enlarging the original "os" volume from the FreeNAS "Storage" section did not help.

That will be my attempt too. I tried to install on the machine itself, but it just does not like it. So I am probably going to pass through my LSI controller and run FreeNAS in a VM. ... Well, I'm gonna try.

@Th3Z0ne

!!! I FINALLY FIGURED IT OUT !!!

I have finally learned, after some digging in the FreeNAS forums, how to successfully create/mount a volume to an Ubuntu virtual machine. You can find the relevant info I used in this forum post.

Basically you create the vt9p volume under your VM's volume settings, or by following the directions in the (albeit very incomplete) FreeNAS Corral wiki found here. Make sure the VM is powered off when you do this. I created my shared volume for my VM using the FreeNAS GUI and checked the Auto box instead of specifying a target location.
See my settings below:

Then you have to mount that 9p volume manually inside the VM yourself. This is the part I could not figure out until I saw the FreeNAS forum post on how to do it. You just run the following command:

EDIT (updated commands for clarification):
sudo mount -t 9p <volume name> /location/on/guest
So in my case it would have been:
sudo mount -t 9p data /mnt/data
And voila. Obviously you will need to have created your mount point before you mount the volume.

If you desire, you can have this volume mounted automatically when your VM boots by adding the entry to your /etc/fstab.

EDIT2 (I figured I should post my /etc/fstab entry as an example; more details can be found here):
data /mnt/data 9p defaults,cache=mmap,msize=512000 0 0
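As a quick sanity check on entries like that (a sketch; the volume name `data` is just this thread's example), an fstab line is six whitespace-separated fields, and mount flags like `-t` do not belong inside the file itself:

```shell
# An fstab line has six whitespace-separated fields:
#   device  mountpoint  type  options  dump  pass
# The filesystem type ("9p") is simply the third field -- no "-t" flag here.
line='data /mnt/data 9p defaults,cache=mmap,msize=512000 0 0'
set -- $line                      # split the line into positional fields
echo "type=$3 options=$4 fields=$#"
# After editing /etc/fstab, `sudo mount -a` applies new entries without a reboot.
```

Running `sudo mount -a` right after editing is a cheap way to catch typos before trusting the next reboot.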

Now I can finally set up my FreeNAS Corral box properly. The Docker containers/templates in the current release are horrible. The Nextcloud one uses SQLite (yuck). I have yet to find any good documentation on using the Docker containers either.

For now I am simply going to create separate VMs for all of my different web services. I really hope the FreeNAS folks add these mounting instructions for 9p volumes to their wiki soon.

Hopefully this helps you folks out.


Swapping out memory isn't a bad thing, as long as it happens in "healthy amounts". The system will eventually evict/swap out memory it feels hasn't been used in a long time, even if RAM isn't exhausted yet. And as long as the system doesn't swap constantly, avoiding that swapping might not be worth the extra cost of adding hardware.

You should also have an understanding of the two different kinds of memory a system keeps in RAM. Here is an interesting post about how RAM and swap work together:


One thing to remember with ZFS is that ZFS loves to have a lot of RAM. The FreeNAS team recommends about 1 GB of RAM per TB of storage as a minimum. By that rule, you already use up 10 of your 16GB, which leaves 6GB for FreeNAS itself and all your jails. Depending on what kind of jails you run, this might not be enough. If you can afford it and your system is compatible, you might be better off upgrading to 32GB of RAM instead of adding an SSD.
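The rule of thumb is easy to sketch with this thread's numbers (5 drives of 2 TB each and 16 GB of RAM; swap in your own figures):

```shell
# Rule-of-thumb sketch: ~1 GB of RAM per TB of raw storage for ZFS.
drives=5; tb_per_drive=2; ram_gb=16
zfs_gb=$((drives * tb_per_drive))   # 10 GB earmarked for ZFS caching
left_gb=$((ram_gb - zfs_gb))        # 6 GB left for the OS and the jails
echo "ZFS: ${zfs_gb} GB, leftover: ${left_gb} GB"
```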


I just found a new quirk: the UPS service obviously does not respect the shutdown timer. As soon as the APC UPS triggers its event, the system shuts down despite actually having a 10-minute timer set in the GUI. Before the update this worked as expected.


Well, speaking of UPS shutdowns: I had a power outage this morning, and while my NAS shut down properly (@Th3Z0ne I only had the shutdown timer set to 30sec, so I can't confirm/deny your issue), I'm having weird issues since I booted it back up after work.

Basically SMB will not start, so none of my Samba shares are working; in the UI it "helpfully" just says error on the SMB service page. After digging around in the CLI I found the following log:

Log Output
unix::/service/smb>logs show
 Timestamp             Message                                                                           

 2017-03-21 01:50:34            Could not test socket option TCP_KEEPCNT.                                      
 2017-03-21 01:50:34            Could not test socket option TCP_KEEPIDLE.                                     
 2017-03-21 01:50:34            Could not test socket option TCP_KEEPINTVL.                                    
 2017-03-21 01:50:34            IPTOS_LOWDELAY = 0                                                             
 2017-03-21 01:50:34            IPTOS_THROUGHPUT = 0                                                           
 2017-03-21 01:50:34            SO_REUSEPORT = 512                                                             
 2017-03-21 01:50:34            SO_SNDBUF = 9216                                                               
 2017-03-21 01:50:34            SO_RCVBUF = 42080                                                              
 2017-03-21 01:50:34            SO_SNDLOWAT = 2048                                                             
 2017-03-21 01:50:34            SO_RCVLOWAT = 1                                                                
 2017-03-21 01:50:34            SO_SNDTIMEO = 0                                                                
 2017-03-21 01:50:34            SO_RCVTIMEO = 0                                                                
 2017-03-21 01:50:34   [2017/03/20 20:50:34.680633, 0, pid=1460, effective(0, 0), real(0, 0)]            
                       ../source3/lib/util_sock.c:396(open_socket_in)                                    
 2017-03-21 01:50:34     bind failed on port 137 socket_addr = 192.168.1.255.                            
 2017-03-21 01:50:34     Error = Can't assign requested address                                          
 2017-03-21 01:50:34   [2017/03/20 20:50:34.696322, 0, pid=1460, effective(0, 0), real(0, 0)]            
                       ../source3/nmbd/nmbd_subnetdb.c:127(make_subnet)                                  
 2017-03-21 01:50:34     nmbd_subnetdb:make_subnet()                                                     
 2017-03-21 01:50:34   Failed to open nmb bcast socket on interface 192.168.1.255 for port 137. Error    
                       was Can't assign requested address                                                
 2017-03-21 01:50:34   [2017/03/20 20:50:34.711855, 0, pid=1460, effective(0, 0), real(0, 0)]            
                       ../lib/util/become_daemon.c:111(exit_daemon)                                      
 2017-03-21 01:50:34   STATUS=daemon failed to start: NMBD failed when creating subnet lists, error      
                       code 13

Anybody have any ideas? I've tried changing around some of the settings and the usual Google-Fu with little results.

Sound advice. I noticed a benefit from upgrading from 8 to 16GB of ECC RAM.
However, I won't do any further upgrades to the current hardware; it is pretty much maxed out. Maybe if I chance upon some surplus IT equipment I'll build a new one. I have some SSDs lying around, so putting them in is free; buying more RAM isn't that cheap.

I guess I am wondering if the ARC can be shunted more to an SSD, rather than consuming so much RAM. RAM speed is not necessary, since all my networks are going to be slower than an SSD.
Can adding SSDs for L2ARC or ZIL open up areas for data corruption? Or do they use the ZFS file system as well?
ZFS and ECC RAM are great at reducing errors that creep into the storage and operation of the system over time.

Whoops, did I say 5x 2TB drives? I have 5x 3TB drives in RAIDZ2: 8.16TB usable, 5.48TB parity.
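For anyone wondering where those figures come from, here is a back-of-the-envelope sketch: RAIDZ2 spends two drives' worth of space on parity, and the small gap between the ~8.19 TiB below and the reported 8.16 is ZFS metadata/overhead:

```shell
# RAIDZ2 keeps two drives' worth of parity, so for 5 x 3 TB:
drives=5; tb=3
data_tb=$(( (drives - 2) * tb ))   # 9 TB of raw data capacity
parity_tb=$(( 2 * tb ))            # 6 TB of parity
# Drive vendors count TB (10^12 bytes); the GUI reports TiB (2^40 bytes):
awk -v t="$data_tb" 'BEGIN { printf "usable: ~%.2f TiB\n", t * 1e12 / 2^40 }'
```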

I see what you mean, but you also have to keep in mind that they will wear out and consume power, so if they have zero benefit for your setup, you are better off without them ;-)

But you will still feel a difference in your jails if they work a lot with the data. Also keep in mind that latency is greatly reduced if your files are cached.

I am not a ZFS expert, but from my understanding it doesn't really matter much. ZFS doesn't trust the hard drives and the ZIL (and L2ARC) is part of that. To my understanding, if there is data corruption in the ZIL while writing to the disks, it's the same thing as if it was written successfully but the hard drive reported false data on the next read. ZFS is built to correct these errors without failing the whole drive or even array of drives, so it shouldn't be a big deal.

Bitrot is also not a problem, since the data in the ZIL is very short-lived. Right after the data is written to disk, the ZIL is flushed/overwritten. The situation with the L2ARC is similar.

16GB is really not much then, since you already need 15GB as a minimum according to the FreeNAS team. Running (that many) jails on the machine might not be a good idea in that situation.
Maybe you can gain some performance by tuning the swappiness so the system prefers application memory over file caching (as mentioned in the link from my previous post), but you should definitely run your own benchmarks to see if there is a (measurable) benefit for your setup.
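For what it's worth, here is what the swappiness knob looks like on a Linux system such as the Ubuntu VM discussed earlier — FreeNAS itself is FreeBSD-based and tunes swap through different sysctls, and the value 10 below is just an example, not a recommendation from this thread:

```shell
# Linux only: vm.swappiness biases the kernel between swapping application
# memory and dropping file cache. Default is 60; lower keeps apps in RAM.
cat /proc/sys/vm/swappiness
# Lower it until reboot (root required):
#   sudo sysctl -w vm.swappiness=10
# Persist across reboots:
#   echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
```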

In the end, you can just give ZIL and L2ARC a try and see if there are any benefits for your application, since you already have the drives collecting dust ;-)
But my impression is that you will probably not benefit from a ZIL unless you are doing a lot of synchronous writes, and an L2ARC will probably only be beneficial if you are constantly working with a data set so large that the system has to keep loading parts of it from disk instead of caching it completely in memory. So if you are fine with disk speeds, there is really no benefit.


I just updated from the latest 9.0 version and have some questions:

  • After updating it says SMART status warning, but if I look up that drive it has passed every short and long test? I did try a scrub; it took about 48hrs and then said it failed. 7x 3TB drives.
  • On FreeNAS 9 it would complain about the controller having different firmware than the driver, but now it does not? Is this no longer something you have to worry about?

That is what I do: a z800 with onboard LSI flashed to IT mode. A piece of me wants to run FreeNAS bare metal and enjoy some of the built-in support like plugging into my APC, but ESXi has been reliable for all my other needs, so FreeNAS stays virtualized.

Virtualization within FreeNAS has been buggy: on my first run at 10 I got the 'cpu does not support virtualization' message, then a later update let me spin up a Docker container for Plex, then with the latest update it's back to that no-support message. I just spun up a headless Ubuntu server on ESXi, mounted the SMB share and run Plex from there.

You may want to file a bug report. I had the same problem on FreeNAS 10 when I used the same drives from FreeNAS 9.10: SMART warnings that 9 didn't show.

Is it a specific drive?

What happens when you go to the console (not the CLI) and run "smartctl -a /dev/ada0", but targeting each of your NAS drives?

See if any of the parameters are failing. It can be hard to interpret the info, but usually a parameter is failing when its normalized value decreases until it reaches the threshold. The raw values might make more sense in some cases.
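To make the value-versus-threshold idea concrete, here is a small sketch that parses one attribute line in the usual `smartctl -a` table layout (the sample line is made-up data, not from the poster's drives):

```shell
# In the smartctl attribute table, the normalized VALUE (4th column) decays
# toward THRESH (6th column); the attribute is failing once VALUE <= THRESH.
line='  5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0'
value=$(printf '%s\n' "$line" | awk '{print $4}')
thresh=$(printf '%s\n' "$line" | awk '{print $6}')
if [ "$value" -le "$thresh" ]; then
  echo "FAILING (value $value <= thresh $thresh)"
else
  echo "ok (value $value > thresh $thresh)"
fi
```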

So, I had some problems with FreeNAS (it noped out... constantly), so I searched around for what I could use instead. It turns out Proxmox supports ZFS. I had no idea! I know it does things differently than FreeNAS, but I'm sure I can achieve what I want, which is simply "have ZFS storage" and "make VMs go".


Yeah, I think FreeNAS 10 will become a very powerful system over this year, but even though it has hit its release cycle, it doesn't seem as polished as it should be for us to be raving about it.