Well, I took another crack at getting a Plex Docker container working and got it figured out. It's not very good at telling you which settings are wrong, so I was having trouble figuring out where it was breaking. Also, since I don't have a pfSense box at the moment (I was virtualizing it on ESXi), I'm stuck with DHCP for the most part since I'm using my Netgear router.
Also, it was well into the early hours of Thursday morning when I was working on it the other day, so there is the tired factor to consider. ;-)
I'll just go ahead and report back my experiences.
So, ugh. I initially did a fresh install and tried to migrate my disks over. Nothing would work right. The web panel was laggy as hell, the services didn't work right, and I couldn't get my network shares to appear. It was a shit show.
Okay, attempt two.
This time around I thought I'd upgrade my existing 9.10.2 install, which I still had on a different USB. That went better, but it still had some bugs, so I thought I'd try yet another fresh install on a different USB.
Try three.
Using a new USB this time, different from either the first or second try. I used the same install media as I did for the initial install, and now it works great. Not quite sure why it didn't work right the first time, especially since I used a superior USB thumb drive for the first install. Now that it's working right, I like it. I've got my shares working now, and I'm working on setting up some VMs to play with. I haven't been able to get a Windows VM working quite right yet; not really sure why.
EDIT:
Now, after leaving the system idle and coming back to it, I'm noticing much higher CPU utilization than I was used to observing on 9.10.2. I don't know if that's just a difference in the monitoring tools or if there are actually more tasks going on at idle on 10 compared to 9.10.2. The system is an 8350 with 8GB of RAM, for reference.
Right off the start: I have a Debian VM now in FreeNAS 10. Is there a way to increase its disk size or add another disk device? So far my googling wasn't all too successful.
When I worked with the beta on bare metal a few times, I found running it from an SSD helped with UI lag and such, but it doesn't fix all of it. Even though it is out of beta, I believe they still have work to do on UI optimization.
I don't have pfSense right now, but here are my network settings that are working. Ignore the 192.168.1.x network; that's just for the direct 10GbE connection to my desktop. It doesn't carry any internet traffic.
Are you planning on using it as a ZIL or L2ARC? Did you evaluate if there are any benefits at all for your setup? Unless you have an enterprise setup, you will most likely not feel a difference.
This might be worth reading before you think about adding an SSD to your ZFS storage:
Yeah, I didn't expect the jails would migrate in any sort of fashion, so that's okay.
If FreeNAS ends up using swap again, I will see if throwing in an SSD helps. I recently upgraded the RAM to 16GB and have 5x 2TB drives. But I tend to run a lot of applications, up to 12 Docker containers so far.
I was wondering if anybody has managed to set up FreeNAS 10 with an Ubuntu VM and successfully mounted a dataset from the host to it?
I have an Ubuntu 16.04 VM inside my FreeNAS install to run Nextcloud. The Ubuntu VM template has a very small OS drive (16G or so), and thus I need to mount another dataset/volume to actually hold all my files. The problem is that I cannot figure out how to simply share a dataset from the host to my Ubuntu VM. Sharing datasets to jails in FreeNAS 9 was much easier than this. Every time I add a volume (vt9p), I see it gets connected when I launch the VM, but I have no idea where it gets mounted. I do see it creating the dataset on my host volume, however...
Anybody have luck getting host datasets to mount to a VM in corral?
I was afraid this would be the only way around it at this point in time. I don't like this solution too much because it requires me to set a fixed disk size. That, and I have to deal with all of my data being on another drive inside of the VM (ugh, I guess I can symlink). I appreciate the advice, and I may end up going this route just for the time being.
That will be my attempt too. I tried to install on the machine itself, but it just does not like it. So I am probably going to pass through my LSI controller and run FreeNAS in a VM... Well, I'm gonna try.
I have finally learned, after some digging in the FreeNAS forums, how to successfully create and mount a volume in an Ubuntu virtual machine. You can find the relevant info I used in this forum post.
Basically, you create the vt9p volume under your VM's volume settings, or by following the directions in the (albeit very incomplete) FreeNAS Corral wiki found here. Make sure the VM is powered off when you do this. I created my shared volume using the FreeNAS GUI for my VM and checked the Auto box instead of specifying a target location. See my settings below:
Then you have to manually mount that 9p volume yourself inside the VM. This is the part I could not figure out until I saw the FreeNAS forum post on how to do it. You just run the following command:
EDIT (updated commands for clarification): sudo mount -t 9p [volume name] /location/on/guest - so in my case it would have been: sudo mount -t 9p data /mnt/data, and voila. Obviously you will need to have created your mount point before you mount the volume.
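Putting the steps above together as one runnable sequence (the volume name "data" and the mount point /mnt/data are just my example names from the GUI settings; swap in your own):

```shell
# Create the mount point first (path is just an example)
sudo mkdir -p /mnt/data

# Mount the 9p volume; "data" must match the volume name set in the FreeNAS GUI
sudo mount -t 9p data /mnt/data

# If the plain mount fails, some guest kernels need the virtio transport spelled out:
# sudo mount -t 9p -o trans=virtio,msize=512000 data /mnt/data

# Check that it landed where you expected
mount | grep 9p
```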
If you desire, you can have this volume mounted automatically when your VM boots by adding an entry to your /etc/fstab.
EDIT2 (I figured I should post my /etc/fstab entry as an example; more details can be found here on this): data /mnt/data 9p defaults,cache=mmap,msize=512000 0 0
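For readability, here is that entry laid out as a standalone fstab fragment. Note that fstab syntax has no -t flag (that's mount command syntax); the filesystem type is its own field, and the six fields are: device, mount point, type, options, dump, pass.

```
data    /mnt/data    9p    defaults,cache=mmap,msize=512000    0    0
```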
Now I can finally set up my FreeNAS Corral box properly. The Docker containers/templates in the current release are horrible. The Nextcloud one uses SQLite (yuck). I have yet to find any good documentation on using the Docker containers either.
For now I am simply going to create separate VMs for all of my different web services. I really hope the FreeNAS folks add these mounting instructions for 9p volumes to their wiki soon.
Swapping out memory isn't something bad, as long as it happens in "healthy amounts". The system will always evict/swap out memory eventually, when it decides the memory hasn't been used in a long time, even if RAM isn't exhausted yet. And as long as the system isn't swapping constantly, the cost of the occasional swap might not be worth the extra expense of adding hardware.
You should also have an understanding of the two different kinds of memory a system keeps in RAM. Here is an interesting post about how RAM and swap work together:
One thing to remember with ZFS is that it loves to have a lot of RAM. The FreeNAS team recommends about 1 GB of RAM per TB of storage as a minimum. By that rule, you already use up 10 of your 16GB, which leaves 6GB for FreeNAS itself and all your jails. Depending on what kind of jails you run, this might not be enough. If you can afford it and your system is compatible, you might be better off upgrading to 32GB of RAM instead of adding an SSD.
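The arithmetic behind that rule of thumb, as a quick sketch (1 GB per TB is the FreeNAS minimum recommendation, not a hard limit; the numbers below are the 16GB / 5x2TB setup mentioned above):

```shell
# Rough ZFS RAM budgeting: ~1 GB of RAM per TB of raw storage
drives=5; size_tb=2; ram_gb=16

raw_tb=$((drives * size_tb))        # 5 x 2 TB = 10 TB raw
zfs_gb=$((raw_tb * 1))              # rule of thumb: 1 GB RAM per TB
left_gb=$((ram_gb - zfs_gb))        # what's left for jails/VMs/Docker

echo "ZFS wants ~${zfs_gb} GB, leaving ${left_gb} GB"  # ZFS wants ~10 GB, leaving 6 GB
```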
I just found a new quirk - the UPS service obviously does not respect the shutdown timer. As soon as the APC UPS triggers its event, the system shuts down despite actually having a 10-minute timer set in the GUI. Before the update, that worked as expected.