TrueNAS SCALE network security

I have built myself a TrueNAS SCALE server. I am happy with its NAS capabilities, and since it markets itself as “hyperconverged infrastructure” I expected the hosting potential to be good.

However, I am running into an issue. A few select apps I want to run “totally isolated” from TrueNAS SCALE. That is, I want them to use a specific network interface, not run as root, and not be able to communicate with the host through networking/localhost.

However, assigning a NIC to apps doesn’t actually seem to separate the internal networking on SCALE; that just seems to expose your services on that specific network. Third-party load balancers etc. are all for ingress, not egress.

Am I looking at this wrong? Would assigning a NIC to a VM separate communication with the host? Or should I abandon TrueNAS?

I would be interested in this too.

Currently everything I use runs off of a Linux bridge, br0. In some cases this logically shares the physical adapter with the host in a way that allows communication between containers, VMs, and the host.
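For anyone who wants to check this on their own box, a quick way to see the shared-bridge topology (assuming the bridge is named br0 as above; interface names will differ per system):

```shell
# List the interfaces enslaved to the bridge; container veths and
# VM taps typically show up here alongside the physical NIC
ip link show master br0

# The host's own IP usually sits on br0 itself, which is exactly
# why anything attached to the bridge can reach the host at L2/L3
ip addr show dev br0
```

If `ip addr show dev br0` shows a host address, a guest on the same bridge can reach it unless a firewall says otherwise.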

I would assume that if you want to isolate it, you would need something like Open vSwitch to accomplish this?
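In case it helps anyone experimenting, a rough Open vSwitch sketch of what that isolation could look like (all names here are made up, and on SCALE this would be a manual, unsupported change):

```shell
# Create an OVS bridge and attach the physical NIC reserved for guests
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 enp5s0

# Attach the guest's tap interface with its own VLAN tag so it only
# talks to the uplink, not to other ports on the bridge
ovs-vsctl add-port ovsbr0 tap-guest0 tag=100

# Crucially: give the host no IP address on ovsbr0, so there is no
# L3 path from the guest to the host over this bridge at all
```

The key point is the last one: a bridge the host has no address on is already a big step towards “can’t talk to the host through networking”.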


I had not heard about OVS before, but that does seem like a solution. However, if we set it up manually it won’t survive OS updates, AFAIK.

I wonder why something like this isn’t in the system yet. I have created an issue on the Jira board that iXsystems has:
https://ixsystems.atlassian.net/browse/NAS-122586?atlOrigin=eyJpIjoiMzBjMWQxMDMwMDFhNDZhNThhOGY5MGNhOGE4ZWVhNzUiLCJwIjoiaiJ9

OVS is certainly fantastic and I think would solve this.

But I wanted to chime in here for another reason entirely: basically just to say, be careful using SCALE for actual HCI usage, VMs, and such. While it’s a ton of fun to play around with, IMO it’s basically still a beta product, and I actually moved all my NASes back to CORE for the time being (love how easy it was to go back and forth, though). So don’t do anything too critical on it lol, the HCI portion of it really is still pretty early at this point.


Same here. CORE is a bank you can’t break. And I trust Proxmox for all my virtualization needs. It just got the new 8.0 upgrade with the 6.2 kernel. Configuring networks, bridges, VLANs, etc. is just great in Proxmox.


Yeah for sure, I don’t personally use Proxmox anymore, but it’s a very solid option.

100% with you though, CORE is just so damn stable and for something as important as a NAS that’s super critical.

I am not too worried about stability, more so about security. But yeah, I’ve been looking at alternatives for now. And I may move back to SCALE once it’s all good.

@Exard3k Do you virtualize your NAS in proxmox?

I do. TrueNAS CORE. Been running this for 1.5 years now (with a two-week adventure in TrueNAS SCALE). Works like a charm. I will eventually move on to a cluster and will keep Proxmox and CORE on that machine to act as a cluster node, and my ZFS pool will move to bare-metal Proxmox. Proxmox itself has all the ZFS you want; it just lacks file sharing and a GUI. But I’m confident in running these things via containers and the CLI now.
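For reference, this is roughly the kind of CLI workflow I mean (pool, disk, and share names here are just examples):

```shell
# Create a mirrored ZFS pool and a compressed dataset on bare-metal Proxmox
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zfs create -o compression=lz4 tank/media

# File sharing isn't built in, so e.g. Samba (on the host or in an
# LXC container with the dataset bind-mounted) exports it:
#   [media]
#   path = /tank/media
#   read only = no
```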


I really like the direction TrueNAS SCALE is going at the moment. However, I understand the need for advanced networking features.

I’ve been running a weird combination of Ubuntu 18.04, ZFS, LXD, KVM, and Docker for a while now. When I built it, I thought from the top down that what I wanted was:

  1. a file server with ZFS datasets and snapshots
  2. a virtualization platform for application deployment testing
  3. Docker running in those virtual machines
  4. datasets passed directly through to the VMs via KVM instead of accessed over an internal network bridge
  5. those same datasets accessible from a Windows PC via SMB

To do this, I ended up embracing LXD instead of virtualizing with KVM. I can also use Open vSwitch instead of a Linux bridge with the LXD containers to isolate them from each other and from the host, while giving them direct access to datasets on the host. I also capture the LXD containers with my regular snapshots, as LXD uses ZFS as its storage backend.
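Roughly what that setup looks like, in case it's useful (assuming a pool named tank, an existing OVS install, and a container named web; adjust names to taste):

```shell
# LXD storage pool backed by a ZFS dataset, so container state rides
# along with regular zfs snapshots of the pool
lxc storage create default zfs source=tank/lxd

# An OVS-backed managed network instead of a plain Linux bridge
lxc network create ovsnet bridge.driver=openvswitch

# Pass a host dataset straight into a container, no network share needed
lxc config device add web media disk source=/tank/media path=/srv/media
```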

I’ve been using SCALE quite a bit, and honestly I’m moving towards running two servers (one on TrueNAS and one on Proxmox). I’m not exactly sold on TrueCharts yet, and the built-in virtualization leaves a lot to be desired at the moment.

SCALE is really great, but I may in the end go back to Ubuntu for my use case. I look forward to seeing the changes they implement in SCALE in the near future!


I moved my SCALE instance to Proxmox last night. The performance was identical, and it used less power at idle? That last bit surprised me, to say the least.

For now I am assuming that Proxmox with a virtualized TrueNAS plus SATA passthrough and a dedicated Docker VM is more secure than TrueNAS with a Docker VM and a passed-through NIC. But actual confirmation of this, well, idk, I haven’t found that yet ;).

The management overhead is what turns me away from such a setup.

I have looked at LXD, but I think I prefer Docker/Podman/k8s on top of a VM for security reasons. Just a lil less likely to compromise the host.

Hence why I decided to virtualize TrueNAS now.

What hardware are you using for your proxmox host?

I hope you don’t mind me asking.

CPU: AMD Ryzen 5 5600
RAM: Kingston KSM32SED8/32MF
Motherboard: ASRock Rack X570D4I-2T
Boot disk / disk for VMs: WD Green SN350 (TLC) 240 GB

I suppose these are the relevant parts for Proxmox. I do not really care about the data on my VM virtual disks. It’s all backed-up data or just configuration that’s stored elsewhere.

But if I did care more I would get a second SSD for them.

Cool. There are a lot of trade-offs in hardware selection. You mentioned running TrueNAS in a VM and that you were using the same amount of power.

I too am looking for a good solution on power consumption. I also want to be able to pass through devices for hardware acceleration. I’ve been able to pass through devices without the need for IOMMU groups with LXD on low-power hardware for a while; however, the shared-kernel nature of unprivileged containers definitely does make that less secure.
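For context, this is the sort of per-device passthrough I mean, which works without VFIO/IOMMU because the container shares the host kernel (container and device names here are just examples):

```shell
# Hand an Intel iGPU render node to an unprivileged container for
# hardware transcoding; gid=44 maps it to the container's video group
lxc config device add jellyfin gpu0 gpu gid=44

# Same idea for a USB device, matched by vendor/product ID
lxc config device add homeassistant zigbee usb vendorid=10c4 productid=ea60
```

Convenient, but it's exactly this shared-kernel shortcut that makes it weaker isolation than a VM with real IOMMU-backed passthrough.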

I just want to have my cake and eat it too? I guess that’s not realistic.

That’s probably why I’m moving towards a dual server system with proxmox on one host and truenas on the other.


I’ve had a lengthy discussion with a friend who works in the security industry.

She maintains that for my scenario, a VM with containers inside it is more than secure enough, at least if I also keep everything up to date, remove privileges where possible, and follow other general security practices. At least for my specific scenario: mostly a NAS, but occasionally hosting a development API or web service. Maybe a game server if I feel like it ;).
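For anyone following along, “remove privileges where possible” in Docker terms looks roughly like this (the image name is just an example):

```shell
# Run as a non-root user, drop all capabilities, mount the root
# filesystem read-only, and forbid privilege escalation via setuid
docker run -d \
  --user 1000:1000 \
  --cap-drop ALL \
  --read-only \
  --security-opt no-new-privileges \
  --tmpfs /tmp \
  my-dev-api:latest
```

Individual apps may need a capability or a writable path added back, but starting from zero and adding only what breaks is the sane direction.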

I am passing my SATA controllers through to my TrueNAS VM. And it uses 40 watts at idle. While not super low, I am fine with this amount.


One thing to keep in mind is that security advice needs to be tailored to you! …

Keeping things easily manageable so you can update stuff, and not exposing everything to everything by default just because the alternatives are hard to manage, will do you good.

If you catch yourself going lazy, … come back and let us know.

Look into Proxmox too before fully committing (… unless you have already thrown 20T of data into your TrueNAS). Already went there? Cool!

Don’t worry, there’s no data inside my TrueNAS just yet. Although I trust ZFS and the PCIe passthrough of the SATA controller enough that it won’t break… I’ll still not take that risk.

I am currently just toying with Proxmox. Right now I’m struggling with SR-IOV on my X550 NIC.
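In case anyone hits the same wall, the generic sysfs route to spawning VFs (interface name is an example; the NIC firmware and BIOS both need SR-IOV enabled):

```shell
# Check how many virtual functions the physical function supports
cat /sys/class/net/enp1s0f0/device/sriov_totalvfs

# Create 4 VFs; they appear as new PCI devices that Proxmox
# can then pass through to individual VMs
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs
```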

Sidenote on runtime security, btw: software to manage that is rather expensive, at least when it supports containers. I’m talking stuff like Bitdefender GravityZone or Sysdig. Wish there were cheaper options.

edit: SR-IOV works perfectly now :wink: