VM Isolation from the actual Host Network

I'm using an Unraid server and run several VMs there. Now I would like to isolate some of those VMs so they are denied access to the rest of the network. My scenario is like this: send a WoL packet to the VM, connect to the network through VPN, and RDP to the machine. While on RDP you have admin rights, but you must not be able to access the local network; accessing the internet or even using another VPN should still be possible.

I'm running OpenVPN and WireGuard (whichever is easier). As for routing, I use a Netgear R8500; however, I would like to use a RUTX09 as a modem for the Netgear in the future.

I would like to achieve this isolation at the network level, not via permissions.

I know there are a million ways to do this, but I'm looking for the "simple and robust" one :slight_smile:

P.S.: Additional hardware can be purchased if required.

You could pass network cards through to these VMs and then use VLANs. I'm not sure about the whole WoL strategy, but otherwise it seems fairly straightforward. Do you have a managed switch or some other way to create VLANs?

The router supports VLANs. I only have one NIC right now, but as far as I can see, I can set up virtual networks on the server and assign them to the VMs as NICs. Now… how do I configure that stuff? And how do I wire this up with the VPN?
WoL is supported and pretty easy.

You don't need to pass through a physical NIC. All you need to do is create a virtual bridge interface and then tell your VMs to attach to that bridge. That way you can tag the traffic from the vmbr0 device according to your VLAN strategy.
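
On a plain Linux box the equivalent would look roughly like this; the interface names and the VLAN ID are just examples, and Unraid exposes the same thing through its network settings GUI rather than raw commands, so treat it as a sketch:

```
# Isolated bridge with no physical NIC attached -- VMs on it only see each other
ip link add name br-isolated type bridge
ip link set br-isolated up

# Or: a VLAN-tagged sub-interface of the uplink, bridged for the restricted VMs
ip link add link eth0 name eth0.20 type vlan id 20
ip link add name br-vlan20 type bridge
ip link set eth0.20 master br-vlan20
ip link set eth0.20 up
ip link set br-vlan20 up
```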

TBH I'm not sure how to do that with unRAID specifically, as I just use regular Linux for this purpose.

This is what I do for my homelab, although I use a failover bond for redundancy in addition to the above.

This would be much easier to set up if you had a second NIC on the machine, so that you don't lose access while you're configuring stuff.

Your RUTX09 is basically just OpenWrt; in Network > VLAN you can configure which VLANs you want on which router port, and whether you want them tagged or untagged.
Its failover (based on mwan3) is unlikely to work unless it is the one doing the routing.
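
Under the hood that page just writes UCI network config. As a rough sketch of what the entries can look like (the VLAN ID, port numbers, and the swconfig-style syntax are assumptions about your particular firmware build; the web UI writes the equivalent for you):

```
# /etc/config/network -- example values only
config switch_vlan
        option device 'switch0'
        option vlan '20'
        option ports '0t 3'        # CPU port tagged, LAN3 untagged

config interface 'isolated'
        option ifname 'eth0.20'
        option proto 'static'
        option ipaddr '192.168.20.1'
        option netmask '255.255.255.0'
```
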

Afaik, the R8500 doesn't support user-configurable VLANs out of the box unless you put OpenWrt on it, and at that point you could plug in an LTE stick and wouldn't need the RUTX09 at all.

VPNs such as OpenVPN and WireGuard generally work by tunneling network connections through UDP (OpenVPN can do TCP as well), and if a VM is where you want to run the VPN endpoint, that means you'll need to forward those UDP or TCP ports to that VM (if the VM sits in a VLAN, your router will need to have that VLAN configured). That VM will then need network access to whatever you want to reach through the VPN.
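
As a rough sketch of that, assuming WireGuard (the addresses, keys, and port below are placeholder examples, not anything specific to your setup): a minimal endpoint config on the VM, with the router forwarding that UDP port to the VM's address inside its VLAN.

```
# /etc/wireguard/wg0.conf on the VPN endpoint VM -- example values only
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# the client you connect from remotely
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

On the router you would then forward UDP 51820 (or whatever port you pick) to that VM.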

If you have a USB/Ethernet adapter to add as a second network interface to Unraid while you mess with VLANs on your primary interface, that would probably help you "not cut off the branch you're sitting on" as you configure VLANs on both sides of the link (your router and your Unraid server).

Please correct me if I'm wrong or misunderstanding things:

  • OpenWrt does not support the R8500; at least that's what their site claims.
  • I run WireGuard as part of Unraid and an OpenVPN server in a Docker container.
  • I can use a virtual network bridge for the VM, which will create a separate network with no access to the regular one.
  • I could use a WireGuard tunnel and create a static route to the virtual network and be fine…? (rough sketch after this list)
  • I don't use the R8500's Wi-Fi, but it seems way more powerful than the RUTX09; would you still recommend swapping them? (I have a duplex Gbit connection and never fewer than ~20 devices on the network)
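
For that last routing point, I'm picturing something like the following on the client side; the subnet for the virtual network and the interface name are just guesses on my part:

```
# Route the isolated VM subnet through the WireGuard tunnel (example addresses)
ip route add 192.168.100.0/24 dev wg0

# Or let wg-quick create the route by listing the subnet in the peer's AllowedIPs:
# [Peer]
# AllowedIPs = 10.8.0.1/32, 192.168.100.0/24
```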

Damn, I had it mistaken for the X4S, which is well supported (X4S aka R7800, just checked; the R8500 is the X8). Then again, if you don't use its Wi-Fi, it's pretty much useless/unnecessary to you.

The RUTX09 spec sheet says "Quad Core ARM Cortex A7, 717 MHz CPU", which "smells" like a Qualcomm IPQ4019 to me, while the R8500 is built around a BCM47094 ("dual-core 1.4 GHz Cortex A9"). Both are ~2010-era 32-bit ARM designs with a similar overall feature set that made their way into Qualcomm/Broadcom SoCs alongside various network-specific hardware.

Either would do basic routing well; the RUTX09 just happens to have better software on it that lets you control more routing aspects. If you aren't using the X8's Wi-Fi anyway, you don't really lose anything by removing it from your network… if anything, you get to simplify things dramatically.


You can have more than one network interface on a container or a VM; that's how folks get to run routers in VMs. These bridges can, but don't have to, have physical interfaces connected to them. Even the host OS doesn't need a network configuration on its bridge interface.

Think of a Linux bridge as a virtual L2 managed switch with more or less infinite ports. By default you get one wire out of it going to the host's networking stack, and you get to configure the host OS and that one bridge port however you want. You also get to plug in physical interfaces if you want, or many other kinds of virtual interfaces.

For example, you can create a veth pair of virtual interfaces; a veth pair behaves like a virtual Ethernet cable. You put one end in a container and the other end on a bridge. Or you can create a tap interface, put it on a bridge, and hand its details to qemu/kvm/vhost-net so the virtualization software uses it as the backend for one of the virtual network interfaces in that VM. Whether there's a physical network on that bridge, a host interface, or any other interfaces on it, and how many bridges you have overall, is entirely up to you. Each bridge is just a virtual network switch; you plug in what you want.
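
To make that concrete, the raw iproute2 version of the above looks roughly like this (all the names are arbitrary examples, and Unraid/libvirt/Docker normally do this plumbing for you):

```
# A bridge acting as a virtual switch
ip link add name br0 type bridge
ip link set br0 up

# A veth pair: one end plugged into the bridge, the other moved into a container's netns
ip link add veth-host type veth peer name veth-ct
ip link set veth-host master br0
ip link set veth-host up
ip link set veth-ct netns mycontainer      # "mycontainer" is a placeholder netns name

# A tap interface for a VM; qemu/kvm attaches to it as the backend for the VM's NIC
ip tuntap add dev tap0 mode tap
ip link set tap0 master br0
ip link set tap0 up
```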