QEMU KVM with NFS share on TrueNAS + edit for host to guest communication

So this post is a follow-up to QEMU / KVM and FreeNAS, where I tried and failed to set up shared storage on my TrueNAS server for the QEMU KVM hypervisor.

I don’t know whether the documentation for this is poor or I am just dense, but after re-reading several sources I managed to piece together the steps.

  1. Create new dataset

  2. Assign permissions

I might have been able to leave this at 747 rather than 777, but never mind.
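For reference, the dataset and permissions steps boil down to something like this on the TrueNAS shell; the pool and dataset names are just examples, and normally you would do all of this in the web UI:

zfs create tank/vmstore
# open up the mount point so NFS clients can write to it (777 here; 747 might have been enough)
chmod 777 /mnt/tank/vmstore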

  3. Create NFS share

This is the critical step that is easy to miss. The ‘Mapall User’ has to be ‘root’ and the ‘Mapall Group’ has to be ‘wheel’.

As the tip says “The specified permissions of that user (or group) are used by all clients.”
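With the share mapped like that, the entry TrueNAS (FreeBSD-based CORE) writes to /etc/exports looks roughly like the line below; the path and network are example values, not necessarily what yours will be:

/mnt/tank/vmstore -mapall=root:wheel -network 192.168.1.0/24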

  4. Network Exported Directory

[Screenshot 2021-06-22_21-31]
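Before moving on to the hypervisor it is worth confirming the export is actually reachable from it; the hostname and paths below are example values:

# run on the KVM host: list what the TrueNAS box is exporting
showmount -e truenas.local
# optional one-off test mount
sudo mkdir -p /mnt/test
sudo mount -t nfs truenas.local:/mnt/tank/vmstore /mnt/test
sudo umount /mnt/test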

Now, in the Virtual Machine Manager (virt-manager) interface, create a storage pool of the netfs type.

The target path is the mount point of the share on the hypervisor. The libvirt daemon takes care of mounting it for you when the pool starts at boot, so you don’t need to edit your fstab file.

Host name is the name or IP of the TrueNAS server.

Source path is the path to the NFS share on the TrueNAS server.
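If you would rather skip the GUI, the same netfs pool can be defined with virsh; the pool name, host name and paths below are example values, not necessarily what I used:

# define an NFS-backed (netfs) storage pool
virsh pool-define-as truenas-nfs netfs \
  --source-host truenas.local \
  --source-path /mnt/tank/vmstore \
  --target /var/lib/libvirt/images/truenas-nfs
virsh pool-build truenas-nfs      # create the local mount point
virsh pool-start truenas-nfs      # mount the share now
virsh pool-autostart truenas-nfs  # mount it automatically at boot

The pool-autostart step is what gives you the mount-at-boot behaviour mentioned above.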

  5. Done

[Screenshot 2021-06-24_21-24]

You can now create VM disks in the pool.
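From the command line, creating a disk in the pool looks something like this; the pool, volume name and size are just examples:

# create a 40 GiB qcow2 volume in the NFS-backed pool
virsh vol-create-as truenas-nfs win7.qcow2 40G --format qcow2
virsh vol-list truenas-nfs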

  6. Converting existing virtual machines

One reason for this little adventure is to get away from VirtualBox. While it is a perfectly good piece of software, Oracle can go f*** themselves.

If you need to convert your VirtualBox disk files, it is easy enough.

vboxmanage clonemedium --format RAW '/mnt/data/VBox/Windows 7/Windows 7.vdi' '/mnt/data/KVM/Windows 7/Windows 7.img'

qemu-img convert -f raw -O qcow2 '/mnt/data/KVM/Windows 7/Windows 7.img' '/mnt/data/KVM/Windows 7/Windows_7.qcow2'

Then just copy the .qcow2 file into your pool and attach it to a VM.
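That last step can also be done from the command line; the domain name, pool path and target device below are example values:

# copy the converted image into the pool's mount point and let libvirt see it
cp '/mnt/data/KVM/Windows 7/Windows_7.qcow2' /var/lib/libvirt/images/truenas-nfs/
virsh pool-refresh truenas-nfs
# attach it to a VM (here a domain called "win7") as a persistent qcow2 disk
virsh attach-disk win7 /var/lib/libvirt/images/truenas-nfs/Windows_7.qcow2 vdb \
  --driver qemu --subdriver qcow2 --persistent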

The next goal is to get host to guest communication working.

I hope to have a hypervisor cluster in the future, which would make this moot since I would access the VMs from a separate workstation, but it is needed for now while my workstation is also the hypervisor.

EDIT

  7. Host to guest communication

I found a Reddit post suggesting that the reason host-to-guest communication does not work is that the traffic is leaving and returning on the same network port, or to the same switch.

Well, I could test this, and yes, it does solve the issue.

My PC communicates with a managed switch over a bonded pair of interfaces. I then set up the VMs to communicate with an unmanaged switch via a separate physical interface in my PC.

Virtual network [Screenshot 2021-06-26_10-48]

VM NIC settings [Screenshot 2021-06-26_10-52]
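I set this up through virt-manager, but roughly the same result can be reproduced from the command line with a Linux bridge on the spare physical interface; the bridge, interface and network names below are example values, not necessarily what the screenshots show:

# assumption: br1 is a Linux bridge containing only the separate physical NIC (e.g. enp3s0)
cat > vm-lan.xml <<'EOF'
<network>
  <name>vm-lan</name>
  <forward mode='bridge'/>
  <bridge name='br1'/>
</network>
EOF
virsh net-define vm-lan.xml
virsh net-start vm-lan
virsh net-autostart vm-lan
# then point the VM's NIC at the "vm-lan" network in virt-manager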

And I can ping, SSH and browse to the VM.

In my case I had to edit the name of the NIC inside the VM first, but that was it.
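On a Linux guest that looks something like the following; the interface and file names are example values for a netplan-based guest, so adjust for whatever your guest actually runs:

# inside the guest: find the name of the new interface
ip link show
# e.g. on an Ubuntu guest using netplan, swap the old name for the new one
sudo sed -i 's/ens3/enp1s0/g' /etc/netplan/01-netcfg.yaml
sudo netplan apply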
