[SOLVED] How to distribute NFS from KVM host to guests?


In case you are running into the same problems as me, here is what helped me out in the end.

The goal: Configure a Linux KVM host to allow NFS traffic over an isolated, virtual network.

1. Creating the network

  1. Start Virt-Manager and open the network settings
  2. Create a new virtual network:
    • Name: isonet0
    • Autostart on boot
    • Network
    • DHCP range: -
    • IPv6 disabled
  3. Check ip link, it should look like this:
    • Ignore virbr0-nic

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b0:52:0e brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b0:52:0e brd ff:ff:ff:ff:ff:ff
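If you prefer the command line, the same isolated network can also be created with virsh instead of Virt-Manager. This is only a sketch: the file name isonet0.xml is made up, and the address/DHCP details from the steps above are omitted.

```xml
<!-- isonet0.xml: illustrative definition of an isolated libvirt network.
     Omitting the <forward> element means no NAT and no routing,
     i.e. the network stays isolated. libvirt picks the bridge name
     and MAC automatically if they are left out. -->
<network>
  <name>isonet0</name>
  <bridge name='virbr0' stp='on' delay='0'/>
</network>
```

Define, autostart and start it with virsh net-define isonet0.xml, virsh net-autostart isonet0 and virsh net-start isonet0.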

2. Firewall settings


I am using firewalld for configuration. Please refer to your firewall's documentation if you are using a different one.

  1. Open the GUI of firewalld (i.e. type "Firewall" in GNOME)
  2. Add a new interface and name it after the bridge of the network we previously created
    • In this example: virbr0
  3. Apply an existing zone to it or create a new one
    • Make sure it's different from your physical network (public)
  4. In that zone, allow all TCP and UDP traffic
  5. Save your settings permanently so they don't reset on boot
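If you prefer the command line, the GUI steps above map onto firewall-cmd roughly like this. The zone name "isolated" is an assumption; use whatever zone you picked in step 3:

```shell
# Create a dedicated zone for the virtual network (assumed name: isolated)
sudo firewall-cmd --permanent --new-zone=isolated
# Bind the bridge interface of the virtual network to that zone
sudo firewall-cmd --permanent --zone=isolated --add-interface=virbr0
# Target ACCEPT lets all traffic through in this zone (step 4 above)
sudo firewall-cmd --permanent --zone=isolated --set-target=ACCEPT
# Activate the permanent configuration now
sudo firewall-cmd --reload
```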

3. Enable Jumbo Frames (Optional)


This will reduce overhead and increase NFS's overall performance. Only activate this if you know that all attached guests will operate with an MTU of 9000! This is the case as long as you are using the VirtIO driver and have no physical device bridged to this network (or if you explicitly know your NIC supports jumbo frames).

  1. In Virt-Manager, stop isonet0
  2. In a terminal, type sudo EDITOR=nano virsh net-edit isonet0
    • Add <mtu size='9000'/>
    • Save and close
  3. Restart isonet0
  4. (Windows guest only!) Open the network settings
    • Open the Device settings of the NIC attached to the isolated network
    • Adjust the MTU value to 9000
    • Save and close all windows
  5. To verify, open cmd.exe
    • Run ping -f -l 8972 (9000 minus 28 bytes of IP/ICMP headers)
    • If ping fails with packet loss, try restarting your Windows guest
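The same MTU check can be done from the Linux host side. A sketch, where <guest-address> stands for the guest's IP on the isolated network:

```shell
# Send an 8972-byte payload with the "don't fragment" flag set.
# 8972 + 28 bytes of IP/ICMP headers = 9000, the configured MTU.
ping -M do -s 8972 <guest-address>
# If the path MTU is below 9000, ping reports an error such as
# "Message too long" instead of normal replies.
```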

  • XML after adding the MTU element (virsh net-dumpxml isonet0):

  <bridge name='virbr0' stp='on' delay='0'/>
  <mtu size='9000'/>
  <mac address='52:54:00:b0:52:0e'/>
  <domain name='isonet0'/>
  <ip address='' netmask=''>
      <range start='' end=''/>
  </ip>

  • Terminal, ip link:

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b0:52:0e brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 9000 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b0:52:0e brd ff:ff:ff:ff:ff:ff

Original post:

I am stuck getting NFS to work for KVM guests. They are provided with two NICs: one is a macvtap directly attached to my Intel Ethernet port, the second is a virtual bridge for isolated communication, over which I want to share files via NFS.

My networking expertise in Linux is rather fresh and I can't get a working connection between my host OS and the Windows 10 guest I would like to share data with.

The virtual bridge is on its own network, where I can ping the gateway address. However, I cannot get a response when pinging my Windows guest. There is no DHCP running; the guest recognizes the network and indicates "no internet access" (obviously).

NFS is working as far as I can tell: the exports listed by showmount -e show /home *, which seems correct. Connecting on the host using nfs://localhost/home works fine.

So, the isolated network virbr0 I created with Virt-Manager shows up when typing ip link in a terminal, and I can ping the gateway but none of the guests connected to it. Am I missing anything in my configuration, maybe firewall settings? I need your help.

The Libvirt documentation says that the isolated network is capable of communicating with both the guests and the host.


I'll write a more detailed one in a bit, but I think the issue is that your host also needs to be connected to, and have connectivity with, that same bridge.

Windows: ipconfig /all

Linux1: ifconfig
Linux2: ip link

I am assuming the real network is on one subnet and the virtual one will be on another. Check to make sure each relevant NIC has the correct statically assigned address. After that, as long as both the guest and host are connected to the same switch (isolated virtual bridge), ping should work after disabling firewalls.

Also: No "gateway" should be involved in your configuration, and pinging a "gateway" isn't going to let you ping whatever is behind it anyway.

  • The physical network uses DHCP
    • The host currently holds a lease on it
  • The virtual network has no DHCP, no routing or NAT
    • The Windows guest is connected to it
    • The gateway address can be pinged from both host and guest side; no idea what it's good for, though.

Wouldn't I expose the physical network to the virtual one then? I would have liked to avoid that.

Gathered the output for both machines in case you need it:

ipconfig /all

Ethernet adapter Ethernet:

Connection-specific DNS Suffix. . : lan
Description . . . . . . . . . . . : Red Hat VirtIO Ethernet Adapter
Physical Address. . . . . . . . . : 52-54-00-26-88-05
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
IPv6 Address. . . . . . . . . . . : fd50:6a20:212::2be(Preferred)
Lease Obtained. . . . . . . . . . : Tuesday, July 18, 2017 16:56:44
Lease Expires . . . . . . . . . . : Friday, August 24, 2153 23:28:49
IPv6 Address. . . . . . . . . . . : fd50:6a20:212:0:702b:7693:58d2:cbee(Preferred)
Temporary IPv6 Address. . . . . . : fd50:6a20:212:0:2cd0:c4d2:a7cb:a6d6(Preferred)
Link-local IPv6 Address . . . . . : fe80::702b:7693:58d2:cbee%7(Preferred)
IPv4 Address. . . . . . . . . . . :
Subnet Mask . . . . . . . . . . . :
Lease Obtained. . . . . . . . . . : Tuesday, July 18, 2017 16:56:46
Lease Expires . . . . . . . . . . : Wednesday, July 19, 2017 04:56:45
Default Gateway . . . . . . . . . :
DHCP Server . . . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 55727104
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-20-FD-AD-4A-52-54-00-26-88-05
DNS Servers . . . . . . . . . . . : fd50:6a20:212::1
NetBIOS over Tcpip. . . . . . . . : Enabled

Ethernet adapter Storage:

Connection-specific DNS Suffix. . :
Description . . . . . . . . . . . : Red Hat VirtIO Ethernet Adapter #2
Physical Address. . . . . . . . . : 52-54-00-09-02-54
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv6 Address. . . . . . . . . . . : fd80:128::2(Preferred)
IPv6 Address. . . . . . . . . . . : fd80:128::b4c2:19b1:c9d6:7887(Preferred)
Temporary IPv6 Address. . . . . . : fd80:128::6427:25f1:42f4:e709(Preferred)
Link-local IPv6 Address . . . . . : fe80::b4c2:19b1:c9d6:7887%9(Preferred)
IPv4 Address. . . . . . . . . . . :
Subnet Mask . . . . . . . . . . . :
Default Gateway . . . . . . . . . : fd80:128::1
DHCPv6 IAID . . . . . . . . . . . : 273830912
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-20-FD-AD-4A-52-54-00-26-88-05
DNS Servers . . . . . . . . . . . : fd80:128::1
NetBIOS over Tcpip. . . . . . . . : Enabled

Here is the Linux one:

ip addr

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

2: enp0s31f6: mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 2c:56:dc:3a:8f:42 brd ff:ff:ff:ff:ff:ff
inet brd scope global dynamic enp0s31f6
valid_lft 22824sec preferred_lft 22824sec
inet6 fd50:6a20:212::e8d/128 scope global
valid_lft forever preferred_lft forever
inet6 fd50:6a20:212:0:a0df:e132:9f45:4f17/64 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::172:45b7:43bc:aef4/64 scope link
valid_lft forever preferred_lft forever

3: virbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:32:dd:49 brd ff:ff:ff:ff:ff:ff
inet brd scope global virbr0
valid_lft forever preferred_lft forever
inet6 fd80:128::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe32:dd49/64 scope link
valid_lft forever preferred_lft forever

4: virbr0-nic: mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:32:dd:49 brd ff:ff:ff:ff:ff:ff

5: macvtap0@enp0s31f6: mtu 1500 qdisc fq_codel state UP group default qlen 500
link/ether 52:54:00:26:88:05 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe26:8805/64 scope link
valid_lft forever preferred_lft forever

6: vnet0: mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:09:02:54 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe09:254/64 scope link
valid_lft forever preferred_lft forever

  • enp0s31f6 is the physical NIC in my system
  • vnet0 appears upon booting the Windows guest, as does macvtap0@enp0s31f6, so I assume these are the NICs that connect to their respective networks
  • I am kinda confused about the existence of virbr0-nic; it appears upon starting the virtual switch.

So the issue is that a Windows guest (KVM) cannot connect to your host Linux over NFS. The Linux host has an NFS share.

For that to work, things need to look like this:

So: NFS_share - Linux - switch (virtual bridge) - Windows - NFS_client

So... why is a "gateway" involved in your config? If the point is to isolate sharing from the main network, the virtual bridge network should not have any "gateway."

The documentation talks about bridging the virtual switch to VMs by using DHCP and NAT. In that scenario, the VMs would be able to communicate through the virtual bridge since the "gateway" would be a virtual router operating NAT. If that is not intended, then NAT should not be involved, gateways should not be involved and of course dhcp/dns relays should not be involved.

It might be simpler to destroy the existing bridge, and create a new one if it was not created properly, rather than trying to fix config.

Docs: http://wiki.libvirt.org/page/TaskIsolatedNetworkSetupVirtManager

The Windows guest should pick up an address on one of its NICs from your DHCP server and ping your host over the real network.

At this point, it is important to test file sharing over NFS. If NFS file sharing does not work on the real network, then fix it before trying to get it to work over a virtual bridge.
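A quick way to test this over the real network, sketched with <host-address> standing in for the host's address there:

```shell
# On a Linux client: list what the NFS server exports
showmount -e <host-address>

# Try an actual mount to confirm NFS itself works
sudo mount -t nfs <host-address>:/home /mnt
ls /mnt
sudo umount /mnt
```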

Then ping Linux from Windows: ping
Then ping Windows from Linux: ping + CTRL + c

Adjust your addresses appropriately.

Once the guest (Windows) and host (Linux) are both connected to the correct isolated virtual bridge, it should be simple to get them to poke at each other. Assign both the Linux-side and Windows-side NICs static addresses and disable firewalls temporarily.

Linux static IP config:
mv /etc/network/interfaces /etc/network/interfaces.bak
nano /etc/network/interfaces

auto lo
iface lo inet loopback

# My IP description
# IPv4 address (placeholders below, substitute your own)
iface eth0 inet static
    address <static-address>
    netmask <netmask>

CTRL + o (save)
CTRL + x (exit)

service networking restart

Windows static address guide

Run ipconfig /all (Windows) and ip link (Linux) to make sure the addressing scheme is not wrong.

Then ping Linux from Windows: ping
Then ping Windows from Linux: ping + CTRL + c


  • Windows, by default, blocks ICMP echo requests (ping). Allow them through the Windows firewall or, better, just disable it while you get the configs sorted out.
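Instead of disabling the Windows firewall entirely, a single rule can allow pings. Run this in an elevated cmd.exe (the rule name is arbitrary):

```shell
rem Allow inbound ICMPv4 echo requests (type 8) from any remote address
netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow
```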

From your post:

Red Hat VirtIO Ethernet Adapter

Red Hat VirtIO Ethernet Adapter #2

Router: undefined (but probably

Address: none

macvtap0@enp0s31f6
Address: none

Address: none

virbr0 (switch/router/gateway)

So the solution to the problem should be obvious now by looking at the diagram I sent you and these addresses.

But first!

Remember that macvtap is a way to link a physical network adapter to a virtual bridge. It is not a NIC itself. macvtap0@enp0s31f6 thus means that it is currently bridging to the enp0s31f6 adapter, which is your physical one.

virbr0 is obviously the bridge itself, which is given an address by the KVM software in case you would like to configure it later to do NAT and DHCP and stuff.

Which means! that either virbr0-nic or vnet0 is the virtual network interface card that links Linux/NFS share/apps to that virtual switch. And... neither has an address. Maybe if one or the other had an address in the network, then it could communicate on that network. o_o!


Update 19.07.17 - 0:13 CEST: I could confirm that NFS works on the physical network. It must be related to the virtual network or the firewall then.

Update 19.07.17 - 0:46 CEST: It's definitely firewalld which was causing the trouble. On the Windows guest I can now mount the NFS directory without any more hassle.

Thanks for your help! Very detailed.

So far so good. Pings now work (which was firewall-related, as you mentioned before); NFS is not yet working. It might be due to the firewall as well, and this virbr0-nic is still a mystery.
Keep reading if you are interested.

My progress...

I created a new isolated network:

  • Network "isonet0"
    • DHCP enabled
    • NAT / Forwarding disabled
  • Windows 10 guest
    • Address:
    • Netmask:
    • Def. Gateway:


  • Host --> Guest: successful (Windows Firewall disabled)
  • Guest --> Host: successful

Output ip route:

default via dev enp0s31f6 proto static metric 100
dev enp0s31f6 proto kernel scope link src metric 100
dev virbr0 proto kernel scope link src

And this device...

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:b0:52:0e brd ff:ff:ff:ff:ff:ff

I tried:

  • Setting a manual IP address to
    • was pingable on above address, otherwise no effect
    • Resets itself upon host/bridge restart
  • Changing state to "UP"
    • Device remained in "DOWN" state no matter what

So NFS works on the physical network as do pings.

Pings also work on the virtual network, but NFS does not. From this random post it looks like virbr0-nic is a reserved NIC created by KVM. I don't really get the point of it since I thought that was what macvtap0 was for, but w/e.

My recommendations are as follows:

  1. Create a new NIC as per that random post: modprobe vnic.
  2. Add it to the virtual bridge: brctl addif isonet0 vnic02.
  3. Assign it a static address as per nano /etc/network/interfaces or ip addr add dev vnic02 or similar.
  4. Get NFS to work using that address.
    1. Disable any firewall temporarily.
    2. Sort out any NFS software configuration issues.
    3. Re-enable the firewall afterwards to sort out the firewall issues separately.
  5. Make sure address assignment and new bridge persist across reboots.

That is about the edge of what I am familiar with. The rest is specific to KVM and how it handles its NICs.

I got NFS to work and mount on boot on the Windows guest. Thanks for your help here!

I have made a new directory /home/public and bound my home folders into it (Pictures, Music, Videos, Downloads...):
mount --bind /home/karmek/x /home/public/x.
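To make such bind mounts persist across reboots, equivalent entries can go into /etc/fstab. A sketch for one folder; repeat per bound directory:

```
# /etc/fstab (illustrative): bind /home/karmek/Pictures into the exported tree
/home/karmek/Pictures  /home/public/Pictures  none  bind  0  0
```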

This is the NFS exports:
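The listing itself did not survive here, but based on the showmount -e output quoted earlier (/home *), a matching /etc/exports entry might look roughly like this; the options are illustrative, not necessarily what was actually used:

```
# /etc/exports (illustrative)
/home *(rw,sync,no_subtree_check)
```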


On Windows it's mounted at H:\. Permissions seem to work although I lost write permissions temporarily when mapping the multimedia folders. Just checking the respective box in NFS-Client settings is enough to restore it.

I can't hide certain files and folders like $RECYCLE.BIN and desktop.ini.
They appear on both host- and guest-side. Do you know a way to hide them?

Also I am considering running Samba instead of NFS, particularly for Windows. Could you give me some advice on which is faster and more convenient?


Unfortunately, I do not know of a way, but! there was a thread on this a while back:

Note: I have not read this: https://forum.level1techs.com/t/hidden-folders-files-moving-from-linux-to-windows-or-how-to-change-dot-to-hidden-attribute/117220

NFS is faster and should be used over Samba.

That said, Samba is easier to set up and I use it over NFS. Samba is single-threaded, meaning it performs really poorly on higher-bandwidth links (>100 Mbps) and low-end CPUs (like a Raspberry Pi's ARM CPU or Intel's Atom series) when compared to NFS. That said, both SMB and NFS perform terribly compared to network-layer block transfer protocols like iSCSI or FCoE. SMB is also notorious for security vulnerabilities.

For slow links (~100 Mbps), or if you are disk I/O limited, it should not matter. For anything faster, like setups involving faster HDDs, SSDs and Gigabit or higher networks, NFS is the better choice, albeit harder to configure.


The PowerShell script they posted in that thread did not help, unfortunately. Even right-clicking said files/folders and ticking the "hidden" checkbox has no effect on their visibility; it just gets ignored. It seems I have to live with it for now.
At least I can hide them on the host side by adding them to a .hidden file in each folder (even though this file then appears on the guest storage too ^^").

Very good, thanks! I gave iSCSI a short read. It's block device passthrough, right?

I am using ZFS on Linux under the hood, so the OS resides in a zvol on SSD and other installed data in another zvol on HDD, backed by a 50 GB SSD cache. Both are attached via VirtIO.

NFS makes it easy to maintain all my files in a single spot without needless copying. Security-wise it should be fine, as it's only served over the isolated virtual network. Performance is good too from what I can tell after a brief test (10 Gbit/s + jumbo frames :)).

Sigh... If I could only hide these Windows-specific files. But yeah, all my initial problems are resolved now. Thanks again @Peanut253!


I have added the solution to the OP so it will help someone in future.