Poor NFS performance on ESXi Host

I have a server running ESXi connected to an NFS share. The server hosting the NFS share is itself a VM on the same ESXi host, with the drives passed through directly. Performance on the NFS share is very poor: VMs running from it are noticeably slow loading anything, even simple tasks. When I benchmark a virtual disk on a VM stored on that datastore, it tops out around 30 MB/s, often drops even lower, and appears to suffer from latency or connection issues. The drives perform as expected when tested locally on the NFS server. A Samba share on the same storage reaches around 100 MB/s, and the exact same NFS share mounted on another Linux machine gets 60-70 MB/s. Is there something I can try to improve the performance? And how could I test the NFS speed directly on the ESXi host to see whether that is where the problem lies?
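
For reference, the kind of host-side test I was imagining is SSH-ing into the ESXi shell and writing straight to the datastore path, something like this (the datastore and file names are just placeholders, and I'm not sure this is a valid way to benchmark it):

```
# From the ESXi shell (SSH enabled): confirm the NFS datastore is mounted
esxcli storage nfs list

# Rough sequential write test against the datastore path
# (busybox dd on ESXi; write 1 GiB of zeros, then read it back)
time dd if=/dev/zero of=/vmfs/volumes/nfs-datastore/ddtest.bin bs=1M count=1024
time dd if=/vmfs/volumes/nfs-datastore/ddtest.bin of=/dev/null bs=1M
rm /vmfs/volumes/nfs-datastore/ddtest.bin
```

`esxtop` can also show storage latency while a test like this runs, if latency is the suspect.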

IIRC it's because of sync writes. Check this post on the TrueNAS forums for more details.
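
A quick way to confirm is to compare sync vs. async small writes with fio on the NFS server itself. A rough sketch, assuming ZFS and using example pool/dataset names:

```
# Check the current ZFS sync setting (pool/dataset names are examples)
zfs get sync tank/vmstore

# Random 4k writes with an fsync after every write -- roughly the pattern
# ESXi's NFS client forces on the server
fio --name=syncwrite --directory=/mnt/tank/vmstore --rw=randwrite \
    --bs=4k --size=1g --ioengine=sync --fsync=1

# The same test without the fsyncs, for comparison
fio --name=asyncwrite --directory=/mnt/tank/vmstore --rw=randwrite \
    --bs=4k --size=1g --ioengine=sync
```

If the first run lands around your 30 MB/s ceiling and the second is dramatically faster, sync writes are your problem.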

I agree with @vivante: check whether sync is causing the problem. It can be turned off globally on the NFS server, or you can mount the shares async. Sync writes are useful, and they're part of why we love ZFS and SLOGs: a SLOG lets you keep that god-awful random sync write "performance" under control without dumping the feature.
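
If it does turn out to be sync, the ZFS side of the two options looks roughly like this (pool, dataset, and device names are placeholders, not your setup):

```
# Option 1: add a fast SLOG device so sync writes stay enabled but land
# on low-latency storage first (device path is a placeholder)
zpool add tank log /dev/disk/by-id/nvme-example_serial

# Option 2: disable sync on just the dataset backing the NFS datastore
# (in-flight data can be lost on power failure -- see the caveats below)
zfs set sync=disabled tank/vmstore

# Verify the setting took effect
zfs get sync tank/vmstore
```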

But if you can't compensate for the sync write penalty any other way, turning sync off (at the client, server, or filesystem level) and trading that integrity feature for reasonable performance may be the better solution.
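
For the server- and client-side variants, a sketch with example paths and addresses (note that as far as I know, ESXi's own NFS client always issues sync writes, so the client-side mount option only helps Linux clients):

```
# Server side (Linux NFS server): export with async so the server replies
# before data is committed to disk. /etc/exports entry:
#   /mnt/tank/vmstore 192.168.1.0/24(rw,async,no_subtree_check)
# then reload the export table:
exportfs -ra

# Client side (Linux only): async is the default, but it can be explicit
mount -t nfs -o rw,async nfs-server:/mnt/tank/vmstore /mnt/vmstore
```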

Besides the sync issue…

Virtual disks are always slower than bare-metal storage. Depending on the format and recordsize, qcow2, VMDK, or doing everything via iSCSI can perform quite differently. I settled on iSCSI shares for pretty much everything on my server, and it works well for me. But NFS vs. iSCSI, or qcow2 vs. VMDK, is something you usually have to test yourself to see what works best for you.
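
If you want to try the iSCSI route on a ZFS box, the storage side is just a zvol; the names and sizes here are made up, and the actual iSCSI target setup (targetcli on Linux, or the TrueNAS UI) is a separate step:

```
# Create a block device (zvol) for iSCSI; a smaller volblocksize tends to
# suit random VM I/O better than a dataset's default 128k recordsize
zfs create -V 200G -o volblocksize=16k tank/esxi-vol

# For comparison, a dataset backing an NFS datastore is often tuned the
# same way via recordsize
zfs set recordsize=16k tank/vmstore
```

Either way, benchmark your actual VM workload on both and keep whichever wins.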