Unraid RDMA inside Windows 10 VM

Hello everyone!
Recently I built a computer for my 3D / VFX work and I wanted to make basically a small render farm inside a single machine. I have a very simple setup.

  • 1x 1TB NVMe - Unraid array cache disk (VM vdisks are located here)
  • 1x 2TB NVMe - Unassigned disk (only disk inside the zpool “Tank”)
  • 1x 5TB HDD - Unraid array disk (will only be used as a backup for the 2TB NVMe)
  • Unraid with 2 VMs (1x Windows 10, 1x Ubuntu)

My initial thought is to have the VMs on the 1TB drive and the data I’m working with on the 2TB NVMe, with regular backups to the HDD.
I created a ZFS pool called “Tank” in Unraid using the ZFS plugin, with the single 2TB NVMe as its only device.

I’d like to access the 2TB NVMe drive from both VMs at the same time, with the best performance possible, ideally using the ZFS ARC so cached reads come straight from RAM.
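
From what I’ve read, the ARC ceiling can also be set explicitly via the zfs_arc_max module parameter, so a big chunk of the 256GB could be dedicated to caching the pool. A rough sketch of what I assume that looks like on Unraid (the 128GiB value is just an example, not something I’ve settled on):

# Example only: allow the ARC to grow to 128GiB (value in bytes, resets on reboot)
echo 137438953472 > /sys/module/zfs/parameters/zfs_arc_max
# Check the current ARC size and ceiling
grep -E "^(size|c_max) " /proc/spl/kstat/zfs/arcstats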

Using br0
So far I tried to map the disk with NFS (unsuccessfully). I was able to mount it with SMB but only got ~150MB/s read speed. With more tweaking (setting the MTU to 9198 and (maybe) enabling RSS) I got to ~500-600MB/s.
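
In case it’s useful, a rough way to confirm the jumbo frames actually make it end to end (the interface name and VM IP below are just placeholders; the ping payload is the MTU minus 28 bytes of IP/ICMP headers):

# Set the bridge MTU and verify with a non-fragmenting ping to the VM
ip link set dev br0 mtu 9198
ping -M do -s 9170 192.168.1.50   # 9170 = 9198 - 28; use the VM's actual IP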

Using virbr0
I was able to get to ~300MB/s with the MTU set to 1500. If I set it higher, I’m not able to access the share (it keeps loading forever).
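
My guess is the libvirt network behind virbr0 also needs to be told about the larger MTU, otherwise the VM side stays at 1500 and bigger frames just get dropped. If I understand the libvirt docs right, something along these lines should do it (the stock NAT network is called “default”):

# Add an MTU element to the libvirt network definition and restart it
virsh net-edit default
#   <network>
#     ...
#     <mtu size="9000"/>
#   </network>
virsh net-destroy default
virsh net-start default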

Ideally I’d like to see RAM speeds / latency thanks to the ARC, but it would be lovely if I at least got NVMe speeds.

So the question is: how could I set this up so it works like a charm?
I don’t care if it’s via SMB or NFS or anything else, as long as I don’t have to ditch Unraid or build a separate computer for storage.

Thank you all in advance.

Specs:
AMD Threadripper 3970X (32 cores)
256GB RAM
2x GPU passthrough to VMs


Here are some of my configs and settings:

smb-extra.conf (Unraid SMB config; every commented-out line in this config was tested)

root@Tower:/boot/config# cat smb-extra.conf 
#unassigned_devices_start
#Unassigned devices share includes
   include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end

[speedy]
path = /tank
browseable = yes
guest ok = yes
writeable = yes
create mask = 0775
directory mask = 0775
valid users = marek

[global]
read raw = Yes
write raw = Yes
#socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
#min receivefile size = 16384
use sendfile = true
#aio read size = 4096
#aio write size = 16384
server multi channel support = Yes
interfaces = "192.168.122.1;capability=RSS,speed=1000000000"
#log level = 3
root@Tower:/boot/config# zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          nvme1n1   ONLINE       0     0     0

errors: No known data errors
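
For completeness, the effective Samba settings can be double-checked with testparm, to make sure the [global] additions are actually being picked up; something like:

# Dump the effective Samba config and look at the relevant tuning options
testparm -s 2>/dev/null | grep -Ei "multi channel|interfaces|sendfile"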

Hmm, with SMB it’s a bit finicky, documentation isn’t exactly plentiful, but this Reddit post might help if you haven’t come across it yet. You’re going to have to sift through the comments and experiment a bit with the settings though.

For NFS it should kinda work out of the box, but you do need to have the RDMA services running; not sure how that works over Ethernet as I’ve only ever used it on physical InfiniBand hardware. But if the requirements are met then you can just set up the share like this in fstab:

192.168.x.x:/mnt/backup             /mnt/backup              nfs             rdma,port=20049,mountvers=3,rw  0 0
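
On the server side the NFS RDMA transport also has to be loaded and listening; from memory it was something along these lines on my InfiniBand box (module names and steps might differ for RoCE / plain Ethernet):

# Load the NFS-over-RDMA transport modules
modprobe svcrdma
modprobe xprtrdma
# Tell the NFS server to also listen for RDMA on port 20049
echo "rdma 20049" > /proc/fs/nfsd/portlist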

Never had much luck with NFSv4 though, afaiu the support isn’t that great. Didn’t really have a need for v4 features so didn’t bother digging much deeper at the time.

Oh yes, and for Windows 10: if you don’t have “Pro for Workstations” you can forget about RDMA (SMB Direct), it’s not available in “regular” Windows 10 or Windows 10 Pro.

Thank you marelooke for the reply.
I was reading through the Reddit post and found a better MTU setting (9014), which boosted it to ~400-600MB/s.
Also, I didn’t know about the Win 10 Pro / Pro for Workstations SMB Direct limitation. I upgraded to Win 10 Pro for Workstations. It still didn’t get SMB Direct working, I guess.

I would rather have 2x Windows than 2x Ubuntu, so making this work on Windows is a priority for me.

Update:
I found out that even copying from the 1TB drive to itself was limited to around 700MB/s, so I created a new config in Unraid, changed the 1TB cache drive to an Unassigned drive, and now run the VMs from that unassigned drive instead. Now I’m getting around 1GB/s locally, but if I copy files via SMB it still gets stuck at max ~800MB/s.
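
A rough way to figure out whether the pool itself or the SMB path is the limit would be to benchmark directly on the Unraid host, something like this (fio would need to be installed; the file name and sizes are just examples):

# Lay down a 10GB test file on the pool, then read it back locally (no SMB involved)
fio --name=prep --filename=/tank/fio-test --rw=write --bs=1M --size=10G
fio --name=seqread --filename=/tank/fio-test --rw=read --bs=1M --size=10G
# A second read pass should come almost entirely out of the ARC, i.e. close to RAM speed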