Hello everyone!
Recently I built a computer for my 3D / VFX work, and I wanted to make what is basically a small render farm inside a single machine. I have a very simple setup.
- 1x 1TB nvme - Unraid array cache disk (VM Vdisks are located here)
- 1x 2TB nvme - Unassigned disk (only disk inside Zpool “Tank”)
- 1x 5TB HDD - Unraid array disk (Will only be used as backup for 2TB nvme)
- Unraid with 2 VMs (1x Windows 10, 1x Ubuntu)
My initial thought is to have VMs on 1TB and data I’m working with on 2TB nvme with regular backups to HDD.
I created ZFS pool called “Tank” in unraid using ZFS plugin with single 2TB Nvme.
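For reference, a single-disk pool like this boils down to something the plugin likely ran for me; a rough sketch (device name taken from the zpool status below, the dataset tuning is my own assumption for large VFX media files):

```shell
# Hypothetical recreation of the single-disk pool "tank".
# ashift=12 aligns writes to 4K NVMe sectors.
zpool create -o ashift=12 -m /tank tank /dev/nvme1n1

# Optional: a dataset tuned for large sequential media files
# (assumption: mostly big EXR/cache files, not small random I/O).
zfs create -o recordsize=1M -o compression=lz4 tank/projects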
I’d like to access the 2TB nvme drive from both VMs at the same time, with the best performance possible, ideally using the ZFS ARC to serve reads directly from RAM.
Using br0
So far I tried to map the disk with NFS (unsuccessfully). I was able to mount it with SMB, but only got ~150MB/s read speed. With more tweaking (setting the MTU to 9198 and (maybe) enabling RSS) I was able to get to ~500-600MB/s.
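For the NFS attempt, a common failure mode is a missing or too-strict export on the host. A minimal sketch of what the export and the Ubuntu-side mount might look like (the subnet is an assumption based on the 192.168.122.1 virbr0 address in my SMB config below):

```
# /etc/exports on the Unraid host (hypothetical subnet)
/tank 192.168.122.0/24(rw,async,no_subtree_check,no_root_squash)

# /etc/fstab on the Ubuntu VM (large rsize/wsize for big sequential reads)
192.168.122.1:/tank  /mnt/tank  nfs  rw,nfsvers=4.2,rsize=1048576,wsize=1048576  0  0
```

After editing /etc/exports, `exportfs -ra` re-reads it without restarting the NFS server.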
Using virbr0
I was able to get to ~300MB/s with the MTU set to 1500; if I set it higher, I’m not able to access the share (it keeps loading forever).
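Jumbo frames over virbr0 typically hang like this when the MTU doesn’t match end to end: the bridge, the VM’s tap device, and the guest NIC all have to agree. libvirt can advertise the MTU to a virtio-net guest directly; a sketch of the relevant piece of the VM’s XML (network name is the libvirt default, 9000 is an assumed jumbo value):

```xml
<!-- In the VM definition: virtio-net with a host-advertised MTU -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <mtu size='9000'/>
</interface>
```

The bridge itself also needs to match, e.g. `ip link set virbr0 mtu 9000` on the host, or the oversized frames get dropped silently.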
Ideally I’d like to see RAM-like speeds / latency thanks to the ARC, but it would be lovely if at least NVMe speeds worked.
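To check whether reads are actually being served from the ARC rather than the NVMe, the raw counters can be watched while a render reads the share; a sketch of what I’d look at on the Unraid host (ZFS on Linux exposes these via procfs):

```shell
# ARC size and hit/miss counters; a rising hit count during re-reads
# means the data is coming from RAM.
awk '/^(size|hits|misses) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Confirm the dataset caches both data and metadata (the default is "all")
zfs get primarycache tank
```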
So the question is, how could I set this up so it would work like a charm?
I don’t care if it’s SMB, NFS or anything else, as long as I don’t have to drop Unraid or build a separate storage machine.
Thank you all in advance.
Specs:
AMD TR 3970x (32 core)
256GB RAM
2x GPU passthrough to VMs
Here are some of my configs and settings:
smb-extra.conf (Unraid SMB config; every commented-out line in this config was also tested)
root@Tower:/boot/config# cat smb-extra.conf
#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end
[speedy]
path = /tank
browseable = yes
guest ok = yes
writeable = yes
create mask = 0775
directory mask = 0775
valid users = marek
[global]
read raw = Yes
write raw = Yes
#socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
#min receivefile size = 16384
use sendfile = true
#aio read size = 4096
#aio write size = 16384
server multi channel support = Yes
interfaces = "192.168.122.1;capability=RSS,speed=1000000000"
#log level = 3
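When iterating on smb-extra.conf like this, each change can be validated before restarting Samba; `testparm` parses the merged config and prints the effective values, which also shows whether lines like the RSS `interfaces` hint were accepted:

```shell
# Check the merged Samba config for syntax errors and dump effective settings
testparm -s /etc/samba/smb.conf
```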
root@Tower:/boot/config# zpool status
  pool: tank
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  nvme1n1   ONLINE       0     0     0

errors: No known data errors