Getting terrible ZFS performance with SMB transfers

I am not sure which category this falls under: HDD, software, or networking.

My setup is a VM with a USB enclosure (6 HDDs) in RAIDZ1, and from that I do SMB transfers to Windows. I see as little as 1 MB/s, up to 50 MB/s, depending on the time of day. I am not sure why it is so incredibly slow. My other ZFS arrays are not USB, so the enclosure being USB is a hint. But I am not sure how to confirm that is the issue, or whether there is a fix without doing away with the USB enclosure.

Does the speed issue occur for local transfers?
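
If you haven't already, something like this gives a rough local write number to compare against. The pool path is just a placeholder, and with compression enabled zeroes will inflate the result, so copying a real file is an even better test:

# Rough local write test into the pool (replace /tank with your pool's mountpoint)
dd if=/dev/zero of=/tank/ddtest.bin bs=1M count=4096 conv=fdatasync status=progress
# Clean up afterwards
rm /tank/ddtest.bin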

Are you possibly using SMR drives without realising it? SMR drives do not like ZFS or high speeds.
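
A quick way to check is to pull the drive model numbers and look them up against the manufacturer's CMR/SMR lists, since device-managed SMR drives usually don't advertise it themselves. /dev/sda is just an example device, and behind a USB enclosure you may need SAT pass-through:

# Show model and capacity for each disk, then look the model up on the vendor's site
smartctl -i /dev/sda | grep -E 'Model|Capacity'
# If the enclosure hides the identity data, try forcing SAT pass-through
smartctl -d sat -i /dev/sda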

Just something I forgot to mention: it is in a Proxmox VM. I will test a local transfer.

I am seeing not fast but reasonable speeds: 100 MB/s with a local write to another drive and 300 MB/s writing back. So the pool itself is working. I think it has to do with SMB.

I have run iperf3 and noticed I am getting 287 Mbits/sec going to the VM, and going from the VM to the desktop I'm getting 50 kb/s. I have tried a bridged NIC and a USB NIC. Both show about 100 to 300 Mbps from the VM to the desktop and 1 Gbps+ going from the desktop to the VM.
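
For anyone following along, the tests were roughly like this, with the desktop IP as a placeholder:

# On the Windows desktop
iperf3 -s
# From the VM: first VM -> desktop, then desktop -> VM with reverse mode
iperf3 -c <desktop-ip>
iperf3 -c <desktop-ip> -R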

That’s … weird. Just double-check that all the links are coming up as gigabit with ethtool.
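
Something like this on the Proxmox host and on the desktop should show it. The interface name is just an example (check ip link for the real one); a virtio NIC inside the VM won't report a meaningful physical speed:

# Replace eno1 with the actual interface name
ethtool eno1 | grep -E 'Speed|Duplex|Link detected'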

Did you use default SMB settings, or did you copy them from a random online article? Best practice for SMB options has changed a lot, so stick with the defaults where possible.
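
An easy way to see what you actually have is to dump the effective config and look for leftover tuning lines. The path assumes a stock Samba install on Linux:

# Print the effective Samba configuration after defaults are applied
testparm -s
# Look for old-style tuning that current guides generally say to drop
grep -nE 'socket options|aio read size|aio write size|write cache size' /etc/samba/smb.conf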

Okay, I moved the USB enclosure to my TrueNAS instance and it is hitting 300 MB/s, which is what I expect. The only thing is it isn't showing data for the other disks; it only shows one disk.

It was not just SMB; iperf was also showing extremely slow speeds. Apparently a lot of people are having VM network bandwidth problems with the newest Proxmox version.

I remember that I had similar issues, but then I stumbled across this

The relevant part is the section on disk caching → "Better Linux Disk Caching & Performance with vm.dirty_ratio"

It seems that Proxmox or Debian allows a lot of write caching in RAM, and once that bucket is full it takes ages to flush everything from RAM to disk, which causes the pain.

Maybe you could try the following sysctl settings at runtime

vm.dirty_background_ratio = 5
vm.dirty_ratio = 10

and verify whether this speeds up the SMB copy process.
If so, you can add the settings to your sysctl.conf to make them permanent.
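
For the record, applying them at runtime and then making them stick looks roughly like this on Debian/Proxmox:

# Apply immediately (lost on reboot)
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
# Confirm the running values
sysctl vm.dirty_background_ratio vm.dirty_ratio
# To make it permanent, add the two lines to /etc/sysctl.conf
# (or a file under /etc/sysctl.d/) and reload with:
sysctl -p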