Hey people, I’ve got an unusual problem that I can’t seem to figure out. In short, the upload speed of my VM is very poor, but only to the internet.
My setup is as follows:
Ubuntu Server 18.04 VM (3 cores, 10GiB RAM)
FreeNAS 11.3-U3.2 host (i5-2500, 20GiB RAM)
Netgear R7000 with DD-WRT, used as switch and AP (server is wired)
AVM FRITZ!Box 6490 Cable Modem & Router (300/20 Internet plan)
I have a Plex server running in Docker on the VM and recently noticed that I couldn’t get more than ~3Mbit/s upload without frequent buffering. I used to get my full 20Mbit/s, so I started investigating. After running speedtest-cli on multiple devices in my home network I discovered that every device except the VM gets the full 300/20 bandwidth. The VM gets the full download, but only ~3.5Mbit/s upload. Curiously enough, iperf3 to multiple machines on the local network gives the expected 900+ Mbit/s in both directions.
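For reference, the measurements behind these numbers looked roughly like the sketch below. `192.168.1.10` is a placeholder for another LAN host running `iperf3 -s`, not my actual setup:

```shell
# Sketch of the tests; 192.168.1.10 stands in for a LAN peer
# that is running "iperf3 -s".
LAN_PEER=192.168.1.10

# WAN path: speedtest-cli prints Ping/Download/Upload in --simple mode
if command -v speedtest-cli >/dev/null 2>&1; then
    speedtest-cli --simple || echo "speedtest-cli run failed"
else
    echo "SKIP: speedtest-cli not installed"
fi

# LAN path: iperf3 in both directions (-R reverses, so the peer sends)
if command -v iperf3 >/dev/null 2>&1; then
    iperf3 -c "$LAN_PEER" -t 10 || echo "iperf3 upload run failed"
    iperf3 -c "$LAN_PEER" -t 10 -R || echo "iperf3 download run failed"
else
    echo "SKIP: iperf3 not installed"
fi
```

Running both against the same targets is what separates a WAN-only problem from a general network problem.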
To resolve this issue I have tried restarting the VM, the host, the AP, and the modem. I also tried switching the virtual network adapter, changing the MAC address, and updating the VM’s kernel. Nothing made a difference.
This is all very weird, as it used to work just fine.
Does anyone have some more ideas on how to pinpoint or even solve the issue?
Plex really was just the indicator that something was off. All the numbers are from speedtest-cli and iperf.
But the Plex Dashboard tells the same story. It shows this while buffering for a second every 10 seconds.
Ok, so it’s definitely the VM and not the container, right? Confirmed that speedtest-cli on the VM is giving you the 3–3.5Mbit/s up, and that the other devices get the full 20Mbit/s, also with speedtest-cli, targeting the same server?
Mostly because when I got into this I didn’t know either of them, and I figured that if I was going to learn one, Docker is the more universal one. I did think about switching to the plugin though, hoping I’d gain GPU-accelerated encoding with it. I haven’t checked on that in a while. Is it still a PITA and not really supported, or has it gotten easier?
The plugin is officially supported by iXsystems, isn’t it? If I had to guess, I’d say that’s usually the way this is done, but I’m not sure offhand what the support for hardware acceleration looks like.
I’d be interested to see what you find though since I’ve also seen lower transfer rates with bhyve compared to jails, but it’s not as extreme as what you’re seeing, so I haven’t looked into it.
I use Emby, so idk about the Plex plugin and GPU. Are you able to pass the GPU through to the Ubuntu VM? I know they were going to add PCIe passthrough to the FreeNAS interface, but I haven’t checked on it in a while.
Ok, I’ve gotten one step further. I booted the VM from a Linux Mint 20 .iso, ran the same test, et voilà: 20.57Mbit/s. So this seems to have shifted from a VM/bhyve issue to a guest-OS issue. I still have no clue why, though…
ok, wow, I didn’t expect this to make a difference, but on the other hand I didn’t expect any of this anyway.
I just tested Ubuntu Server 18.04, Ubuntu Server 20.04, Linux Mint 19.3 (based on Ubuntu 18.04), and Linux Mint 20. Both 18.04-based OSs have the same poor upload but perfectly fine local performance; both 20.04-based OSs worked as they should. Since this is more of a workaround than a fix, I am going to dist-upgrade a clone of my VM and see what happens.
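One thing that might be worth diffing between a slow 18.04 guest and a fast 20.04 guest is the virtio-net offload state, since segmentation-offload behavior changes between kernels and broken TSO/GSO is a classic cause of upload-only slowdowns. This is only a hedged guess, and `enp0s3` is a placeholder interface name:

```shell
# Hypothetical check: compare segmentation-offload flags between guests.
# enp0s3 is a placeholder; substitute your VM's actual interface name.
IFACE=enp0s3

if command -v ethtool >/dev/null 2>&1 && [ -e "/sys/class/net/$IFACE" ]; then
    # ethtool -k lists the current offload flags (tso/gso among them)
    ethtool -k "$IFACE" | grep -E 'tcp-segmentation-offload|generic-segmentation-offload' \
        || echo "no segmentation-offload lines found"
    # To experiment (needs root, reverts on reboot):
    # ethtool -K "$IFACE" tso off gso off
else
    echo "SKIP: ethtool or $IFACE not available here"
fi
```

If the flags differ between the guests, toggling them with `ethtool -K` and rerunning speedtest-cli is a cheap experiment.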
vnet jails and bhyve VMs with a virtio interface (never use e1000) have to go through a bridge, which can be a limiting factor. Non-vnet jails use the host interface directly with no overhead, and VMs with a dedicated NIC, or a virtual function on a NIC configured for PCI passthrough, don’t have that limitation (but in exchange have to go out over the external network to reach network services on the host, and vice versa).
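To see which interfaces are actually hanging off that bridge, it can be inspected from the FreeNAS host shell. A minimal sketch, assuming the usual auto-created `bridge0` (the name can differ per system):

```shell
# Hypothetical inspection on the FreeNAS host; bridge0 is the commonly
# auto-created bridge name and may differ on your system.
BR=bridge0

if ifconfig "$BR" >/dev/null 2>&1; then
    # "member" lines show which tap (bhyve) and epair (vnet jail)
    # interfaces are attached; MTU mismatches also show up here
    ifconfig "$BR" | grep -E 'member|mtu'
else
    echo "SKIP: no $BR interface on this machine"
fi
```

A VM whose tap interface is a member of the bridge is on the bridged path described above; one using PCI passthrough won’t appear here at all.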