Setting up a network bridge between two Proxmox VMs

I have a server running Proxmox 8.1 where I’m hosting two VMs: one TrueNAS SCALE instance and one Ubuntu Server instance.
They both currently use the vmbr0 bridge that Proxmox creates by default, which lets the VMs reach the network through the in-use NIC, but traffic between them seems to be capped at the NIC’s physical speed of 10 Gbps.
I’m trying to set up a new bridge between these two VMs so that their traffic isn’t tied to the physical NIC, hopefully getting higher bandwidth between them.

First, is getting higher speeds through a bridge without an assigned NIC even possible, or am I wasting my time?
And second, how would I go about doing this? I’m having trouble finding information on how to assign static IP addresses to the interfaces on this bridge. This is what I’ve tried so far:

  1. Created a new bridge interface in Proxmox called vmbr1 without any settings configured
  2. Applied the new network settings and rebooted Proxmox just to be safe
  3. Added the new vmbr1 bridge to both VMs
  4. In TrueNAS SCALE, added a new bridge and selected the NIC attached to vmbr1 as the bridge member (in my case it appears as ens19 / enp0s19)
  5. Tried assigning a static IP to the Ubuntu server through nmcli and ip addr add

The last two steps are where things fall apart: neither VM can see the other. I’m not sure whether I need to configure static IPs for the VMs within Proxmox first, or whether there’s a better way to assign the bridge-facing interface a static IP inside the VMs themselves.
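
For reference, a port-less bridge created in the GUI (step 1) normally shows up as a stanza like this in /etc/network/interfaces on the Proxmox host, and step 3 can also be done from the host shell with qm; the VM ID 100 below is just a placeholder:

    # /etc/network/interfaces on the Proxmox host (excerpt)
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports none    # no physical NIC attached to this bridge
        bridge-stp off
        bridge-fd 0

    # attach vmbr1 to a VM as a second virtio NIC (VM ID 100 is a placeholder)
    qm set 100 --net1 virtio,bridge=vmbr1

The bridge itself doesn’t need an IP on the host for the two VMs to reach each other; the addresses live inside the guests.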

Direct VM-to-VM communication shouldn’t be bound to the bridge’s reported speed; that value is just copied over from one of the active ports.

If you put your VMs only on the vmbr1 bridge you effectively cut them off from the rest of your network. It’s better to give each VM a secondary interface on that vmbr1 bridge and configure static IP addresses in a different subnet (if you want to pursue this testing).
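
As a concrete example of that, assuming the second NIC shows up as ens19 inside the Ubuntu guest (as the OP mentions) and picking 10.10.10.0/24 as the private subnet, a minimal netplan snippet would be roughly:

    # /etc/netplan/60-vmbr1.yaml on the Ubuntu VM (file name and addresses are examples)
    network:
      version: 2
      ethernets:
        ens19:
          dhcp4: false
          addresses:
            - 10.10.10.2/24
    # apply with: sudo netplan apply

On the TrueNAS side, the matching interface would get a static address such as 10.10.10.1/24 through its network settings, with no gateway on either interface so normal traffic keeps using the primary NIC.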

You technically don’t even need a second bridge to make this work. Just create a second virtual network card for both virtual machines and assign it a VLAN tag of, let’s say, 10. Both VMs will still need a static IP assigned to the secondary NIC inside the guest.

Your maximum transfer speed between the two virtual NICs will depend on how powerful your CPU is, I believe. On my Proxmox setup I was able to get around 4-5 GBytes/s in iperf.
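
If you want to reproduce that kind of measurement, a typical iperf3 run over the private subnet looks like this (the address is a placeholder for whatever you assign to the second NIC):

    # on one VM (server side)
    iperf3 -s

    # on the other VM (client side), pointing at the first VM's private IP
    iperf3 -c 10.10.10.1 -P 4 -t 30

The -P 4 option runs four parallel streams, which usually gets closer to the virtio limit than a single stream does.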

Something like this. As a bonus, you can access this VLAN through your physical port and a smart switch if you want to.
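
A rough sketch of that tagged-second-NIC variant from the Proxmox shell, assuming VM IDs 100 and 101 and VLAN 10 (all placeholders):

    # give each VM a second virtio NIC on vmbr0, tagged with VLAN 10
    qm set 100 --net1 virtio,bridge=vmbr0,tag=10
    qm set 101 --net1 virtio,bridge=vmbr0,tag=10

Inside the guests the new NICs still need static IPs in a shared subnet, exactly as with the separate-bridge approach, and a VLAN-aware switch with tag 10 on the uplink port can join the same segment from outside the host.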

I spent the past month running different tests such as:

  • Reading/Writing between two VMs on the same server
  • Reading/Writing between the storage and an external machine over a 10 Gbps LAN
  • iperf3
  • Using different storage types (ZFS RAIDz1, RAIDz2, striped)
  • Enabling/disabling ZFS features (L2ARC, SLOG, async/sync writes, etc.; see the commands sketched after this list)
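
For anyone repeating these tests, the ZFS knobs I’m referring to are the standard pool/dataset properties; pool, dataset, and device names below are placeholders, and on TrueNAS SCALE they are normally changed through the UI rather than the shell:

    # force or disable synchronous writes on a dataset
    zfs set sync=always tank/share
    zfs set sync=disabled tank/share

    # add or remove an L2ARC cache device
    zpool add tank cache /dev/nvme6n1
    zpool remove tank /dev/nvme6n1

    # add a dedicated SLOG device
    zpool add tank log /dev/nvme7n1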

In the end, the bottleneck seems to be ZFS itself, regardless of what optimizations are done. In the best-case scenario, with my six WD SN850X 4TB drives in a ZFS stripe, L2ARC disabled, and only sync writes allowed, I could not exceed a peak of 10 Gbps.

I know it’s ZFS (or possibly TrueNAS SCALE or SMB), as I created an Ubuntu VM with a non-ZFS stripe setup and reached about 12 Gbps.
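
For anyone wanting to repeat that comparison, one way to build a similar non-ZFS stripe in an Ubuntu VM is a plain mdadm RAID0 (device names and mount point are placeholders, and RAID0 has no redundancy, so it’s for testing only):

    # create a 6-disk RAID0 stripe, format it, and mount it (example devices)
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=6 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
        /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /mnt/stripe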

In the end, I don’t mind being capped at 10 Gbps; just knowing the root cause is enough peace of mind for me. Maybe as more updates come to TrueNAS SCALE and OpenZFS, the performance will improve.