How do I get gigabit speeds in a VM? (stuck at ~0.25 Gbps)

Maybe this needs to be done via PCIe passthrough, but I don't know how to use that, and as far as I can tell in the BIOS my board does not support IOMMU. However, the NIC I need is in my PCIe x16 slot, which goes directly to the CPU, so maybe it can still be done? I'm not even sure I need to do that.
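Before ruling passthrough in or out via the BIOS menus alone, it may be worth checking from the host whether the kernel actually initialized an IOMMU; a minimal sketch using the standard sysfs path (not something from this thread):

```shell
# If the kernel brought up an IOMMU (Intel VT-d shows as "DMAR" in dmesg,
# AMD as "AMD-Vi"), every device gets a group under /sys/kernel/iommu_groups.
# An empty or missing directory means PCIe passthrough is not available
# as currently configured (BIOS option off, or kernel cmdline missing it).
if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    echo "IOMMU groups present - PCIe passthrough may be possible"
else
    echo "no IOMMU groups - passthrough will not work as configured"
fi
```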

I am using iperf to check network performance.

Host OS: Ubuntu Server 22.04 64-bit (yes, I know this is the dev release right now)
Host is running an A6-3500 CPU and a GA-A55M-DS2 rev 1.1 motherboard, along with 8GB DDR3-1333 (9-9-9-24)

Guest OS: pfSense 2.6 64-bit

My host has 4 NICs available:

enp2s0
        onboard Realtek RTL8111/8168/8411
enp1s0f0 (VB NIC1)
        PCIe card (WAN; Right/top side)
        Intel 82571EB/82571GB
enp1s0f1 (VB NIC2)
        PCIe card (LAN; Left/bottom side)
        Intel 82571EB/82571GB
enp3s6
        PCI card, Intel 82541PI
$ lspci
...
01:00.0 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (rev 06)
01:00.1 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (rev 06)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
03:06.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)

Here is the VirtualBox VM's config:

$ VBoxManage showvminfo pfsense
Name:                        pfsense
Groups:                      /
Guest OS:                    FreeBSD (64-bit)
UUID:                        f17b3967-d9c5-462e-b0d3-d11a663d48c2
Config file:                 /home/chad/VirtualBox VMs/pfsense/pfsense.vbox
Snapshot folder:             /home/chad/VirtualBox VMs/pfsense/Snapshots
Log folder:                  /home/chad/VirtualBox VMs/pfsense/Logs
Hardware UUID:               f17b3967-d9c5-462e-b0d3-d11a663d48c2
Memory size:                 4096MB
Page Fusion:                 disabled
VRAM size:                   8MB
CPU exec cap:                100%
HPET:                        disabled
CPUProfile:                  host
Chipset:                     piix3
Firmware:                    BIOS
Number of CPUs:              2
PAE:                         enabled
Long Mode:                   enabled
Triple Fault Reset:          disabled
APIC:                        enabled
X2APIC:                      disabled
Nested VT-x/AMD-V:           enabled
CPUID Portability Level:     0
CPUID overrides:             None
Boot menu mode:              message and menu
Boot Device 1:               DVD
Boot Device 2:               DVD
Boot Device 3:               HardDisk
Boot Device 4:               Not Assigned
ACPI:                        enabled
IOAPIC:                      enabled
BIOS APIC mode:              APIC
Time offset:                 0ms
RTC:                         UTC
Hardware Virtualization:     enabled
Nested Paging:               enabled
Large Pages:                 disabled
VT-x VPID:                   enabled
VT-x Unrestricted Exec.:     enabled
Paravirt. Provider:          KVM
Effective Paravirt. Prov.:   KVM
State:                       running (since 2022-03-12T23:01:38.866000000)
Graphics Controller:         VBoxVGA
Monitor count:               1
3D Acceleration:             disabled
2D Video Acceleration:       disabled
Teleporter Enabled:          disabled
Teleporter Port:             0
Teleporter Address:          
Teleporter Password:         
Tracing Enabled:             disabled
Allow Tracing to Access VM:  disabled
Tracing Configuration:       
Autostart Enabled:           disabled
Autostart Delay:             0
Default Frontend:            
VM process priority:         default
Storage Controller Name (0):            SATA Controller
Storage Controller Type (0):            IntelAhci
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0):  30
Storage Controller Port Count (0):      30
Storage Controller Bootable (0):        on
SATA Controller (0, 0): /home/chad/VirtualBox VMs/pfsense/pfsense.vdi (UUID: 7fea4722-e33b-4d7f-ad59-1c55c41466ae)
NIC 1:                       MAC: 08002720924C, Attachment: Bridged Interface 'enp1s0f0', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: allow-all, Bandwidth group: none
NIC 2:                       MAC: 0800274F933B, Attachment: Bridged Interface 'enp1s0f1', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: allow-all, Bandwidth group: none
NIC 3:                       disabled
NIC 4:                       disabled
NIC 5:                       disabled
NIC 6:                       disabled
NIC 7:                       disabled
NIC 8:                       disabled
Pointing Device:             PS/2 Mouse
Keyboard Device:             PS/2 Keyboard
UART 1:                      disabled
UART 2:                      disabled
UART 3:                      disabled
UART 4:                      disabled
LPT 1:                       disabled
LPT 2:                       disabled
Audio:                       disabled
Audio playback:              disabled
Audio capture:               disabled
Clipboard Mode:              disabled
Drag and drop Mode:          disabled
Session name:                headless
Video mode:                  720x400x0 at 0,0 enabled
VRDE:                        enabled (Address 0.0.0.0, Ports 3389, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE port:                   3389
Video redirection:           disabled
VRDE property               : TCP/Ports  = "3389"
VRDE property               : TCP/Address = <not set>
VRDE property               : VideoChannel/Enabled = <not set>
VRDE property               : VideoChannel/Quality = <not set>
VRDE property               : VideoChannel/DownscaleProtection = <not set>
VRDE property               : Client/DisableDisplay = <not set>
VRDE property               : Client/DisableInput = <not set>
VRDE property               : Client/DisableAudio = <not set>
VRDE property               : Client/DisableUSB = <not set>
VRDE property               : Client/DisableClipboard = <not set>
VRDE property               : Client/DisableUpstreamAudio = <not set>
VRDE property               : Client/DisableRDPDR = <not set>
VRDE property               : H3DRedirect/Enabled = <not set>
VRDE property               : Security/Method = <not set>
VRDE property               : Security/ServerCertificate = <not set>
VRDE property               : Security/ServerPrivateKey = <not set>
VRDE property               : Security/CACertificate = <not set>
VRDE property               : Audio/RateCorrectionMode = <not set>
VRDE property               : Audio/LogPath = <not set>
OHCI USB:                    disabled
EHCI USB:                    disabled
xHCI USB:                    disabled

USB Device Filters:

<none>

Available remote USB devices:

<none>

Currently Attached USB Devices:

<none>

Bandwidth groups:  <none>

Shared folders:<none>

VRDE Connection:             not active
Clients so far:              2
Last started:                2022/03/12 23:02:41 UTC
Last ended:                  2022/03/12 23:17:32 UTC
Sent:                        0Bytes
Average speed:               0B/s
Sent total:                  0Bytes
Received:                    0Bytes
Speed:                       0B/s
Received total:              0Bytes

Capturing:                   not active
Capture audio:               not active
Capture screens:             0
Capture file:                /home/chad/VirtualBox VMs/pfsense/pfsense.webm
Capture dimensions:          1024x768
Capture rate:                512kbps
Capture FPS:                 25kbps
Capture options:             

Guest:

Configured memory balloon size: 0MB
OS type:                     FreeBSD_64
Additions run level:         0

Guest Facilities:

No active facilities.

I have installed virtualbox-dkms, virtualbox, and virtualbox-ext-pack on the host.

Note that the pfSense WAN port is not really WAN yet; that will not happen until everything is configured.

Odd, I'm using a Windows 10 Pro host along with a VMware guest also running Windows 10 Pro, and I just did some basic testing and am not seeing any problems with the virtual NIC.

VMware is bridged to the onboard Intel NIC using the I211 chipset, which is a gigabit LAN connection.

Hopefully somebody with more knowledge on the Linux side can help with this one.

Managed to get it better, but it's still too slow.

I found this: Virtualization — VirtIO Driver Support | pfSense Documentation

This got it to about ~500 Mbps, but I want full dual-link gigabit.

How are you measuring this?

By using iperf from a different physical box to the VM:
iperf -s on one side and iperf -c 10.0.0.XXX -d on the other

Drop VirtualBox and use KVM with virtio drivers.

+1 :slight_smile: I was testing KVM VM network speeds recently; I spent some time getting SR-IOV working in a KVM guest with 20 Gbit bonded NICs and got full speed as expected. Decided to try virtio-net just to see if it was that much slower: exactly the same speed and CPU utilization!

Well, that works much better, but performance is still subpar and the ping is slow (1 ms vs 0.35 ms, and ~75-85% of dual-link)

  • ping was just as slow in VirtualBox

Maybe I am doing something wrong? I converted my VDI to a qcow2 file, if that matters.

sudo virt-install --virt-type kvm --name pfsense --ram 4096 --vcpus 3 --disk kvm/images/pfsense.qcow2,bus=virtio,size=250,format=qcow2 --import --network bridge=br0 --network bridge=br1 --graphics vnc,listen=0.0.0.0 --noautoconsole --os-variant=freebsd12.3
$ cat kvm/host-bridge.xml 
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br1"/>
</network>
$ cat .bash_history | grep host-bridge
nano kvm/host-bridge.xml
virsh net-define kvm/host-bridge.xml
$ cat /etc/sysctl.d/bridge.conf
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0
$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
    enp1s0f0:
      dhcp-identifier: mac
      dhcp4: false
      dhcp6: false
    enp1s0f1:
      dhcp-identifier: mac
      dhcp4: false
      dhcp6: false
    enp2s0:
      dhcp-identifier: mac
      dhcp4: true
      dhcp6: false
    enp2s6:
      dhcp-identifier: mac
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [enp1s0f0]
      dhcp4: no
      dhcp6: no
      parameters:
        stp: true
        forward-delay: 4
    br1:
      interfaces: [enp1s0f1]
      dhcp4: no
      dhcp6: no
      parameters:
        stp: true
        forward-delay: 4
$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
# onboard NIC used by host
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 50:e5:49:d9:87:6b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.126/24 metric 100 brd 10.0.0.255 scope global dynamic enp2s0
       valid_lft 84355sec preferred_lft 84355sec
    inet6 fe80::52e5:49ff:fed9:876b/64 scope link 
       valid_lft forever preferred_lft forever
# Dual Port PCIe card (Guest pfsense WAN)
3: enp1s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master br0 state DOWN group default qlen 1000
    link/ether 00:15:17:be:13:e4 brd ff:ff:ff:ff:ff:ff
# Dual Port PCIe card (Guest pfsense LAN)
4: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
    link/ether 00:15:17:be:13:e5 brd ff:ff:ff:ff:ff:ff
# PCI card; potential use with a VLAN or something
5: enp3s6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:1b:21:c4:fa:08 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:e7:a1:b5:c2:ea brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fce7:a1ff:feb5:c2ea/64 scope link 
       valid_lft forever preferred_lft forever
7: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:b7:25:04:b6:f2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a8b7:25ff:fe04:b6f2/64 scope link 
       valid_lft forever preferred_lft forever
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:d1:a5:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
17: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:12:b6:a5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe12:b6a5/64 scope link 
       valid_lft forever preferred_lft forever
18: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:37:bc:f9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe37:bcf9/64 scope link 
       valid_lft forever preferred_lft forever

Give it either 2 or 4 vCPUs; 3 is a bit odd…
What is the CPU load in the VM and on the host when you're testing?
Also, are you routing or packet filtering?

Try disabling hardware offloading on your NIC. Some devices do not like the bridged traffic that comes from a VM. I had the same thing with a Broadcom 10GbE device last week due to a hardware bug that was corrupting frames.
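For anyone wanting to try that, a sketch using ethtool (interface name taken from this thread; the exact features worth toggling vary by driver):

```shell
# Show the current offload settings for the bridged NIC
ethtool -k enp1s0f1

# Turn off the offloads that most commonly misbehave with bridged VM
# traffic; re-enable them one at a time if throughput drops again.
sudo ethtool -K enp1s0f1 tso off gso off gro off
```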

@MadMatt please tell me that pun was intended, because I happen to have an ODD CPU, and I figured using more vCPUs than I have cores would be a bad idea.
Looks like if I just go over the bridged NIC host-to-guest I get full speed.

How do I even change KVM guest configs?
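For the record, libvirt keeps each guest definition as XML, so the usual way (standard virsh subcommands) is:

```shell
# Dump the current guest definition to stdout
virsh dumpxml pfsense

# Open the definition in $EDITOR; the XML is validated on save and
# changes take effect on the next VM start
virsh edit pfsense
```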

Using htop on the host, iperf uses 60% of one core at its peak, but running it on the guest pegs this CPU at over 90% on 3 cores (htop on the guest shows around 60/60/98 percent). Is there a way I can make that more efficient? I guess I can try to OC this chip or look for a used quad-core for it.

edit:
Found out how to set the core count; that reserved enough CPU for the host to handle full speed:
virsh setvcpus pfsense 2 --config

When I set up the VM I set the size of the 250GB disk to 150GB; how do I fix that?
I did this: qemu-img resize kvm/images/pfsense.qcow2 250G, but the guest does not see the space? Maybe a partition needs to be expanded?
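Right: qemu-img resize only grows the virtual disk; the partition table and filesystem inside the guest still need to be expanded. On a FreeBSD/pfSense guest with UFS that looks roughly like this (a sketch; the virtio disk name and partition index are assumptions, so check `gpart show` first and take a backup or snapshot):

```shell
# Inside the FreeBSD guest, after the qcow2 has been grown on the host:
gpart recover vtbd0      # move the GPT backup header to the new end of disk
gpart resize -i 2 vtbd0  # grow partition index 2 (adjust to your layout)
growfs /                 # expand the UFS root filesystem into the new space
```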

edit: CPU usage during iperf to guest: https://i.imgur.com/5QICoxt.png

Try the x86_64 / ext4 version of OpenWrt inside, just for reference during troubleshooting, or maybe even regular FreeBSD instead of pfSense, for the same reason.


As far as VMs and networking go, there's also a thing called vhost, which allows the host kernel to hand a packet to the VM without the qemu userland process having to read it from a tap device and relay it into the guest and back.
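With libvirt, virtio-net interfaces normally get vhost automatically, but it can be pinned explicitly (and multiqueue added) in the guest's interface XML; a sketch, assuming the network names used in this thread:

```xml
<interface type='network'>
  <source network='macvtap-net1'/>
  <model type='virtio'/>
  <!-- vhost keeps packet relay in the host kernel; queues='2' lets both
       vCPUs service the NIC (the guest driver must enable multiqueue too) -->
  <driver name='vhost' queues='2'/>
</interface>
```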


Another thing you can try is macvtap instead of a bridge.

Your host won't be able to communicate with the VM (by default, that is; unless you create a macvlan interface on the host), but overall it'll reduce some of the bridge processing overhead when your VMs are talking to the outside world.
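That macvlan workaround looks roughly like this (hedged sketch; the interface name matches this thread, but the address is a placeholder):

```shell
# Create a macvlan sibling on the same parent NIC so the host can reach
# VMs attached via macvtap: traffic between a macvtap and its parent is
# blocked, but macvtap<->macvlan in bridge mode is switched in the kernel.
sudo ip link add macvlan0 link enp1s0f1 type macvlan mode bridge
sudo ip link set macvlan0 up
sudo ip addr add 10.0.0.2/24 dev macvlan0   # placeholder address
```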

macvtap sounds good; I can just use a different physical port for the host

Still one more thing I want to try first, but I need to get to work.

I think you're beating a dead horse… the dead horse being pfSense virtualized on an insufficiently powerful CPU that happens to have a TDP of 65 watts… Just dump it and use something newer. Or keep it and try a Linux firewall solution. Or keep it and disable pf (if that can be done) on the interfaces you're routing through…

Some emulated OSes do not like an odd number of CPUs… also, it is not a good idea to allocate all CPUs of a hypervisor to a VM, especially if you then proceed to max out all of them…

That 65W TDP applies to the iGPU also; it only pulls about 45W running Prime95, measured at the 4-pin.
If I were to go out and pay for new hardware I would look for some 1st-gen AM4 Ryzen hardware, unless I can find some cheap Intel hardware that supports VT-d (suddenly regretting selling my 4690K…)

Anyway, how do I use macvtap with netplan? Regardless of whether it solves the underlying issue, I would still prefer to use it at least for the WAN side.

$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
    enp1s0f0:
      dhcp-identifier: mac
      dhcp4: false
      dhcp6: false
      optional: true
    enp1s0f1:
      dhcp-identifier: mac
      dhcp4: false
      dhcp6: false
      optional: true
    enp2s0:
      dhcp-identifier: mac
      dhcp4: true
      dhcp6: false
      optional: true
    enp2s6:
      dhcp-identifier: mac
      dhcp4: false
      dhcp6: false
      optional: true
  bridges:
    br0:
      interfaces: [enp1s0f0]
      dhcp4: no
      dhcp6: no
      optional: true
      parameters:
        stp: true
        forward-delay: 1
    br1:
      interfaces: [enp1s0f1]
      dhcp4: true
      dhcp6: no
      optional: true
      parameters:
        stp: true
        forward-delay: 1

If I can get it close I could get a better CPU for this old motherboard instead of going out and getting new stuff. The host has no issues and has the CPU power to handle more than one dual-link iperf run at a time, but running it in the VM kills it. Oddly, just running it from host to VM over the bridge, bypassing the physical card, works perfectly.

I don’t know about netplan.

Normally I uninstall whatever random distro-specific network setup helpers come out of the box (e.g. network-manager) and use networkctl and systemd-networkd. That's just because it's what I'm used to and what I used before netplan existed, and I rarely install servers (yay containers), so I never could justify learning netplan to myself.

Should I learn it, is there value in spending 30-60 minutes on a VM playing with it?

There's example 9 with macvtap in the manpage. Also, check out the
systemd.netdev manpage.
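For the systemd-networkd route, the macvtap pieces look something like this (a sketch based on the systemd.netdev/systemd.network manpages; file names are arbitrary):

```ini
# /etc/systemd/network/25-macvtap0.netdev -- define the macvtap device
[NetDev]
Name=macvtap0
Kind=macvtap

[MACVTAP]
Mode=bridge
```

```ini
# /etc/systemd/network/25-wan.network -- attach it to the physical NIC
[Match]
Name=enp1s0f0

[Network]
MACVTAP=macvtap0
```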

I think I got it; I did not need to do anything in netplan at all, just remove my bridge stuff.

$ cat kvm/host-macvtap0.xml
<network>
  <name>macvtap-net0</name>
  <forward mode="bridge">
    <interface dev="enp1s0f0"/>
  </forward>
</network>
$ cat kvm/host-macvtap1.xml
<network>
  <name>macvtap-net1</name>
  <forward mode="bridge">
    <interface dev="enp1s0f1"/>
  </forward>
</network>
$ sudo virsh net-list --all
 Name           State    Autostart   Persistent
-------------------------------------------------
 macvtap-net0   active   yes         yes
 macvtap-net1   active   yes         yes

$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 50:e5:49:d9:87:6b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.126/24 metric 100 brd 10.0.0.255 scope global dynamic enp2s0
       valid_lft 84406sec preferred_lft 84406sec
    inet6 fe80::52e5:49ff:fed9:876b/64 scope link 
       valid_lft forever preferred_lft forever
3: enp1s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 00:15:17:be:13:e4 brd ff:ff:ff:ff:ff:ff
4: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:15:17:be:13:e5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::215:17ff:febe:13e5/64 scope link 
       valid_lft forever preferred_lft forever
5: enp3s6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:1b:21:c4:fa:08 brd ff:ff:ff:ff:ff:ff
10: [email protected]: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 500
    link/ether 52:54:00:83:ed:30 brd ff:ff:ff:ff:ff:ff
11: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:bf:32:7a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:febf:327a/64 scope link 
       valid_lft forever preferred_lft forever

This lowered the CPU usage a bit, but performance did not improve; the guest maxes 1 core and puts the other core at around 60% (if I give it 3, the 3rd also gets ~60).
It looks like when I run iperf now, one core gets maxed out for the host and another for the VM.

Guess I should check how much of a difference an OC makes on this CPU.
Ping delta:

$ ping -c 4 10.0.0.126
PING 10.0.0.126 (10.0.0.126) 56(84) bytes of data.
64 bytes from 10.0.0.126: icmp_seq=1 ttl=64 time=0.365 ms
64 bytes from 10.0.0.126: icmp_seq=2 ttl=64 time=0.337 ms
64 bytes from 10.0.0.126: icmp_seq=3 ttl=64 time=0.339 ms
64 bytes from 10.0.0.126: icmp_seq=4 ttl=64 time=0.336 ms

--- 10.0.0.126 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3082ms
rtt min/avg/max/mdev = 0.336/0.344/0.365/0.012 ms
$ ping -c 4 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.874 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.777 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.779 ms
64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.783 ms

--- 10.0.0.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3070ms
rtt min/avg/max/mdev = 0.777/0.803/0.874/0.040 ms

iperf deltas

$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 10.0.0.50 port 5001 connected with 10.0.0.3 port 24378
------------------------------------------------------------
Client connecting to 10.0.0.3, TCP port 5001
TCP window size:  442 KByte (default)
------------------------------------------------------------
[ *2] local 10.0.0.50 port 50712 connected with 10.0.0.3 port 5001 (reverse)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-10.0178 sec   969 MBytes   811 Mbits/sec
[ *2] 0.0000-10.0328 sec   965 MBytes   807 Mbits/sec
[SUM] 0.0000-10.0178 sec  1.89 GBytes  1.62 Gbits/sec
[  3] local 10.0.0.50 port 5001 connected with 10.0.0.126 port 43800
------------------------------------------------------------
Client connecting to 10.0.0.126, TCP port 5001
TCP window size: 1.20 MByte (default)
------------------------------------------------------------
[ *4] local 10.0.0.50 port 32954 connected with 10.0.0.126 port 5001 (reverse)
[ ID] Interval       Transfer     Bandwidth
[  3] 0.0000-10.0312 sec  1017 MBytes   850 Mbits/sec
[ *4] 0.0000-10.0455 sec  1.09 GBytes   935 Mbits/sec
[SUM] 0.0000-10.0312 sec  2.09 GBytes  1.79 Gbits/sec

Overall that is close to native speed for the VM. I would like better ping, but if that is the price for using a VM, so be it.

Since I have no idea how to do this correctly… well, this works (I assume there is a better way):

$ virsh undefine pfsense
$ virt-install --virt-type kvm --name pfsense --ram 4096 --vcpus 2 --disk kvm/images/pfsense.qcow2,bus=virtio,size=250,format=qcow2 --import --network network:macvtap-net0,model=virtio --network network:macvtap-net1,model=virtio --graphics vnc,listen=0.0.0.0 --noautoconsole --os-variant=freebsd12.3
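A less destructive alternative than undefine + reinstall (assuming the virt-xml tool shipped with virt-install is available) is to modify the existing guest definition in place:

```shell
# Point the guest's first network device at the macvtap network without
# recreating the VM ("--edit 1" selects the first <interface> in the XML)
virt-xml pfsense --edit 1 --network network=macvtap-net0,model=virtio

# Or just edit the guest XML directly
virsh edit pfsense
```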