Any love for FreeBSD? -- A short story from using to contributing

MWL books are awesome! I have copies of FreeBSD Jails, SSH & ZFS mastery (signed by Allan Jude). I’ve been meaning to pick up SNMP mastery but haven’t been to any BSD conferences because of the thing. Hopefully that changes in May when BSDCan 2023 happens in Ottawa.


The specific issue since 2012: Virtualization — VirtIO Driver Support | pfSense Documentation

Windows also turns off hardware checksum offload by default due to problems with various cards, and the fact that for the past decade or so CPUs have been fast enough to make it irrelevant.

You’d think that, and I thought that too, but there’s so much context switching that it’s actually horribly slow. Low-power and fanless CPUs like the J4125 (arguably a decade-old architecture) can only do about 200 Mbps (a bit north of 300 if you disable pf) and latency gets horrible. Not a problem on Linux (2.5 Gbps with CPU to spare). It works if you can use SR-IOV or otherwise pass through the whole card, which is practically cheating, and the problem is still there on newer hardware too: faster, but comparatively very slow.

Anyway, I don’t really care, I’ve made up my mind for now, and FreeBSD is a wonderful historical project / oddity to me, and I really don’t want to throw more shade.

I’d like to hear more about why @diizzy and folks still like using it, and what unique and quirky features make it truly stand out in a good way… did I mention SIGINFO?

Other reasons MS have disabled TCP offload

basically (from tfa)

  • If there is a flaw in the TCP implementation in the network card, for example a security issue, you are potentially looking at a firmware update on the network card, or worse, you may be stuck with it, depending on the card implementation and the vendor’s support policy for the network card.
  • Under heavy network loads, performance may actually drop, because you’re limited to the resources on the network card, which may be a bottleneck relative to a fast operating system stack with fast, available processor cores.
  • The cost for all TCP connection offloading is fixed; there’s no way for the operating system to optimize specific use cases. The feature assumes that the fixed cost will be offset by the CPU savings, and thus that there will be an overall improvement in performance. However, improvements in processor performance combined with what real-life TCP workloads look like suggest that, in 2017, ~99% of real-life connections won’t send enough data for the performance arithmetic to work out.
  • The NIC code wasn’t necessarily written with TCP in mind; thus, not all TCP features are implemented. For example, the TCP performance enhancement known as Selective Acknowledgement (RFC 2018, from 1996 [!!]) can’t be used with TCP Chimney.

All sound reasons.

I have no issues with performance at all; in fact, I have no problem pushing line speed off a RockPro64 (RK3399) with a dual-port Intel NIC on FreeBSD 13.1, so I have no idea what’s going on without further information. Then again, I don’t use pfSense.

@thro
While it’s true that it can also work just fine, it can certainly be worth disabling it along with TSO (TCP Segmentation Offload) and LRO (Large Receive Offload) if you’re experiencing issues.
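On a plain FreeBSD guest that would look roughly like the sketch below; the interface name and address are placeholders, and the loader tunables (if I remember their names right) are the driver-wide equivalent of what pfSense and OPNsense expose as checkboxes in their GUIs.

    # Disable checksum offload, TSO and LRO at runtime on a virtio NIC (name assumed)
    ifconfig vtnet0 -txcsum -rxcsum -tso -lro

    # Make it persistent in /etc/rc.conf (example static address)
    ifconfig_vtnet0="inet 192.0.2.10/24 -txcsum -rxcsum -tso -lro"

    # Or turn it off for the whole vtnet driver at boot via loader tunables
    hw.vtnet.csum_disable="1"
    hw.vtnet.tso_disable="1"
    hw.vtnet.lro_disable="1"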


It’s reproducible on vanilla FreeBSD guests as well as on pfSense; give virtio a try.

I only have one VM using virtio and I can’t reproduce it on that one. All others are either using Hyper-V or running bare metal, sorry.

Have to concur here. I have FreeBSD 13.1 running with VirtIO, different VLANs, etc. on a Proxmox host. Each of the VMs can ping the others and access services on them. Improvements were made to the virtio driver in 13.x; I believe pfSense is still based upon FreeBSD 12.3.


What do you use for creating the traffic?

I have pfSense 2.6.0-RELEASE (FreeBSD 12.3-STABLE) running on Proxmox VE 7.3-3 with multiple VirtIO NICs and two vCPUs (Intel Core i3-10100).

When running iperf3 between two FreeBSD VMs on different subnets/VLANs, I get 2.61 Gbps on average with peaks over 2.83 Gbps.

When running iperf3 between two FreeBSD VMs on the same subnet/VLAN, I get 4.36 Gbps on average with peaks over 4.64 Gbps.

Both VMs have one vCPU (also Intel Core i3-10100) and a single VirtIO NIC.
The hardware isn’t really comparable, but I could of course test this with two of my HP T620 Thin Clients.

I’ll try again these coming days and compare.

Two bridges on the Linux host, two virtio NICs into the guest, and the guest doing basic forwarding and NAT (enable routing, and forward TCP port 5201 back to the host on the inner bridge).

Then iperf3 -s and iperf3 -c <ip-on-vm> -P4, both on the host.
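For anyone who wants to reproduce it, a minimal sketch of the guest side on plain FreeBSD follows; the interface names, the 10.0.1.1 host address and the macro names are made up, so adjust them to your bridges.

    # /etc/rc.conf: make the guest route and start pf
    gateway_enable="YES"
    pf_enable="YES"

    # /etc/pf.conf
    ext_if = "vtnet0"      # NIC on the first host bridge (client side)
    int_if = "vtnet1"      # NIC on the second host bridge (server side)
    server = "10.0.1.1"    # host's address on the second bridge (assumed)

    # forward the iperf3 port hitting the guest's outer address to the host...
    rdr pass on $ext_if inet proto tcp from any to any port 5201 -> $server
    # ...and source-NAT that traffic so the replies come back through the guest
    nat on $int_if inet proto tcp from any to $server port 5201 -> ($int_if)

    pass all

The source NAT on the inner interface is only there because both iperf3 endpoints live on the same host; without it the server would try to answer the client directly instead of back through the guest.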

I don’t know what to expect from a relatively modern i3-10100, but definitely more than 10 Gbps, since there are no physical interfaces involved.


@guru4gpu could I hassle you to spin up an OpenWRT VM and try the same iperf3 test, to compare and see what you get with it?
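On the OpenWRT side, assuming the default lan/wan zones, something like this should let the forwarded iperf3 traffic through when the server sits behind the lan zone (just a sketch; depending on which side the server lands on and whether masquerading is enabled, a port forward/redirect may be needed instead):

    # Allow forwarded TCP 5201 from the wan zone to the lan zone
    uci add firewall rule
    uci set firewall.@rule[-1].name='Allow-iperf3'
    uci set firewall.@rule[-1].src='wan'
    uci set firewall.@rule[-1].dest='lan'
    uci set firewall.@rule[-1].proto='tcp'
    uci set firewall.@rule[-1].dest_port='5201'
    uci set firewall.@rule[-1].target='ACCEPT'
    uci commit firewall
    /etc/init.d/firewall restart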


Here are my results:

Client VMs: 1 vCPU, 1 GiB RAM
OpenWRT: 4 vCPU, 256 MiB RAM

Debian 11 - bridge - Debian 11:

  • single stream: 24.3 Gbps
  • 4 parallel streams: 21.5 Gbps

FreeBSD 13.1 - bridge - FreeBSD 13.1:

  • single stream: 27.4 Gbps
  • 4 parallel streams: 21.8 Gbps

Debian 11 - OpenWRT - Debian 11:

  • single stream: 17.4 Gbps
  • 4 parallel streams: 15.7 Gbps

FreeBSD 13.1 - OpenWRT - FreeBSD 13.1:

  • single stream: 17.8 Gbps
  • 4 parallel streams: 16.4 Gbps

I have no idea why my first iperf3 test with FreeBSD was 6.3x slower; I will have to look into that, but I remember doing such a test with similar results a few months back. Back then OpenBSD only managed about 300-400 MByte/s with VirtIO, if I remember correctly; I will test that again later.

Edit 1

I reran the tests from yesterday with two FreeBSD 13.1 VMs I use for hosting services in jails. The results were the same; I suppose VNET has something to do with the terrible speeds.
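If VNET really is the culprit, one way to narrow it down would be to benchmark across a minimal VNET jail on an epair and compare that with a plain host-to-host run. Purely a sketch: the jail name, path and addresses are made up, and it assumes a populated jail root with iperf3 installed.

    # On the host: create an epair and keep the "a" end on the host side
    ifconfig epair0 create
    ifconfig epair0a inet 10.10.0.1/24 up

    # /etc/jail.conf: throwaway VNET jail that gets the "b" end
    bench {
        path = "/jails/bench";
        host.hostname = "bench";
        vnet;
        vnet.interface = "epair0b";
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
        mount.devfs;
    }

    # Start it, give the jail side an address, then benchmark across the epair
    jail -c bench
    jexec bench ifconfig epair0b inet 10.10.0.2/24 up
    jexec bench iperf3 -s -D
    iperf3 -c 10.10.0.2 -P4

If that path is much slower than a straight localhost run, the epair/VNET plumbing is where the time goes.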

Edit 2

Just for shits and giggles, I ran iperf3 on localhost; this is what I got:

Debian 11 localhost:

  • single stream: 41.2 Gbps
  • 4 parallel streams: 36.3 Gbps

FreeBSD 13.1 localhost:

  • single stream: 50.1 Gbps
  • 4 parallel streams: 41.1 Gbps

Still feels kinda weird, since in my experience Linux is usually faster in such benchmarks…

Edit 3

Just restored the two OpenBSD VMs I used a while back:

Both 1 vCPU, 1 GiB RAM

OpenBSD 7.2 - bridge - OpenBSD 7.2:

  • single stream: 3.20 Gbps
  • 4 parallel streams: 3.34 Gbps

OpenBSD 7.2 localhost:

  • single stream: 12.9 Gbps
  • 4 parallel streams: 12.8 Gbps

Interestingly, OpenBSD 7.2 seems to be the only OS where parallel streams improve the performance. But as previously stated, the VirtIO performance isn’t great at all.

Edit 4

Corrected some spelling mistakes

Edit 5

Installed OPNsense 22.7.10 to test whether the VirtIO driver in FreeBSD 13.1 really is faster than the one in FreeBSD 12.3 (pfSense 2.6); here are my results:

OPNsense: 4 vCPU, 1024 MiB RAM

Debian 11 - OPNsense - Debian 11:

  • single stream: 3.78 Gbps
  • 4 parallel streams: 3.82 Gbps

FreeBSD 13.1 - OPNsense - FreeBSD 13.1:

  • single stream: 3.17 Gbps
  • 4 parallel streams: 3.13 Gbps

I ran the same test on pfSense 2.6.0, this time with 4 vCPUs as well:

Debian 11 - pfSense - Debian 11:

  • single stream: 3.74 Gbps
  • 4 parallel streams: 3.71 Gbps

FreeBSD 13.1 - pfSense - FreeBSD 13.1:

  • single stream: 2.79 Gbps
  • 4 parallel streams: 2.91 Gbps

The difference isn’t really that big, but compared to what OpenWRT does, the pf-based firewalls don’t come anywhere close.
For me this isn’t of any concern, since I won’t need more than 1 Gbps, but if one wants to play with interfaces faster than 2.5 Gbps, Linux firewalls seem to be the way to go on Proxmox VE.
