Hi, all. I’m curious whether this CPU will be able to route 40Gb/s of traffic. The CPU is a 16-core Intel Xeon Gold 6130 at 2.1GHz. Is that a good enough CPU for that amount of bandwidth if I pair it with an Intel XL710 40G NIC?
According to this paper:
https://www.google.com/url?sa=t&source=web&rct=j&url=https://lfnetworking.org/wp-content/uploads/sites/7/2022/06/benchmarking_sw_data_planes_skx_bdx_mar07_2019.pdf&ved=2ahUKEwigno3k9vL9AhXOwKQKHVmcAqIQFnoECBAQAQ&usg=AOvVaw3L6vxCrbyxbgGZ4YnEqeK_
The 24-core version of that chip is rated at 15Mpps, so, with the usual caveat about Gb/s versus packet size, I’d say no, not even for routing alone…
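To put numbers on that caveat, here’s a quick back-of-the-envelope sketch. The 15Mpps figure is the paper’s; the frame sizes and the 20-byte preamble/inter-frame-gap overhead are my illustrative assumptions:

```python
# Packets-per-second needed to fill 40 Gb/s at various frame sizes,
# compared against the paper's ~15 Mpps rating for the 24-core part.
LINE_RATE = 40e9   # bits per second
CPU_RATING = 15e6  # packets per second (from the benchmarking paper)

for frame_bytes in (64, 512, 1500):
    wire_bits = (frame_bytes + 20) * 8  # +20 B for preamble + inter-frame gap
    pps_needed = LINE_RATE / wire_bits
    verdict = "over" if pps_needed > CPU_RATING else "under"
    print(f"{frame_bytes:>5} B frames: {pps_needed / 1e6:6.2f} Mpps needed "
          f"({verdict} the 15 Mpps rating)")
```

So at minimum-size packets the CPU is nowhere close, while at full-MTU frames 40Gb/s only needs a few Mpps. That’s why the packet-size caveat matters so much.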
The real question is how you plan on handling that 40Gb/s of traffic. pfSense can’t handle that; you’d need pfSense+ or TNSR, and that’s $$$ ISP-level gear.
Dialing it back to 10Gb gives you a lot of cheaper options unless you’re looking at a commercial-type setup. Your post was unclear; it all depends on your budget. An EPYC CPU with the fastest RAM you can get, plus NVMe drives, will handle that load, but to what end? We need more detail on the use case.
Can pfSense handle 10Gb?
Look at:
and also:
Never done it myself, but there are docs out there that seem to say it is possible with the right hardware.
15Mpps (15,000kpps) at 1500 bytes per packet would be 22.5GB/s, i.e. 180Gb/s (I think).
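For what it’s worth, running those numbers (assuming 1500-byte, i.e. standard-MTU, packets), the 22.5 figure only works out in bytes per second, not bits:

```python
# Sanity-checking the 15 Mpps figure at full-MTU packets.
pps = 15e6
frame_bytes = 1500

print(f"{pps * frame_bytes / 1e9:.1f} GB/s")      # 22.5 GB/s (bytes)
print(f"{pps * frame_bytes * 8 / 1e9:.1f} Gb/s")  # 180.0 Gb/s (bits)

# Frame size at which 15 Mpps works out to exactly 40 Gb/s:
print(f"{40e9 / (pps * 8):.0f} bytes")            # 333 bytes
```

So 15Mpps at 1500-byte packets is well past 40Gb/s; the break-even frame size is around 333 bytes, which is why small-packet workloads are the killer.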
Cheapest way to handle that is probably a MikroTik CCR2216.
After 10Gb/s, shit be expensive yo
That’s why I have a 10Gb network at home. I wanted 25-40Gb, but after looking at the price tags, I scaled back quickly.
pfSense is built on FreeBSD, and while it isn’t exactly the same, it should be somewhat close. Netflix had a data-throughput problem once upon a time, back when they were upgrading their servers to the then-new 100Gb NICs that had just been released. They upgraded their servers, saw hardly any throughput increase, and so worked directly with the FreeBSD team to better optimize the network stack’s performance. They got it to the point where a bone-stock FreeBSD install can push roughly 100Gb of throughput with no tweaks (assuming the hardware is up to it).
Cards alone, 40Gb is pretty affordable: under $100 per card for Mellanox QSFP+ server castoffs. Cables are reasonable too.
What’s expensive is switching capacity: either a very expensive dedicated hardware switch, or spending big on bleeding-edge x86 CPUs to comfortably route 40+ Gb/s of packets.
The cards are barely more expensive than 10GbE cards, though, so it might be worth it to just get the spicy cards and route “as many packets as the CPU can manage”.
But that was network streaming performance, not routing/switching performance, and certainly not packet-filtering performance…
As others have said, routing, and even more so packet filtering, at >10Gbit/s rates is still in the realm of very expensive dedicated hardware.
MikroTik is getting there for the switching, but their only 100Gbit router is $3k USD and can push 200Gbit under optimal conditions, which drops to <40Gbit as soon as you add a routing filter… and it sucks >100W to do that…
As others have pointed out, why would you even want (I understand the homelabber urge) to route/packet-filter at these speeds?
This. Couldn’t justify it in a home lab environment. I’m subject to financial input from Wrongfully Induced Financial Enforcement, or W.I.F.E. for short.
You can often find switches at decent prices on eBay. The Mellanox SX1024 I have can be had for $500 on eBay nowadays. It has plenty of ports for a home, and after boot-up it’s quiet enough, when kept in a closet, that you don’t hear it from more than 2 ft away with the door closed.
What in the world is that tracking link you posted. JEEZ m8. Clean it up a bit.
Dude, chill, it’s a Google search link, and yes they are tracking, but your ISP is tracking, your DNS provider is tracking, and your neighbour tracks sometimes…
I see fan units at that price, but not the switch itself. I do see some similar options in that price range, though, and even some tons-of-ports 10GbE switches with a couple of 40GbE ports. I guess 40GbE in the home is actually relatively affordable now.
The Ruckus/Brocade ICX 66xx range of switches ($200-300 USD, very used) have 40Gb ports and can do 40Gbit line-rate switching and routing; they can even do AES-256 encryption of routed traffic through the stack ports… 100-200W of consumption and no packet filtering.
The OP was asking about using a Xeon processor to do the same, and they never really touched on the NAT/packet-filtering needs…
There’s L3 switching/forwarding, there’s basic encap/decap, there’s basic connection tracking and filtering like ipsets, and there’s Suricata filtering and TLS splicing.
Obviously, the Xeon won’t do that last thing. I’m pretty sure the first would be OK with some basic tuning (e.g. there are custom iptables modules developed for OpenWrt to let 400MHz MIPS chips forward a gigabit through the kernel, plus a mix of eBPF/XDP stuff floating around that can help).
Stuff in the middle: not sure, it’d be interesting to see.
Linux is a different beast relative to pfSense… and then, maybe you only need 40G on 9k/16k/64k frames (i.e. not IMIX, but storage traffic), in which case you need <1Mpps.
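Checking that last claim with jumbo/storage frame sizes (the sizes are illustrative, and wire overhead is ignored since it’s negligible at these frame sizes):

```python
# Packet rates needed to sustain 40 Gb/s at jumbo/storage frame sizes.
# 64 KiB isn't a real Ethernet frame size; it's more like a TSO/GSO
# aggregate, but it bounds the storage case.
for frame_bytes in (9000, 16384, 65536):
    pps = 40e9 / (frame_bytes * 8)
    print(f"{frame_bytes:>5} B frames: {pps / 1e3:.0f} kpps for 40 Gb/s")
```

Even at plain 9k jumbo frames that’s only ~556 kpps, comfortably under 1Mpps, which is a very different workload from line-rate IMIX.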