Building an arm64 OpenBSD router using RockPro64

Awesome, but instead of USB 3 ports, I’d like 4 Ethernet ports, or a single 10G port (one that can deliver at least 5 Gbps; it doesn’t need the full 10 Gbps). But I’ll keep Odroids in mind.

Take a look at the VIM4, which is scheduled to release with an update to the Amlogic A311D. It’s supposed to have a PCIe 3 port that is independent of USB 3. That means there will most likely be an Odroid-N shipping with that chip in the near future.

I may get one. I skipped the VIM3 because you could only use PCIe/NVMe or USB 3 but not both, and there was only one USB 3 port. I liked the Khadas concept in that it had an NPU, something I wish the next Odroid had, because I used my N2 for Home Assistant and would like to do some computer vision stuff with security cameras.

1 Like

Cool board, but I doubt a single lane of PCIe (gen2) will be able to handle a lot of traffic (also, clunky M.2 to PCIe adapters).

Unless I use USB 3 to Ethernet NICs, expansion on most SBCs is pretty much non-existent (aside from camera connectors and GPIO). The RockPro64 managed to bring some serious expansion capability to the SBC space, but the software support is still lacking (at least on the OpenBSD side; on Linux, most things should work).

Dang it, that Khadas VIM4 looks like the perfect replacement for my Pi 4 as my main PC.

Yeah, I wish there were more boards with native PCIe without having to get a HAT or attachment board. My favorite SBCs do come from Odroid though, as their boards seem to be designed with industrial, air-cooled settings in mind. Right now, your best bet is to hope that there will be an RPi 4 compute module carrier board that just focuses on networking.

I wish Rockchip would get their stuff together. The RK chips are on everything and the Linux support is great, but the BSD support could be better. Have you looked into whether the FreeBSD Linux compatibility layer supports the drivers for the cards you are looking at?

1 Like

I think the FreeBSD support might be better, but I want OpenBSD; otherwise, I’d just run Linux on it and call it a day.

I prefer passively cooled boards, which is why I stuck with RPis until now (also, lots of accessories on the market). I think there are plans for CM4 routers, but I wonder how fast OpenBSD would come to those, considering the RockPro64 is a few years old now (almost 3 years).

1 Like


I did it!

Okay, it was sort of dumb luck and I’m not exactly sure what fixed everything, but I’m now able to communicate with VLANs on my OpenBSD router! To get this to work, I did a few things:

  1. I recompiled and flashed the latest U-Boot (see the build sketch after this list). I know the OpenBSD miniroot and install images come with U-Boot, but for whatever reason that never worked for me, so I just compiled and flashed it myself.
  2. I updated to OpenBSD 7.0.
  3. I swapped the LAN and WAN ports so now the LAN is using the built-in dwge(4) port and the WAN is using the rge(4) port.
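
For reference, the mainline U-Boot build for the RockPro64 goes roughly like this. It’s a sketch, not my exact history; the cross-compiler prefix, checkout paths, and target disk (/dev/sdX) are placeholders for your own setup:

$ # RK3399 U-Boot wants the ATF bl31 blob built first
$ make -C trusted-firmware-a CROSS_COMPILE=aarch64-linux-gnu- PLAT=rk3399 bl31
$ export BL31=$PWD/trusted-firmware-a/build/rk3399/release/bl31/bl31.elf
$ make -C u-boot CROSS_COMPILE=aarch64-linux-gnu- rockpro64-rk3399_defconfig
$ make -C u-boot CROSS_COMPILE=aarch64-linux-gnu- -j$(nproc)
$ # flash to the SD card at the standard Rockchip offsets (as root)
$ dd if=u-boot/idbloader.img of=/dev/sdX seek=64 conv=notrunc
$ dd if=u-boot/u-boot.itb of=/dev/sdX seek=16384 conv=notrunc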

These changes happened over the course of about a month and I found the VLANs working yesterday by chance while reviewing the OpenBSD source code.

Yay!
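
For reference, the VLAN setup itself is only a few lines on OpenBSD. A minimal sketch, assuming VLAN ID 10 on the dwge0 LAN port (the ID and addresses are placeholders):

$ doas ifconfig vlan10 create
$ doas ifconfig vlan10 parent dwge0 vnetid 10
$ doas ifconfig vlan10 inet 192.168.10.1/24 up

The same thing can be made persistent in /etc/hostname.vlan10:

parent dwge0 vnetid 10
inet 192.168.10.1 255.255.255.0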

A note of caution to anyone trying this themselves, since the built-in port doesn’t support some features:

$ ifconfig dwge0 hwfeatures
dwge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        hwfeatures=10<VLAN_MTU> hardmtu 1500
        lladdr de:ad:be:ef
        index 2 priority 0 llprio 3
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet X.X.X.X netmask 0xXXXXXXXX broadcast X.X.X.X

No VLAN hardware tagging, no jumbo frames, no wake-on-LAN, and no hardware checksum offload. Keep this in mind when designing your homelab.

I’ll return with more once I’ve created something worth discussing.

2 Likes

Awesome! I don’t mind the built-in port not having VLAN tagging, as it might be used for WAN or not used at all, who knows, but I am hoping to use VLAN on the PCI-E expansion card I will be using.

Now that I think about it, if I go with a 4-port card, I might not need to use VLANs, but it would be a nice feature if the router supported that (I’m still hoping to get a multi-gigabit NIC, rather than 4x 1 Gbps NICs).

Edit: typo

1 Like

Yeah if you can get a mature Intel NIC attached to it, I would only use that.

Also, my 2c, run your WAN through your switch on a dedicated VLAN so you can set up a virtualized CARP peer later.
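
To sketch what I mean (the interface name, addresses, vhid, and password below are all placeholders), the primary would carry something like this in /etc/hostname.carp0, and the peer would use the same vhid and pass but a higher advskew so it stays backup:

inet 192.168.1.1 255.255.255.0 192.168.1.255 vhid 1 carpdev em0 pass mysecret advskew 0

Clients then use 192.168.1.1 as their gateway and follow whichever router currently holds the virtual IP.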

2 Likes

I never combine the LAN and WAN ports anyway. I always have a dedicated WAN port.

I like the idea of having a separate port for CARP, either through the switch or, preferably if I can help it, a direct connection between the 2 routers. That said, I’ve never used CARP; I looked into it before while trying to design a good WAN setup, but I didn’t have the chance to apply anything I looked into.

1 Like

I think redundancy becomes a bigger issue when you don’t have a configure/commit/save paradigm like you would on a conventional router or something like VyOS. It’s easier to fat-finger a config and lock yourself out. One of the few inherent drawbacks to using OpenBSD as a gateway.
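
A partial safety net is to syntax-check the new ruleset and schedule a revert before loading it. A rough sketch, assuming the new rules live in /etc/pf.conf.new:

$ doas pfctl -nf /etc/pf.conf.new
$ doas cp /etc/pf.conf /etc/pf.conf.bak
$ echo 'pfctl -f /etc/pf.conf.bak' | doas at now + 5 minutes
$ doas pfctl -f /etc/pf.conf.new

If you can still get in, cancel the at(1) job; if you fat-fingered it, the old ruleset comes back on its own in 5 minutes.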

1 Like

Before going out and buying the RockPro64 and an Intel NIC, note that I have only gotten this particular rge(4) NIC to work. I have tried the following Intel NICs, all resulting in a kernel panic on boot:

Intel 82576: https://www.amazon.com/gp/product/B01LXTF48X
Intel I340-T4: https://www.amazon.com/gp/product/B003A7LKOU
Intel I350-T2: https://www.amazon.com/gp/product/B01N1XX11W

I also tried this 4-port Realtek card and the driver wouldn’t properly load despite it being visible in pcidump: https://www.amazon.com/gp/product/B01HH6WETO

My current hypothesis is the RockPro64 PCI-E slot can’t provide enough power to run more than one port, but that hypothesis can be easily debunked if someone else is able to get a card working with two or more ports.

2 Likes

There was something about the PCIe controller that OpenBSD got right on the first try but that older Linux kernels had issues with… in addition to power. (I was reading about someone trying 10 Gbps NICs.)

1 Like

If it follows the normal PCI-E spec, it should provide enough juice. I believe people have used way hungrier SAS cards on it and it was fine.

10 Gbps cards tend to require a lot more, so do some really high-end HBA and RAID cards.

For now, I am looking for an ARM SBC that has WiFi 5 (802.11ac) or WiFi 6 (802.11ax) to use as WAN, plus an Ethernet port for LAN. The only decent one is the RPi 4, which is either out of stock or overpriced. But I’m still holding on and intend to build an OpenBSD router. I’m still hoping for a RockPro64, but if a cheap and decent RPi CM4 board shows up with a few 2.5 Gbps Ethernet ports that support VLANs, I may change my mind.

I still want an OpenBSD ARM router, but it will take a while before I’ll be able to really start hoarding computers and make a home lab.

Does Belkin RT3200 / Linksys E8450 count?

It’s not fast or featureful enough to be used as a general-purpose SBC, but it does a decent job routing, and it has a USB-A port you can use for extra storage. Sometimes it goes for $100.

I was hoping for a more “traditional” SBC like the Pi 4, RockPro64, or Odroid stuff, because I was hoping to run either a full Linux distro on it (not OpenWrt) or maybe OpenBSD if there’s support for it.

I’m currently running Alpine on a Pi 3 as my main router, and unfortunately I can only push 30-40 Mbps on the (wireless) WAN while I have a 250-300 Mbps Internet connection. So I’m not getting the full benefit of what I’m paying for, and I’m thinking of changing that.

OpenWrt is mostly just a kernel with patches to make network stuff work well, plus a minimal musl-based userland, so you don’t need a full-fat systemd+glibc install to configure said networking from a web UI. That’s why it squeezes onto boxes with 8 MB of flash that some routers have.

Folks routinely run full-fat OpenSSH/OpenSSL and Docker containers on OpenWrt, … and Debian chroots.


So… the Pi 3B+ has a single USB 2.0 hub with a gigabit Ethernet port hanging off of it. If you used another USB dongle for a second RJ45 WAN port, you should be able to pull off 300-400 Mbps in aggregate (480 Mbps being the USB 2.0 limit, minus some overhead for USB 2.0 framing and such).

How are you connecting your phone to the Pi?

And why are you using wifi for wan?


Speaking of dongles, the $15 TP-Link UE300 is a USB 3.0 dongle with an RTL8152 chip that just happens to have OK drivers. Folks routinely use that one in particular with the Pi 4, where it lets one do gigabit symmetrically and simultaneously with NAT and traffic shaping at 15% CPU (granted, those are A72 cores, but some other drivers, e.g. for ASIX chips, are a lot less efficient and use a lot more CPU).


@ThatGuyB Which country are you in? I have an interest in SBCs; I might be able to look around online and dig up some vendors with interesting SBCs in stock.

I’m personally a fan of the Odroid N2+; they’re severe overkill for gigabit (multi-hundred-megabit) router-only use, but you don’t have to use them only for that :slight_smile: You get eMMC, gigabit Ethernet, and an independent set of USB 3.0 ports.

1 Like

The reason I want a somewhat fatter distro is to have easier access to stuff. I know you can run a lot of stuff on OpenWrt, but I have no idea how easy that would be. I’ve used OpenWrt before on 2 old TP-Link routers (I had issues with PPPoE disconnecting or taking forever to connect though, likely because of my ISP, so I scrapped that and used what they provided). Using already-implemented stuff, like WireGuard, was easy, but I never figured out how I would run things like Docker, or rather node_exporter (the Prometheus agent), if I wanted to (at the time I didn’t need to, so I never looked too deep into it).

It’s just a home WiFi; the phone uses 802.11ac, and the Pi 3 obviously only uses 802.11n. I used iperf3 for the tests (you can go to the original comment that I quoted to check the details). The phone was running it inside Termux.
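
For anyone wanting to reproduce the numbers, the test itself is minimal (the Pi’s address is a placeholder):

$ iperf3 -s                      # on the Pi (server side)
$ iperf3 -c 192.168.1.10 -t 30   # on the phone in Termux; add -R for the reverse direction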

Because I have devices without WiFi (or, in my case, devices that have the WiFi signal blocked by an entirely aluminum case), so I can only use Ethernet, and I cannot easily (and neatly) drag an Ethernet cable from the ISP router to the room where I have my equipment. I’m also using the Pi 3 as a VPN gateway, so I’m grabbing Internet from WiFi, creating a tunnel, and redirecting all traffic coming in from the Ethernet port through the VPN tunnel. And I could use such a setup on the go with public WiFi or a hotel WiFi if I wanted to, which is pretty neat (although I probably won’t have that opportunity for a long time).
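
The gateway part of that boils down to forwarding plus NAT. A rough sketch of the Alpine side, assuming eth0 is the LAN and wg0 is the WireGuard tunnel (interface names are placeholders):

$ sysctl -w net.ipv4.ip_forward=1
$ iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
$ iptables -A FORWARD -i eth0 -o wg0 -j ACCEPT
$ iptables -A FORWARD -i wg0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT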

USA, I’m currently in Florida.

That would be interesting, but not just right now. I do intend to make an ARM SBC cluster once I move to my own place, so we could collaborate on projects if you want.

I really like the RockPro64 for its PCIe gen2 x4 slot; it makes adding stuff really easy. It only has 2x A72 cores (and 4x A53 cores), but that is more than plenty for a lot of setups. I am interested in the Odroid HC4 for a backup solution and maybe an always-on NAS (obviously 2 of them, one NAS and one backup box).

I am reviving this topic. I need @risk’s, @Mastic_Warrior’s and @oO.o’s opinions.

Ok, I am planning my home lab now. I still have really weird ideas, so who knows if I will come to a conclusion after this brainstorming. But I just need someone to talk to; not a lot of people in my circle have experience with SBCs and networking gear.

The overall plan is to start small and expand. I want to make a redundant network using SBCs. My ideas have a lot of dependencies attached to them: depending on what router I start with, I need to decide what switch I buy, so basically nothing is planned yet.

My router has to run OpenBSD, so at least this limits my choices, which in this case is a good thing, because there are a lot of SBCs to choose from.

The only boards that are officially supported are the RockPro64, Odroid N2/N2+, Rock Pi N10, and NanoPC-T4. It could potentially run on the NanoPi R4S and Rock Pi 4 Model B, which also use the RK3399, but those may need a patch from OpenBSD because they may have different device trees, so it’s a bit risky.

Out of all of them, I have a few janky options; not that any of them wouldn’t be jank:

1) 4x 1 Gbps PCI-E Intel card | RockPro64 | switch with 1 Gbps ports

2) 2x 2.5 Gbps PCI-E Intel card | RockPro64 | switch with minimum 4x 2.5 Gbps ports

3) Odroid H2 Net Card = 4x 2.5 Gbps M.2 card with Realtek RTL8125 | RockPro64 + PCI-E to M.2 adapter / Rock Pi N10 / NanoPC-T4 | switch with minimum 8x 2.5 Gbps ports

The more I think about it, the more lost I get. Option 3) is kinda risky: for one, it’s Realtek, and for another, the Odroid H2 Net Card might not be compatible with other SBCs without some electrical tape, if you know what I mean. Besides, I want to make use of every port on the board if I can.

To make matters even worse, I won’t necessarily get Internet via an Ethernet cable; I may need to connect to WiFi and share it with all the other devices, which is driving me nuts. This could be solved by either adding a USB WiFi card, or an M.2 WiFi card if I get a board like the Rock Pi N10. Or, worst-case scenario, using a Pi on a dedicated VLAN and bridging WiFi to Ethernet so I don’t triple-NAT.

Hear me out: I want the final build to have 2 routers, each connected to 2 managed, non-stackable switches, with the routers connected straight to each other for CARP. So I need at least 3 ports on each. Given that I will have a bit of inter-VLAN traffic, I’d like 1 port to be gigabit and 2 ports to be 2.5 Gbps.

Now comes the switch. I need 2x 2.5 Gbps ports (on each switch) to connect each router, about 3x Odroid N2+ (on each switch), 1x Odroid HC4 (on each switch), 1x RPi 2, 1x RPi 3, 3x APs, and potentially 2x RPi 4 and a PC. So that would be around 18x 1 Gbps ports and 2x 2.5 Gbps ports. Should I just buy 2 switches with 12-16x 2.5 Gbps ports and call it a day? I could technically split it into 2x 12-port switches, so that gives me some options.


Ok, let me summarize up to this point:

Final build: 2 routers, each with 2x 2.5 Gbps ports, 1 port per switch, and connected to each other on the 1 Gbps integrated port. On each switch, 4x SBCs with 1 Gbps ports.

So the minimum I need for my redundant cluster is 7 Ethernet ports per switch (1 of which is the trunk between the switches). But I want room to expand in the future, so I need an additional 4 ports on each switch.

The final 2 switches need at least 11 ports each, 3 (or 4) of which need to be 2.5 Gbps ports. Given my ultimate goal, I guess I should start looking for 12 to 16 port switches. Any recommendations? I need them to be accessible via SSH and automatable (I don’t mind automating through shell and running their own OS commands from an SBC), as in: if a port goes down, switch VLANs to another port, and such. I don’t care about web GUIs or brand, but I absolutely do not want to register an account to use them, or worse, need the switches to connect to someone else’s computers to function (cloud activation).
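
To illustrate the kind of automation I have in mind, here’s a hypothetical check an SBC could run on a timer; the switch address, credentials, and CLI commands are made up and would differ per vendor:

$ ping -c 3 -w 5 10.0.0.1 > /dev/null 2>&1 || \
      ssh admin@switch1 'interface ethernet 1/2; switchport access vlan 20'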

Thank you for listening, guys; putting my thoughts down on paper in a comment cleared them up a little. It’s not so bad after all. It’s actually really doable now that I think clearly.

Next, the infrastructure itself. On the Odroids I want to run Void or Alpine. From what I read online, running Linux kernel 5.9 or higher should not require much tinkering to get them working, because Amlogic is cool like that. I just need to use U-Boot, and probably boot from USB drives on the HC4 and netboot the N2+ from the HC4 over NFS or iSCSI. If that’s not possible, do you think I could extract the kernel from the official images and slap it into a Void or Alpine image? As a last resort, I’ll just use the default images and run my LXD containers like that.

The project will start with 1 router, 1 switch, and 5 SBCs (I own 3 already) and expand from there, so I’ll double the router and switch.


What are your thoughts? These are my requirements; any opinions on the SBCs chosen? Any specific network cards you think may fit my needs? I would like to try MikroTik or fs.com, but I’m open to anything, as long as they don’t have loud-ass fans and I can automate them via SSH commands (non-interactive shell). Any good USB 3.0 WiFi 5 (ac) cards that work on Linux out of the box?

Any criticism of the speeds chosen? Do you think 1 Gbps pipes to everything would be fine, given that each of the 6 N2+ boards will have its root mounted from the HC4s, and inside them, LXD containers with their disks mounted separately from the HC4s too? I may have around 20 or 30 LXD containers and maybe some k8s running inside some of them.

Note: the host OSes will be in a management VLAN, the storage connections on a storage VLAN, and the containers will have their own VLAN used to serve services.

1 Like

Why? Not questioning that that’s the case, just want more details.

1 Like