I’d keep the router and VPN as a separate box. It doesn’t need to be expensive: I’m using a RockPro64 as my main router, and before that I was using a Raspberry Pi 3. The only reason I upgraded was to get gigabit inter-VLAN routing. On boards like these you won’t be running OPNsense (at best OpenWrt or FreeBSD, at worst some bare Linux distro).
However, if you really want to go with the forbidden-router variant, you can get the ODROID H4 Ultra with the Type 3 case (the one with room for 4x SATA SSDs and an M.2 4x-gigabit NIC). Run Proxmox on it and pass the M.2 NIC through to an OPNsense VM.
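A rough sketch of what that looks like on the Proxmox side, assuming an Intel CPU, GRUB, and a q35 VM - the PCI address (0000:01:00.0) and VM ID (100) are placeholders, so check lspci and substitute your own:

In /etc/default/grub (then run update-grub and reboot):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

In /etc/modules, so the vfio modules load at boot:
vfio
vfio_iommu_type1
vfio_pci

Then hand the NIC to the OPNsense VM:
# qm set 100 --hostpci0 0000:01:00.0,pcie=1
(pcie=1 needs the q35 machine type; drop it if you stick with the default i440fx.)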
Since people might be wondering, here’s the lspci output for the H4 base model.
Note: the H4 Ultra should give similar output, except the SATA controller may land in a shared IOMMU group, which would rule out passing the whole controller through; you could still pass individual disks if needed, but I generally advise against that - just make a zpool on Proxmox.
# doas lspci
00:00.0 Host bridge: Intel Corporation Device 4678
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:08.0 System peripheral: Intel Corporation Device 467e
00:0a.0 Signal processing controller: Intel Corporation Platform Monitoring Technology (rev 01)
00:14.0 USB controller: Intel Corporation Alder Lake-N PCH USB 3.2 xHCI Host Controller
00:14.2 RAM memory: Intel Corporation Alder Lake-N PCH Shared SRAM
00:15.0 Serial bus controller: Intel Corporation Device 54e8
00:15.1 Serial bus controller: Intel Corporation Device 54e9
00:16.0 Communication controller: Intel Corporation Alder Lake-N PCH HECI Controller
00:1a.0 SD Host controller: Intel Corporation Device 54c4
00:1c.0 PCI bridge: Intel Corporation Device 54ba
00:1c.3 PCI bridge: Intel Corporation Device 54bb
00:1c.6 PCI bridge: Intel Corporation Device 54be
00:1d.0 PCI bridge: Intel Corporation Alder Lake-N PCI Express Root Port #9
00:1e.0 Communication controller: Intel Corporation Alder Lake-N Serial IO UART Host Controller
00:1f.0 ISA bridge: Intel Corporation Alder Lake-N PCH eSPI Controller
00:1f.3 Audio device: Intel Corporation Alder Lake-N PCH High Definition Audio Controller
00:1f.4 SMBus: Intel Corporation Alder Lake-N SMBus
00:1f.5 Serial bus controller: Intel Corporation Alder Lake-N SPI (flash) Controller
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
04:00.0 Non-Volatile memory controller: Micron Technology Inc 2550 NVMe SSD (DRAM-less) (rev 01)
With the M.2 slot in its own IOMMU group, you should be able to pass it through. Same for the Ethernet controller (and the H4+ and Ultra come with 2 NICs, so you should technically be able to pass both to OPNsense and give Proxmox a virtual NIC instead, but you’d need a USB NIC temporarily to reach Proxmox and configure it, and the router VM would have to power on automatically so you can access it - it’s jank and I hate it, but it’s doable).
# doas lspci -nnk
00:00.0 Host bridge [0600]: Intel Corporation Device [8086:4678]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
00:02.0 VGA compatible controller [0300]: Intel Corporation Alder Lake-N [UHD Graphics] [8086:46d1]
    DeviceName: Onboard - Video
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: i915
    Kernel modules: i915
00:08.0 System peripheral [0880]: Intel Corporation Device [8086:467e]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
00:0a.0 Signal processing controller [1180]: Intel Corporation Platform Monitoring Technology [8086:467d] (rev 01)
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: intel_vsec
    Kernel modules: intel_vsec
00:14.0 USB controller [0c03]: Intel Corporation Alder Lake-N PCH USB 3.2 xHCI Host Controller [8086:54ed]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: xhci_hcd
    Kernel modules: xhci_pci
00:14.2 RAM memory [0500]: Intel Corporation Alder Lake-N PCH Shared SRAM [8086:54ef]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
00:15.0 Serial bus controller [0c80]: Intel Corporation Device [8086:54e8]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: intel-lpss
    Kernel modules: intel_lpss_pci
00:15.1 Serial bus controller [0c80]: Intel Corporation Device [8086:54e9]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: intel-lpss
    Kernel modules: intel_lpss_pci
00:16.0 Communication controller [0780]: Intel Corporation Alder Lake-N PCH HECI Controller [8086:54e0]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: mei_me
    Kernel modules: mei_me
00:1a.0 SD Host controller [0805]: Intel Corporation Device [8086:54c4]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: sdhci-pci
    Kernel modules: sdhci_pci
00:1c.0 PCI bridge [0604]: Intel Corporation Device [8086:54ba]
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: pcieport
00:1c.3 PCI bridge [0604]: Intel Corporation Device [8086:54bb]
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: pcieport
00:1c.6 PCI bridge [0604]: Intel Corporation Device [8086:54be]
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: pcieport
00:1d.0 PCI bridge [0604]: Intel Corporation Alder Lake-N PCI Express Root Port #9 [8086:54b0]
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: pcieport
00:1e.0 Communication controller [0780]: Intel Corporation Alder Lake-N Serial IO UART Host Controller [8086:54a8]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: intel-lpss
    Kernel modules: intel_lpss_pci
00:1f.0 ISA bridge [0601]: Intel Corporation Alder Lake-N PCH eSPI Controller [8086:5481]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
00:1f.3 Audio device [0403]: Intel Corporation Alder Lake-N PCH High Definition Audio Controller [8086:54c8]
    DeviceName: Onboard - Sound
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: snd_hda_intel
    Kernel modules: snd_hda_intel, snd_sof_pci_intel_tgl
00:1f.4 SMBus [0c05]: Intel Corporation Alder Lake-N SMBus [8086:54a3]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: i801_smbus
    Kernel modules: i2c_i801
00:1f.5 Serial bus controller [0c80]: Intel Corporation Alder Lake-N SPI (flash) Controller [8086:54a4]
    DeviceName: Onboard - Other
    Subsystem: Intel Corporation Device [8086:7270]
    Kernel driver in use: intel-spi
    Kernel modules: spi_intel_pci
02:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I226-V [8086:125c] (rev 04)
    Subsystem: Intel Corporation Device [8086:0000]
    Kernel driver in use: igc
    Kernel modules: igc
04:00.0 Non-Volatile memory controller [0108]: Micron Technology Inc 2550 NVMe SSD (DRAM-less) [1344:5416] (rev 01)
    Subsystem: Micron Technology Inc Device [1344:1100]
    Kernel driver in use: nvme
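If you want to double-check the IOMMU grouping on your own unit before committing to passthrough, the usual sysfs walk works (nothing H4-specific about it):

# for d in /sys/kernel/iommu_groups/*/devices/*; do g=${d%/devices/*}; echo "IOMMU group ${g##*/}: $(lspci -nns ${d##*/})"; done

You want the I226-V (and the root port feeding the M.2 slot) to show up in their own groups. And for the both-NICs-into-OPNsense setup I mentioned above, making the router VM start at boot is just "qm set 100 --onboot 1" (VM ID is a placeholder again).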
I doubt you can do any of this with a Synology and its completely proprietary OS.
I’d say that once you get the hang of Proxmox, it’s easy to spin up any number of VMs or LXC containers - Home Assistant, Pi-hole, and whatever else. Proxmox itself can also serve as the NAS if you make it an NFS and SMB server.
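The NAS part really is just a handful of commands on the host - a sketch only, with made-up pool/dataset names and disk paths (use /dev/disk/by-id in practice):

# zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
# zfs create -o compression=lz4 tank/media
# apt install nfs-kernel-server samba
# echo '/tank/media 192.168.1.0/24(rw,no_subtree_check)' >> /etc/exports
# exportfs -ra

Then add a [media] share pointing at /tank/media in /etc/samba/smb.conf, create a Samba user with smbpasswd -a, and restart smbd.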
The only question left for the forbidden-router variant is how well Proxmox runs off an eMMC drive (FreeBSD seems to be just fine on my H3+’s eMMC - for now). But if you have a separate router (say, a ThinkPenguin TPE-1400 running libreCMC), then you can dedicate the whole H4 Ultra to virtualization and NAS duty (the NAS part uses very few resources).
I don’t remember your exact requirements, other than wanting a portable setup. I’ll maybe take a picture of my setup tomorrow to show you what I’ve got and how small it can get. For just what you listed (Home Assistant, Pi-hole, and Plex / Jellyfin), an H4 Ultra is more than plenty, with the caveat that you’d want to run Jellyfin or Plex in an LXC container that uses the same Intel graphics driver as the Proxmox host, and give the container access to the GPU so hardware transcoding works - I believe there are tutorials online.
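The GPU bit mostly boils down to bind-mounting /dev/dri into the container. For a privileged LXC container it’s roughly these two lines in /etc/pve/lxc/<your container ID>.conf (for an unprivileged one you also have to sort out the uid/gid mapping for the render group):

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

Inside the container, install the Intel VAAPI driver (intel-media-va-driver-non-free on Debian, or the free variant), add the jellyfin user to the render group, and check with vainfo that the iGPU shows up before turning on hardware transcoding in Jellyfin.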
I’ll admit, Synology makes Plex setups easier; to get Jellyfin on one you have to add random third-party repos and wade through a bunch of other stuff to get it working. And Plex makes you pay for GPU-accelerated transcoding, IIRC. So if you want to save a buck and don’t mind getting your hands dirty, Jellyfin on Proxmox is a good option.
Alternatively, you can install Jellyfin straight onto the Proxmox host (no containers or VMs); it should work just fine and be less of a hassle.
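Since Proxmox is just Debian underneath, it installs the same way as on any Debian box. Jellyfin ships a Debian/Ubuntu install script (at least at the time of writing - check jellyfin.org for the current instructions):

# curl -fsSL https://repo.jellyfin.org/install-debuntu.sh -o install-debuntu.sh
# less install-debuntu.sh    (read it first before running random scripts as root)
# bash install-debuntu.sh

You’d still want the jellyfin user in the render group for transcoding, same as in the container case.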