Personal VM Server

From what I’ve done on my end, the FirePro S9300 x2 behaves well in a macOS guest (at least Mojave) on vSphere. From what I’ve watched online, it should also behave when its two GPUs are split between multiple KVM guests running Windows 11 (likely Windows 10 as well). I’m fairly sure the card runs just fine in a Linux guest too. In all of the tests/scenarios I’ve mentioned, the FirePro was flashed to present itself as a Radeon R9 Fury or R9 Nano (the consumer variants) - though the Radeon Pro Duo also existed. I’m thinking that BlissOS could just be an outlier in this case, and a rabbit hole too deep for me to go down for this project.
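
For anyone curious, splitting the card under KVM amounts to plain PCI passthrough of each Fiji GPU to a different guest. A minimal libvirt sketch - the PCI addresses below are made up for illustration; check `lspci` for the card's two GPU functions behind its PLX bridge:

```xml
<!-- Guest A: first GPU of the S9300 x2 (example address) -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>

<!-- Guest B: second GPU (example address; a different bus behind the same PLX bridge) -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```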

As such, unless a software update for BlissOS fixes this oddity before 2023, I’m kicking it from the project for the next year or two. I’ll be focusing on LibreNMS as the last major task for this phase of the server project, until I move to the Gen9 server. Oddly enough, once I move to the Gen9, I may want more FirePro S9300 x2s. While it’s an old card, it fills a real gap - multiple GPUs in a single PCIe slot, for a (relatively) affordable price. Its space and cost efficiency are tough to ignore while SR-IOV and GRID are either too expensive for me to implement or locked behind secret handshakes and the need to be a cloud provider.

I’ve installed LibreNMS, though I haven’t figured out device auto-detection yet. I also installed Cronicle and used it to resolve a scheduled-task issue with Nextcloud. Now I’m working on enabling Nextcloud’s notify_push and learning more about LibreNMS. BlissOS is gone from the project, and I’m closing in on the last major tasks of this phase of the server project. The next phase requires the Gen9, and I can’t hop onto that just yet. I’m also wanting to get a 2nd FirePro S9300 X2 and a Titan RTX…
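
For anyone hitting the same Nextcloud background-jobs problem: the fix boils down to running Nextcloud’s cron.php every five minutes, which Cronicle can do as a shell event in place of the system crontab. A sketch, assuming Nextcloud lives at /var/www/nextcloud and runs as www-data (adjust both for your install):

```
# Traditional crontab entry; a Cronicle shell event just runs the same
# command on the same */5 schedule:
*/5 * * * * sudo -u www-data php -f /var/www/nextcloud/cron.php
```

With that in place, Nextcloud’s admin Basic settings page should be switched to the “Cron” background-jobs mode.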

Since I still haven’t figured out LibreNMS’s device/host auto-detection, I’ve gone ahead and added many of my commonly accessed app/service IPv4 addresses by hand. Those include:

  • OOB management appliances
  • multi-node/cluster management instances
  • individual virtual machines
  • hypervisor hosts
  • default gateway for network bridge
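
Adding hosts by hand can also be scripted against LibreNMS’s REST API (POST to /api/v0/devices). A sketch - the URL, token, and addresses below are placeholders, and the actual submit is left commented out:

```python
import json
import urllib.request

LIBRENMS_URL = "https://librenms.example.lan"  # placeholder URL
API_TOKEN = "REPLACE_ME"                       # created under Settings -> API Access

def build_add_device_request(hostname, snmp_version="v2c", community="public"):
    """Build the POST request LibreNMS's API expects when adding a device."""
    body = json.dumps({
        "hostname": hostname,
        "version": snmp_version,
        "community": community,
    }).encode()
    return urllib.request.Request(
        f"{LIBRENMS_URL}/api/v0/devices",
        data=body,
        headers={"X-Auth-Token": API_TOKEN, "Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    for ip in ["10.0.0.2", "10.0.0.10", "10.0.0.11"]:  # example addresses
        req = build_add_device_request(ip)
        # urllib.request.urlopen(req)  # uncomment to actually submit
        print(req.full_url, json.loads(req.data)["hostname"])
```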

I’m also running a simple/quick nmap scan to look for any obvious hosts that I missed. I’ve avoided adding:

  • Docker containers
  • switches that comprise the network bridge

for the time being. All of my Docker containers live on one VM; if I ever want to analyse traffic for an individual container, I can still add its hostname later. As for the network switches, all traffic going through them originates from either the default gateway or the DL580 itself (the hypervisor host or one of the individual VMs). If the time ever comes, I can add the switches later as well.
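
The nmap pass can be approximated in a few lines for anyone who wants to script it - a crude TCP-connect sweep of one port per address, nowhere near nmap’s real host discovery (the subnet and port below are examples):

```python
import ipaddress
import socket

def hosts_with_open_port(network, port, timeout=0.2):
    """Crude host discovery: attempt a TCP connect to one port per address."""
    found = []
    for addr in ipaddress.ip_network(network).hosts():
        try:
            with socket.create_connection((str(addr), port), timeout=timeout):
                found.append(str(addr))
        except OSError:  # refused, timed out, unreachable, etc.
            pass
    return found

if __name__ == "__main__":
    # Example subnet/port - substitute your own management network.
    print(hosts_with_open_port("192.168.1.0/29", 22))
```

This roughly matches what `nmap -p 22 192.168.1.0/29` would report as open, minus all of nmap’s ICMP/ARP tricks.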

I also took some time to review the DNS records on Cloudflare, and I should be a little closer to having proper DMARC/DKIM/SPF. Not perfect by any means, before anyone gets ideas. It’s tough to get this crap done right.
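
For anyone else wrestling with the same records: all three are just TXT records, shaped something like the following (the domain, selector, and policy here are illustrative, not my actual records):

```
; SPF: which hosts may send mail for the domain
example.com.                  TXT  "v=spf1 mx a:mail.example.com -all"

; DKIM: public key published under a selector chosen by the signing MTA
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<base64 public key>"

; DMARC: what receivers should do when SPF/DKIM fail alignment
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```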

Getting Nextcloud’s notify_push to work is proving to be very tough. I was hoping to have that and Spreed/Talk HPB running by the end of the year, but I’ve come to the conclusion that it probably won’t happen.
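
In case it helps anyone else: notify_push needs a reverse proxy in front of its binary - the push daemon listens locally (port 7867 by default) and the web server has to forward /push/ to it, WebSocket upgrades included. A sketch of the nginx stanza, assuming the defaults:

```
location ^~ /push/ {
    proxy_pass http://127.0.0.1:7867/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Once the proxy works, `occ notify_push:setup https://<your domain>/push` should report success.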

Started looking into ARM servers, just to see what’s available on the used market. The answer: nothing affordable, at least in my area. I was wondering if I could play around with ESXi on ARM64, maybe run an AOSP VM or four? Yeah, that’s out the window.

Still waiting to move to the Gen9 in the future…

Converted the CentOS Stream VM to Rocky Linux:

I purchased a 2nd FirePro S9300 X2. Can’t wait to see if I can fit it in the DL580 Gen9…

Just ran into this:

Went ahead and grabbed the Sonnet card, in case this comes up in macOS Big Sur.

I’ve come to the conclusion that VDI (for other users) will have to wait until I can get a second DL580 Gen9 after moving out. I’d replace the current FirePro S9300 x2 with a Radeon Pro W6800 and move all of the FirePro S9300 x2s to the dedicated VDI host. A single DL580 Gen9 can power up to three of the FirePros, so the VDI host would have six GPUs available if I ever go for it. Assuming I threw the same 10x HGST HUSMM8040ASS200/HUSMM8040ASS201s at this host, there’d be a little under 4TB of SAS storage available as well. A mix of GPU-equipped and CPU-only VDI instances would be possible. Windows and Linux only though - I don’t think macOS VDI exists as a supported solution at this time…

Attempting to troubleshoot Wazuh: