Proxmox as a Host, How is it?

I am looking at setting up my servers for full-time use. No part-time BS. So I thought about it, and unless I want to use Windows as a host, which I don't, I just want to use a VM framework. I want to be able to run whatever applications I need on either machine, so what the hell, what makes more sense than Proxmox?

Here are the specs of my machines.

Dell PowerEdge R620

1x Xeon E5-2690 @ stock
4X 2X4 16GB DIMM
2 built-in NICs + LOC
PERC H710

  • 2x 300GB SAS (Dell-specific till I update the firmware)

  • SD Card / USB / DVD Boot

  • 4 HDD slots front, none internal

I have another 2690 that I can put in here, but for now it is being used in my desktop, and will be until after this is set up.

Dell PowerEdge R510

2x 6C/12T X5600-series Xeons

  • Actually usable PCIe slots that can take GPUs
  • 96GB RAM, same DIMMs
  • Built-in 4-port NIC + LOC
  • 2x 3TB SAS + whatever other drives go in
  • PERC H710, 12 slots

Basically I need to know what y'all do with Proxmox and how consistently it works for you. I need to run Windows 10 VMs on the 2690 with tools such as RegShot and LOGNT32 running, and if on top of that Proxmox had a memory reader, that would be dope.

On the older platform I need to run my services, such as my firewall, as well as other platform tools. As the machine has a 4-port NIC, would I be able to pass through individual ports? Or, since they all go to one controller I think, would I need to go get another NIC or two?

Or, would it be better if I put the other 2690 back in, used the 620 for services as it definitely has 2 individual NICs, and used the 510 for development work? I just don't like that it's X-series based, even IF it's the best line of them.

1 Like

Proxmox is just Debian with a web GUI for managing KVM VMs.

Does what it says on the tin most times.

7 Likes
  1. Workstation

1.1. Overview
I use a simple Proxmox setup as a workstation. Except for enabling VFIO I keep it stock, so nothing breaks during updates. Proxmox performed best for me when it comes to latency and overall performance. I don't bind any devices to vfio, but I blacklisted all GPU drivers on the host except the Intel one. I don't use any clustering or failover, because I switch this PC off when it is not in use. I use it as a "2 users, one PC" setup in order to run two HW-accelerated GPUs and VMs at the same time.
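For anyone wanting to copy this: the "blacklist everything but Intel" part plus the IOMMU flag boils down to roughly the sketch below. Driver names and the GRUB flag depend on your hardware, so treat it as a starting point, not a recipe.

```bash
# blacklist every GPU driver except the Intel one, so the host only ever touches the iGPU
cat > /etc/modprobe.d/blacklist-gpu.conf <<'EOF'
blacklist nouveau
blacklist nvidia
blacklist amdgpu
blacklist radeon
EOF

# enable the IOMMU (Intel board here; an AMD board would want amd_iommu=on instead)
# edit /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub
update-initramfs -u -k all
reboot
```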

1.2. Hardware

  • Intel i9 9900K, 64GB DDR4 RAM
  • Fractal Torrent Case
  • 1x GeForce RTX 3080 Ti in the 16x PCIe slot
  • 1x Radeon RX 6600 in the 4x PCIe slot
  • 2x Renesas PCIe USB 3.0 cards, one connected through a right-angled PCIe connector and screwed to the backside of the case inside, with the USB 3 ports routed outside via Delock USB 3.0 bracket cables
  • 1x Seasonic Prime TX 1000W PSU
  • 1x MSI Z370-A Pro

1.3. Storage

  • 1x spinning rust WD Red 4TB
  • 2x Samsung QVO 2TB
  • 2x Samsung Evo 850 500 GB
  • 1x Samsung Evo 970 nvme 500 GB
  • 1x Samsung SSD 830 as boot drive
  • The SATA SSDs are in a ZFS pool with 2 mirror vdevs. The others are standalone.

1.4. VMs

  • 1x LXC container running Debian for Samba / file sharing and the Linux OneDrive client, so I can access my data even when I am switching VMs. I added an internal network bridge for this.
  • 1x Gaming with vfio passthrough
  • 1x MacOS running on RX 6600
  • various VMs like Windows 8 for older games or for a second seat on the same PC
  • 1x Linux Manjaro VM

1.5. USB Routing / Switching
I have a Chinese no-name USB 3.0 hub (it is actually meant for a 5.25" drive bay) with 8x USB ports which accepts two different USB sources, so I get 4x USB ports for each of my active VMs. I simply put some rubber feet on it and placed it on my desk. Additionally I use a Roline USB 2.0 switch in order to switch between 4 USB sources (card 1 on VM 1, card 2 on VM 2, an internal port for administration on the host or the Mac OS VM, and a second machine I sometimes have there).

If I find the motivation I'll write about my even more boring storage server, which also uses Proxmox.

4 Likes

Interesting. I want to have WOL set up on all my servers and render appliances so I have them when I need them and don't have to wait 10 minutes.

Please do, I need info lol

2 Likes

Boot time is in the realm of "I started an OS from spinning rust", approx. 30 seconds. I have my Linux VM with VFIO set to autostart. Basically the PC behaves like a normal desktop PC: I switch it on and get to my desktop.

My storage server boots from nvme and this feels nearly instantaneous

2. Storage Server

2.0. Overview

This machine collects my previous hardware. It sits in an old Cooler Master CM 690 II case with a Silverstone 4-bay SATA enclosure for hot-swap. I used this for running a GPU-accelerated Mac OS VM, but my AMD RX 580 died, so I moved the VM to my main PC and swapped the GPUs / brought the RTX 2070 over.

2.1. Hardware

  • Mainboard MSI Z390-A Pro
  • CPU i7-8700K
  • RAM 16GB DDR4
  • be quiet 650W PSU
  • Zotac GeForce RTX 2070 mini

2.2. Storage

  • 2x WD Red Pro 8TB
  • 2x Toshiba Enterprise 15TB
  • 1x Samsung 840 pro 128GB
  • 1x Samsung 980 Pro 1TB nvme in a PCIe 4x addon card
  • 1x Kioxia 256GB NVMe SSD I salvaged from a dead HP notebook (boot drive)

The WD and Toshiba HDDs are each in a mirror vdev; these form a ZFS pool with the 840 Pro as L2ARC cache. This saved my ass, because previously I used old 4TB desktop HDDs which failed one after another. I could replace each one, keep the pool, and later expand it by replacing the remaining old HDDs with bigger ones and resilvering after each replacement.
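For reference, the replace-and-resilver dance is only a few zpool commands; pool and device names below are made up, check zpool status for yours.

```bash
# hypothetical pool/device names - "zpool status" tells you the real ones
zpool status tank                          # identify the failed disk
zpool replace tank sdc /dev/sdf            # swap the dead 4TB for the new, bigger disk
zpool status tank                          # watch the resilver progress

# once every disk in a vdev has been replaced with a bigger one:
zpool set autoexpand=on tank               # let the pool grow into the new capacity

# the L2ARC cache device gets added like this:
zpool add tank cache /dev/disk/by-id/ata-Samsung_SSD_840_PRO   # shortened id, yours will differ
```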

Currently running is a Debian LXC container which has access to a storage dataset and shares it via Samba. I use this as a VM backup share for my main machine and as a data grave. Also living here is a VM of my previous bare metal Windows installation, which I keep around until I have gotten everything off of it.

Because I cannot live-migrate VMs with this setup, I have to back them up and restore them on the target machine.
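In practice that's a vzdump on one host and a restore on the other, roughly like this (VM ID, storage names and the dump filename are placeholders):

```bash
# on the source host: dump VM 100 to a backup storage both hosts can reach
vzdump 100 --storage backup-nfs --mode snapshot --compress zstd

# on the target host: restore the dump as VM 100 onto local storage
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 100 --storage local-lvm
# (for containers it's pct restore instead of qmrestore)
```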

Plans: I will remove the GPU because I don't have much need for it, and replace it with either a flashed LSI RAID/controller card I got from a decommissioned HP server for, I think, 8 additional SATA ports, or a 24-port Broadcom adapter with an Icy Dock 24-device enclosure.

I plan to run Jellyfin, Tailscale, Nextcloud, and an online book reader / storage on it to get rid of third-party dependencies. Maybe I'll move the Pi-hole instance and my UniFi cloud controller there as well - they currently live on a Raspberry Pi 3B.

2 Likes

My biggest peeve with Proxmox is that adding / removing PCIe devices breaks network functionality when the Ethernet device name changes. The fix is one config file away (the config of the default network bridge), but man, does this suck if you are running a headless server. With my consumer shit I still have the Intel iGPU as a fallback; try this with a fully loaded Epyc or Threadripper system, or even a Ryzen without an iGPU.
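For anyone who runs into this: the file in question is /etc/network/interfaces, and the fix is just pointing the bridge at the renamed NIC. Rough sketch, with made-up old/new interface names:

```bash
ip -br link                                           # find out what the NIC is called now
# point vmbr0 at the new name (enp3s0 -> enp4s0 here is hypothetical)
sed -i 's/bridge-ports enp3s0/bridge-ports enp4s0/' /etc/network/interfaces
ifreload -a                                           # ifupdown2, the default on current Proxmox
```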

2 Likes

I've been using Proxmox forever; I still have my first install going from about 8 years ago. The only issue I had is that I accidentally fucked the bootloader (100% my fault) on a major update (well, several - I kinda left it running for several years with no updates), but I was able to repair it. At least for home use, I use it to let my less hardware-minded friends have free cloud resources, and in return I get a prod-like environment and angry messages if anything breaks (power outages are the main issue I've had, because I don't have it on a UPS, or if I break the VPN they use to access it). My main pain with it is that if you're not paying for a license you have to manually add the community repos (see the sketch below the specs), since otherwise apt updates break because it tries to use the enterprise repo, plus there's an annoying pop-up that complains you're not licensed.
tldr it's been solid as a rock
It's running on a now ancient HP ProLiant DL360:

  • CPU: 2x X5570 (4 cores each)
  • RAM: 141GiB
  • Storage: random assorted hard drives
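The repo fix-up looks roughly like this ("bookworm" stands in for whatever Debian release your Proxmox version is based on):

```bash
# disable the enterprise repo (needs a subscription) and add the community one
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt dist-upgrade
```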

For work I also run and manage 6 separate Proxmox clusters spread around the world, each with 4 machines; for storage I'm using Proxmox's managed CephFS.
They're all super solid and I pretty much never get tickets related to them, and before I got them they were neglected for years and still ran solid.

The main con I have with it is that its Terraform support is hot garbage.
Otherwise, just make sure you set your VMs to auto restart (one-liner below), especially if you're on unstable power - I forgot a few times and it was always a pain when I did.
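Setting that is a one-liner per guest, for example (IDs made up):

```bash
qm set 100 --onboot 1     # VM 100 starts automatically when the host boots
pct set 101 --onboot 1    # same for container 101
```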

2 Likes

Wouldn't you have access to the serial console or IPMI / iLO / BMC, etc. on a headless server?

That's so funny, man. I say the exact same thing about Cockpit and Red Hat.

Cockpit really is just a web GUI on top of a Red Hat system - Proxmox for Red Hat. Does what it says on the tin most of the time.

:joy::joy::joy:

+ SELinux automation


Anyways @FaunCB, Proxmox is pretty good if you're okay with the overhead of a VM, but you have to realize what you're getting into: a VM costs resources. Usually, if you're going to build a virtual machine system, you build a much beefier system than normal to help with that. It's a great way to manage your appliances. I got into the whole container thing and it's been a pain for me, so just know what you're getting into before you do.

If I may speak from my own personal experience, VMs are better than containers, but nothing replaces bare metal as an installation medium.

2 Likes

Yeah, that's why I'm doing my shit on a server in a web browser instead of with Hyper-V.

1 Like

On server HW, sure - on workstation HW this could get annoying, especially if you test something and you have to fix it every time the configuration changes. On consumer-grade HW you can't even set the default graphics output on most boards.

2 Likes

Working daily and directly with my Proxmox system, even with games and apps which require low latency, I don't notice any difference from bare metal, except the increased boot time.

I mean, most PCs are idle 90% of the time, so why not run / host multiple things on them?

VMs keep the environments separated. Thin-provisioned VMs take even less space than bare metal installations. Additionally I can snapshot and roll back anything I like, plus I can compress the hell out of used HDD space with ZFS. If you like containers you could run LXC on Proxmox; otherwise nobody is stopping you from loading a Linux VM full of Docker containers.
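The snapshot / rollback part is literally a couple of commands if you don't feel like clicking through the GUI (VM ID and snapshot name below are placeholders):

```bash
qm snapshot 100 pre-update      # take a snapshot of VM 100 before messing with it
qm rollback 100 pre-update      # roll back if the experiment went sideways
qm listsnapshot 100             # see what snapshots exist
```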

Hardware requirements:
It really depends what you are stuffing into your machine.
Two GPUs are taxing on case cooling, even if you had a bare metal setup like GPU 1 for gaming and GPU 2 for, let's say, compute tasks.
The biggest additional requirements I encountered are RAM, extra USB interfaces if you use VFIO, and maybe a USB switch if you don't use a software solution.

Software shenanigans:
I can simply restart hanging VMs most of the time. VMs also allow installing an unmodified Windows 11 on systems without a TPM chip, because Proxmox / KVM can emulate/virtualize one.
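On current Proxmox the emulated TPM is a single setting on the VM; roughly like this, with a made-up VM ID and storage name (Windows 11 also wants UEFI/OVMF):

```bash
# add an emulated TPM 2.0 state disk to VM 100 on the "local-lvm" storage
qm set 100 --tpmstate0 local-lvm:1,version=v2.0
# Windows 11 additionally expects UEFI boot (OVMF) with an EFI disk
qm set 100 --bios ovmf --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1
```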

3 Likes

Honestly, bare metal as an installation medium is the worst, at least in my experience, though I'm still getting into learning PXE so maybe my opinion will change. It's just generally harder to automate and to keep a consistent system for scripts to deal with, whereas with automation and VMs or containers you can spin up 1000 servers in minutes, all with the same environment. Though a lot of that ease falls apart when you're the one who has to manage all the bare metal that's running the API you're talking to.

I use VMware at work. At home I used XCP-ng for a while. Tried out Proxmox and am moving my stuff over to it, and will get rid of XCP-ng. For a hypervisor I want something I can rely on; I've had too many issues with XCP-ng.

1 Like

Very interesting to see your setup… do you have another server? Can you share the specs for your other server please?

Are you running MacOS inside Proxmox? And is the RX 6600 really enough for MacOS?

Yes, I am running MacOS inside Proxmox, because it is easier (for me) to virtualize the system than to assemble and maintain a bare metal system for Mac OS. Plus I can roll back if I mess things up. The RX 6600 is pretty high end compared to anything in a stock Intel-based Mac. Don't mistake it for the rubbish RX 6500. Older versions like Big Sur could run on GPUs like the Nvidia GTX 670, 760, or even the 730 I think. Mac OS doesn't need that many HW resources - I mean, Big Sur ran on MacBook Airs from 2011-12ish.

This blog is a good resource to get Mac OS up and running in Proxmox:
https://www.nicksherlock.com/2022/06/installing-macos-13-ventura-developer-beta-on-proxmox-7-2/

Here are some (outdated) photos of my system:

My specs for my 2 Proxmox systems are earlier in this thread.

1 Like

I used Proxmox at work and at home. While it is "just Debian" with a fancy GUI, I would not describe it as that. Managing Proxmox, besides doing an apt update and dist-upgrade from time to time, does not feel like managing Debian. You mostly use the GUI, but even when you use the CLI, Proxmox has its own tooling, like qm for VMs and pct for containers (basically wrappers around QEMU and LXC made by Proxmox). On Debian, you would be using virt-manager and either direct lxc-* commands or lxc (if you use LXD).
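To make that concrete, the day-to-day commands map roughly like this (the IDs and the domain name are made up):

```bash
# Proxmox tooling - guests are addressed by numeric ID
qm list                  # list VMs
qm start 100             # start VM 100
pct start 101            # start LXC container 101

# plain Debian with libvirt / virt-manager - guests are addressed by name
virsh list --all
virsh start win10-work   # hypothetical domain name
```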

The biggest difference between Proxmox and Debian, I would say, is the kernel; Proxmox just has so much more stuff in it. Which in this case is a good thing: you get ZFS and Ceph by default.

So it just comes down to your preference for how you want to administer the VMs inside it. I moved away from Proxmox to virt-manager. I have some quirks with Void, but all are easy to work around. I'm running Windows in a VM; I have not set up another VM for the second GPU. I mostly go in via SSH and virsh start VMs.

You can enable auto-start of VMs on host startup on both Proxmox and virt-manager, but Proxmox gives you a bit more control via the GUI if you want to create a dependency table. Say your host runs DNS and DHCP in 2 VMs, and another VM needs those to get its NFS rootfs from a NAS. You want the start order of your VMs to be: DHCP VM, DNS VM, then whatever other VM. Proxmox gives you the option right in the GUI.
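That GUI option maps to the startup property on each VM, so the DHCP -> DNS -> everything-else ordering could be set like this (VM IDs and delays are made up):

```bash
qm set 100 --startup order=1,up=30   # DHCP VM first, wait 30s before the next one
qm set 101 --startup order=2,up=15   # DNS VM second
qm set 102 --startup order=3         # the NFS-root VM last
```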

With virt-manager, you can do that with virsh start in a script, but you get finer control over what you can do. For example, Proxmox only verifies that the VM has started up, then starts the next, but with a script you can do some basic checks: SSH into the VM, echo something, and once you get the echo back, start the next VM. Or even better, SSH in, do a service check on dhcpd for example, and if it's up, start the next VM.
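A rough sketch of that idea (VM names, addresses and service names are all made up, and it assumes key-based SSH into the guests):

```bash
#!/usr/bin/env bash
set -e

# start the DHCP VM and wait until its service actually answers
virsh start dhcp-vm
until ssh root@192.168.1.2 systemctl is-active --quiet dhcpd; do
    sleep 5
done

# then the DNS VM
virsh start dns-vm
until ssh root@192.168.1.3 systemctl is-active --quiet unbound; do
    sleep 5
done

# only now start the VM that needs both of them for its NFS rootfs
virsh start nfs-client-vm
```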

You can do this with Proxmox too if you go into the CLI; just replace virsh with qm and the VM name with the VM number / ID and you're good.

There are more differences, but I would say that Proxmox is the easier one to get into and offers you some sane defaults (like ZFS) right out of the box. For businesses, it makes clustering really easy.

1 Like

I was curious, can you join multiple physical servers together into one VM in Proxmox?

Yes… I was watching some people use a Hackintosh for iOS software development; my friend still uses an RX 550 for that purpose…