Moving to Proxmox, and hardware considerations

Hi all! I'm new to this forum, and I'm sorry if this post goes under the wrong subforum.

I’ve been running an Ubuntu server for internal network services (media and NAS stuff mainly) for the past few years. My build has gone through a few iterations, starting with an i3-8100 and ending up with an E5-1620 v3 (motherboard: Supermicro X10SRL-F, RAM: 128 GB). I’ve had no complaints with my current platform decision, but now I’ve started to dip my toes into the virtualization world, and I feel that I am at a crossroads.

The 4C/8T CPU was fine for my needs, but I now estimate that I would be running at least 4 VMs in the future, and I might run out of threads if any of those machines actually started using real resources. My current guess for the virtual machines-to-be would be:

  • 2x Ubuntu Server 22.04.1
  • TrueNAS Core (or Scale, still considering which)
  • Home Assistant
  • Nginx Proxy Manager (LXC container)

Take into consideration that this would be the starting point. TrueNAS would be for my two ZFS storage pools, and I guess it will be the biggest resource hog of the bunch at the beginning. I am already salivating at the possibilities; at minimum, a Pi-hole VM would follow close behind.

I also have quite a few PCIe devices consisting of:

  • SAS2008 HBA (IT mode)
  • HP SAS Expander
  • Mellanox ConnectX-3 (single-port card)
  • Asus Hyper M.2 Gen4

This number of PCIe devices probably limits me to “proper” server platforms. So, what should I do? Upgrade the current CPU to something with more cores (8C/16T and above), or just ditch the current platform and go for Epyc Rome (something like this)? Electricity isn’t cheap right now here in the EU, so power consumption under light-to-moderate load is of great importance to me. Currently my Haswell sips just 55-60 W (without HDDs; with HDDs, ~100 W).

I wouldn’t want to talk you out of an Epyc option, though from your writing, your concern about threads seems a theoretical problem, at least at the moment. Isn’t it worth building the workloads on the hardware you have? Even without rebuilding the host on Proxmox now, you can move the disk images from KVM to Proxmox later, once you know what the resources really are.

A Pi-hole is not going to crush a core all on its own; for that matter, running it on a Pi Zero might draw less power than your fans.

And if you need more cores: if you still have the i3, use it with disk images on network shares (you already seem to have, or plan to have, a 10G card) served from your TrueNAS or the Proxmox host. Yes, it’s an additional computer (if it turns out you need it), but to state the obvious, replacing hardware also costs money.
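Roughly, on the spare box, that could look something like this (a minimal sketch; the hostname, export path, and image name are just examples):

```bash
# Mount the NFS export from the TrueNAS box (example paths)
sudo mkdir -p /var/lib/libvirt/images
sudo mount -t nfs truenas.lan:/mnt/tank/vm-images /var/lib/libvirt/images

# Then point the guest's disk at a qcow2 on the share, e.g. in the domain XML:
#   <disk type='file' device='disk'>
#     <source file='/var/lib/libvirt/images/svc1.qcow2'/>
#   </disk>
```

Over 10G, NFS-backed images are usually fine for light service VMs.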

But if you do want to replace the host hardware as an impetus for other projects, I can’t say it looks bad at all (it might after tax).

I’ve had some time to think about this since writing, and to be honest with myself, the Epyc is more of a “want” than a “need”. My current workloads hardly justify my current hardware, and even if I split the services into different VMs in the future, I would probably still be fine. I’m just clinging to the word “probably” a bit too hard :smile:

Could you maybe clarify this part a bit? I have some trouble understanding what you mean by this.

I unfortunately do not have the i3 anymore, but I do have a Lenovo SFF M920q with an i5-8500T running as my Plex/Jellyfin server. And yes, I do have the 10G Mellanox card up and running in the server.

If I understood right, you would prefer to run the actual VMs on the Lenovo rather than converting my current main server into a Proxmox host? I haven’t really thought of that as an option yet.

One option would be to just sell the Lenovo after the main server is up and running. It has been under a really light load the whole time, and if I went with Epyc, this would be an easy consolidation. Also, Plex/Jellyfin are not so essential that I couldn’t live with a little downtime. One of my long-term goals is to consolidate as much as reasonably possible, so the need for separate servers is minimal.

Could you maybe clarify this part a bit? I have some trouble understanding what you mean

I mean to say: if you have a question about threads, virtualise all your workload now, on the current Ubuntu host. You can use the same virtual disk images later on Proxmox (I wouldn’t change the host every day, but I’ve gone both directions). Then you can gauge resources based on what they really do.
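For reference, the later move to Proxmox can be as simple as this (a minimal sketch; the VM ID, name, and storage are placeholders):

```bash
# On the Proxmox host: create an empty VM, then import the existing qcow2
qm create 101 --name svc1 --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 101 /path/to/svc1.qcow2 local-lvm

# Attach the imported disk and make it bootable
qm set 101 --scsi0 local-lvm:vm-101-disk-0 --boot order=scsi0
```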

I wouldn’t say I ‘prefer’ running any overflow VMs on a spare i5, but it’s an option involving no capital. I’d note that it may well be that the i5 runs faster per core; another variable.

If the goal is to consolidate, maybe your Haswell can already handle it. If it’s too constrained, you have the options of a CPU upgrade on your existing platform, the Epyc, or something else (I think you certainly want a server platform, as you said). But you might find (utilization spikes aside) that you need fewer cores than you think.

Ah, got it now!

I fully agree and have actually gone this route. I’ve started to prepare some VMs in KVM so that I can estimate how many machines are needed.
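For the curious, the scratch VMs are nothing fancy; something along these lines with virt-install (the name, sizes, and ISO path are just examples):

```bash
# Throwaway sizing VM under plain KVM/libvirt
virt-install \
  --name svc-test \
  --memory 2048 --vcpus 2 \
  --disk size=20 \
  --os-variant ubuntu22.04 \
  --cdrom ~/isos/ubuntu-22.04-live-server-amd64.iso
```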

The whole “Epyc or not” discussion is still ongoing. I am somewhat reluctant to invest in Haswell, even if a processor would be cheap. An Epyc CPU and motherboard are not that much money, use pretty much the same amount of power, and provide much more in terms of platform.

But I think I’ll do the following.

  • Build a few VMs, migrate services, and run them for a while.
  • See if my current Haswell is up to the task.
  • If not, go straight to Epyc and enjoy the abundance.

Replying to myself just to keep a log of things.

I migrated some of my services to a few VMs (running in KVM), using as few services and as little in the way of resources as I could manage. The CPU is already running at a constant 10%, and the load is very light. I’m still far from done with the VMs; I imagine I’ll need at least 2 more (one for TrueNAS and another for services), which would be the most resource-intensive ones. At this point I would estimate that the CPU load will end up somewhere in the range of 20-30% just idling.
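If anyone wants to reproduce the measurement, per-guest CPU time is easy to pull from libvirt (both of these are standard tools):

```bash
# Cumulative per-VM CPU time; sample twice and diff to get utilization
virsh domstats --cpu-total

# Or watch live per-guest usage (separate virt-top package)
virt-top
```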

Maybe there are some optimizations to be done, but they can’t have that great an effect when the VMs are basically just running an idling OS. The only thing I can think of that would help would be to decrease the number of VMs, but I need a certain amount of them to make this project worthwhile.

So now I’ve got myself an excuse to buy new hardware! I just ordered an H12SSL-I and a 7302P, which could probably run 3x my current max load without breaking a sweat. Yes, I know, it is probably too much horsepower for my needs, but that just gives me more room to play with. After all, this is a hobby :slight_smile:. This also gives me a chance to relieve my SFF Lenovo of its Plex/Jellyfin duties. It's kinda hard to justify running it now that I have enough CPU to just software-transcode an occasional stream.


I will just say that the amount of stuff I can run on my 7351P is EPYC.

I’ll see myself out.


This is a bit off topic, but I was wondering how you set up your Plex/Jellyfin server? I have the same computer (mini) with Proxmox on it running a Pi-hole container, and I was thinking about setting up Jellyfin and Plex to take the load off my Synology. Do you have a guide you would recommend?

The SFF PC is just running Ubuntu (22.04.1), not Proxmox. Plex and Jellyfin both run on bare metal, and the media is mounted via network shares. There's really no need to put those in containers, in my opinion. It's just a very basic setup; I don’t have a specific guide for it.
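For reference, a network-share mount in fstab can look something like this (assuming NFS; the hostname and paths are examples):

```bash
# /etc/fstab on the Plex/Jellyfin box: read-only NFS mount for media
truenas.lan:/mnt/tank/media  /mnt/media  nfs  ro,_netdev  0  0
```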

Back again for opinions.

Progress has been slow. I have now migrated my hardware setup to Epyc. The first motherboard was actually DOA, which made the hardware swap take a few more weeks. And man, this thing is a beast. I actually dropped the Lenovo from my homelab because the Epyc can chew through anything I throw at it to transcode. My transcoding needs are quite small; the criterion was to survive an occasional 4K HEVC stream with tone mapping, and that is no problem.

On the software side of things, progress has also been quite slow. I’ve now migrated 90% of the running services to separate VMs. I now have the following VMs; the host OS is still Ubuntu 22.04. I will make the swap to Proxmox once all the services that can live in VMs are in VMs.

  • Home Assistant
  • Lubuntu VM for various services (might change this to an Ubuntu Server VM, but we’ll see)
  • Ubuntu 22.04 VM for Media Server (Plex/Jellyfin)
  • Ubuntu 20.04 VM for various services (Searxng, Vaultwarden etc.)
  • TrueNAS Scale 22.12 VM for network shares and ZFS stuff (still setting this up)

The odd selection of Ubuntu versions has its reasons. Lubuntu was just a wildcard I wanted to try: something lightweight, and a different flavor for once. It has been fine, but I see no real advantages over Ubuntu Server for my uses.

Ubuntu 20.04 was selected because I run one piece of software that requires an older version of Python, which 20.04 still ships. Since I had it running, I decided to offload some other lightweight services from the host to this VM. Somewhere in the future I will upgrade it to 22.04, and if that breaks something, so be it.

When setting up TrueNAS I found myself facing a decision about Nextcloud. Should I run Nextcloud inside TrueNAS (as Docker), or inside my Ubuntu 20.04 VM? Running it on Ubuntu would be more familiar to me, and it gives me more install options (Docker, Snap, or manual install). Running it on TrueNAS has the benefit of initial ease of installation, but I have no experience running containers on TrueNAS.

Soooo. Any opinions on that one?

Gave you a like on that post even though you should be tried for a war crime.

Epyc… :roll_eyes:

I am running my Nextcloud as a manual install on a Debian VM. This gives the most flexibility in configuring the storage, i.e. you could mount extra storage on the VM and hand it over to Nextcloud very easily. While this is possible in the Docker config, it is a lot more work.
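As a rough sketch of what that looks like (the device name, mount point, and data path are examples; the data directory is easiest to pick during the initial Nextcloud setup):

```bash
# Give the VM an extra virtual disk, then hand it to Nextcloud as the data dir
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/nc-data
sudo mount /dev/vdb /mnt/nc-data
sudo chown -R www-data:www-data /mnt/nc-data

# Then point Nextcloud at it, e.g. in config/config.php:
#   'datadirectory' => '/mnt/nc-data',
```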

As you will have a VM host anyway, there really is no need to nest things unless you just want to try it. While TrueNAS Scale is a capable VM host, it pales in comparison to a real KVM install.
