Advice for New Server Build (hypervisor/services machine)

Background:
My current server has been running Unraid for a little over a year, but I am currently planning to move to Ubuntu/Ubuntu Server (this could change; I'm open to suggestions).

CPU: i7-4790k
Motherboard: Z97 MSI Gaming 7
RAM: 32gb ddr3
Storage: 5x 4tb HDD (1 drive parity), 500gb SATA ssd for cache

I am building a new server, but I'm also splitting things into 2 separate servers.

Storage Server: (this will run a ZFS pool, file sharing, GitLab, and maybe some other small things like DDNS or AdGuard)

(parts I already have)

  • CPU: i7-8700
  • Motherboard: MSI Z370 Gaming Plus
  • RAM: 32gb ddr4 (might get more)
  • Storage: 4x Seagate X20 20TB drives (raidz2; these are new and hopefully the most storage I will ever need) + 2x Crucial MX500 500gb (maybe ZFS mirror or RAID1) + an extra 500gb SSD I have lying around for cache.

I am going to use the old Unraid server hardware for now but I will eventually build a new hypervisor/services server in the next few months.

Currently the idea is:

(Hypervisor/Services Server)

  • CPU: Ryzen 9 7900
  • Motherboard: ???
  • Cooler: Noctua NH-D9L (compatibility with any future rackmounting)
  • RAM: 64gb ddr5 6000 (not sure if 6000 is necessary/overkill)
  • Storage: SSD boot drive (maybe 2 in a ZFS mirror/RAID1?) + undetermined SSDs (maybe one 1-4TB SSD, or maybe another mirror/RAID1?)

Questions:
I am mainly looking for motherboard/storage recommendations for the hypervisor/services machine but I am also interested in any thoughts/holes you can poke in my plan.

I am open to all suggestions. Thanks in advance!

Well, I do see one big problem:

Energy costs

The first thing I look at with a new server setup is the power draw - running something 24/7 racks up the power bill fast, so efficiency quickly becomes a factor. Every watt running 24/7 consumes about 720 Wh a month, or roughly 8.76 kWh over a full year.

This makes it trivial to calculate how much a single watt costs you per year, and it scales linearly with your electricity price: just multiply your price per kWh by ~8.76. In my case I pay 15 cents per kWh, so continuous draw costs me about $1.30 per watt per year.

The i7 you have there will draw around 35 W at idle; add another few watts for the motherboard and drives and we're talking an average draw of maybe 60 W. This server will thus cost you roughly 525 kWh a year to run.
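
To make the arithmetic concrete, here is a small Python sketch of the same estimate (the 60 W and $0.15/kWh figures are just the examples from above - plug in your own numbers):

```python
# Rough annual energy use and cost for a machine running 24/7.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_cost(avg_watts: float, price_per_kwh: float) -> tuple[float, float]:
    """Return (kWh per year, cost per year) for a constant average draw."""
    kwh_per_year = avg_watts * HOURS_PER_YEAR / 1000
    return kwh_per_year, kwh_per_year * price_per_kwh

# Example figures from the post: ~60 W average draw at $0.15/kWh.
kwh, cost = annual_cost(avg_watts=60, price_per_kwh=0.15)
print(f"{kwh:.0f} kWh/year, ~${cost:.0f}/year")  # ~526 kWh, ~$79
```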

I would attempt to tweak that downward as much as possible, maybe even consider investing in new hardware here. The Ryzen 7900 has a much better energy footprint, but again, we're talking about a system draw of 65-70 W or so, which still works out to roughly 600 kWh per year.

I also want to mention that modern M.2-SSD NAS systems like the Asustor Flashstor draw roughly 15-20 W for the whole system, bringing the energy cost down to around 130-175 kWh per year. So this is something to think about, but for now you have what you have and it will have to do. After all, an imperfect solution now is better than waiting an eternity for a perfect one. :)

Motherboard

If you want all the server features and bells & whistles, with IPMI and ECC support, the ASRock B650D4U and the Supermicro H13SAE-MF are both good options at around $500.

If you only want ECC support, most ASUS boards support (unbuffered) ECC and they start at around $200.

If you don’t care about ECC support either but want small form factor, the Gigabyte Aorus B650I Ultra is a top pick by many for a reason.

Otherwise, any MSI, Gigabyte, or ASRock option above $200 is pretty much good enough.

Storage

Dual 4TB M.2 SSDs. No need to mirror - you would have to shut the system down to swap a drive anyway, and SMART monitoring is good and reliable enough that an SSD just dying on you without any warning is exceedingly rare.

You can even set the system up to boot into a RAMdisk quite easily, at which point the entire Linux server boots from a ~200 MB compressed image in the EFI partition and is more or less immutable beyond that. I do not recommend you do the same for the apps running on the server, but…

Since you asked: what benefit do you expect from taking your existing hypervisor (Unraid) down and building two separate physical servers?
You could run the ZFS storage server you mentioned as a VM inside Unraid, or run the ZFS pools in Unraid itself alongside the VMs you mentioned. Everything you itemized can be done in Unraid or any other hypervisor.
I'm curious why you want to spend more money and manage two systems instead of keeping all of it on one physical box.
The biggest benefits would be:

  • Power cost
  • Management
  • Physical space

It's your project and your money. Share the journey - everyone appreciates reading about the adventures.

Well, congrats! You've convinced me - I think I'm going to go with a single server.

  1. I haven't used Proxmox before, but I'm going to play around with it this weekend and see how it goes. (I think this would still give me the separation I want between storage and applications while staying on the same physical system.)

  2. I haven't had anything with IPMI before, so I don't necessarily need a server motherboard. (I might try a PiKVM in the future for something like that, since it could be used for multiple machines too.)

  3. I thought about ECC before, but I've decided I'm OK without it. (I'm still open to reconsidering, but I don't think it will be necessary.) DDR5 has some level of ECC built into the modules, and ZFS should catch anything related to bit rot or corruption, right? So as long as I maintain proper backups, I don't think ECC is that big of a deal for my use case. Are all of these assumptions correct?

The main benefit I was thinking of was separation of storage and the applications. This would allow me to make more drastic changes or mess with things without worrying about also messing up the storage in any way.

There have been a couple of points where I would rather just reinstall the OS for a clean slate than fix/clean up something I was messing with or trying for the first time - or even switch to a different OS entirely.

That being said, I think the best way forward for me is to try Proxmox. It gives me that flexibility while staying contained on one physical system. I can set up my storage in one VM and my applications in another, and if I ever want to try out something new I can always spin up a temporary VM that gets destroyed when I'm done.
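
For that last part - spinning up and tearing down throwaway VMs - Proxmox has a REST API, so this can even be scripted. Below is a rough sketch using the proxmoxer Python library; the host address, credentials, node name, VM ID, and VM options are all placeholders picked for illustration, so treat it as a starting point rather than a recipe:

```python
# Sketch: create and later destroy a scratch VM through the Proxmox API.
# Assumes the 'proxmoxer' package; host, credentials, node name and VM ID
# below are placeholders for illustration only.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("192.168.1.10", user="root@pam",
                     password="changeme", verify_ssl=False)

NODE, VMID = "pve", 9001  # placeholder node name and an unused VM ID

# Create a small scratch VM (maps to POST /nodes/{node}/qemu).
proxmox.nodes(NODE).qemu.post(
    vmid=VMID,
    name="scratch-test",
    memory=2048,                  # MB
    cores=2,
    net0="virtio,bridge=vmbr0",   # NIC on the default bridge
)

# ...experiment, then tear it down again (stop it first if it was started).
proxmox.nodes(NODE).qemu(VMID).delete()
```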

As for switching from Unraid, I just want to try something different where I have a little more control. There were some things, like running cron jobs, where I had to find workarounds using plugins, since changes to the base OS are not saved across reboots (it boots from a USB stick). I also want to learn more about Linux in general, so I think moving to Ubuntu in a Proxmox VM might be a good challenge for me.

Let me know what you think.

The logic is sound.

The only item you mentioned and asked about being accurate is the DDR5 error correction. It is my understanding that no regular memory can do the same level of ECC as real ECC DRAM with the extra chip - that extra chip is there and the module is built for exactly that. DDR5's on-die ECC only corrects errors inside the memory chips themselves; it doesn't protect data on the way between the module and the CPU, which is what proper ECC DIMMs add. That said, I agree that for our home/lab use, and with ZFS etc., it's not an issue.

Since you're looking to learn new things, you will probably enjoy Proxmox. It's been rock solid for me.

Keep us posted on your progress.

So I've played with Proxmox for a day or two now, and I've made some progress but also discovered some new issues.

So far I've spun up a few different VMs, messed with ZFS pools, and created an LXC container running Cockpit for management, with the file sharing module for SMB shares.

This could work, but I don't really like the idea of having to use SMB shares over the network for my Docker VM. Plus, I was planning to set up each Docker container to run with its own "service" account/user, which I don't think would work with SMB, since you connect with a single user and all actions go through that SMB user. (If app1 writes to the SMB share and then app2 writes to the SMB share, both files will end up with the same owner, right?)
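
(For what it's worth, that assumption can be sanity-checked directly on a mounted share: with the usual uid=/gid= CIFS mount options, every file shows the owner set at mount time rather than the process that wrote it. A minimal sketch, with the mount path below as a placeholder:)

```python
# Inspect what owner files on a CIFS/SMB mount end up with.
# The mount path is a placeholder; on a cifs mount with uid=/gid= options,
# every file reports that uid/gid regardless of which local app wrote it.
import os
import pwd

SHARE = "/mnt/smb-share"  # hypothetical CIFS mount point

for name in ("written-by-app1.txt", "written-by-app2.txt"):
    path = os.path.join(SHARE, name)
    with open(path, "w") as f:
        f.write("test\n")
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    print(f"{name}: uid={st.st_uid} ({owner}), gid={st.st_gid}")
```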

A second option I could look into is passing the hard drives into the VM itself and then setting up a zpool inside the VM, although I don't have a PCIe card like an HBA, just the built-in SATA controller. I also don't know all the pros/cons of doing this.

So the main issue I am having right now is storage access, which I would ideally like to be bare metal. SMB just seems like a lot of overhead for Docker containers that will be running on the same physical machine. I'm open to any suggestions regarding the storage situation.

In an ideal setup I would have a bare-metal Ubuntu server running the ZFS pool, Cockpit, and Docker containers, and then Proxmox for any VMs that I want to create, but all on the same machine.

Other than that, I have been enjoying Proxmox itself; the UI is really intuitive and it's really easy to spin up VMs, which is awesome.

Another option I could try is installing everything related to the storage server on the Proxmox host itself, even though from what I've read this seems to be a cardinal sin. But if it works, that could give me the flexibility I'm after.

ZFS, user/permission management, and Docker containers would all be managed on the host OS, but I would still have the ability to create VMs if I need them.

Everything mentioned above would most likely be configured with Ansible, so in the event the server explodes, or I just want to move to another machine, I should simply have to import the ZFS pool and run the playbooks again to set everything back up.
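
As a sketch, that recovery flow really would only be a couple of steps - the pool, inventory, and playbook names below are hypothetical placeholders:

```python
# Sketch of the "server exploded / moved to new hardware" recovery flow:
# import the existing ZFS pool, then re-run the Ansible playbooks.
# Pool, inventory, and playbook names are hypothetical placeholders.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["zpool", "import", "tank"])                              # bring the data back
run(["ansible-playbook", "-i", "inventory.ini", "site.yml"])  # rebuild the services
```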

I don't know the specific reasons why this would be a bad idea, but most of what I've read points to it not being a good one.

If I do end up doing this I might have to learn more about Debian. I tried it a while back but ended up switching to Ubuntu because it seemed like there were a lot of hoops to jump through when installing/setting up software.

As always I’m open to any suggestions.

Our use cases for Proxmox are different, so I can't attest to most of your needs.
It is my understanding that disk-only passthrough works fine - stable, with good performance - but without the HBA (controller) passed through, the VM is talking to a QEMU-emulated disk, so you lose some abilities like reading the drives' SMART data.
Maybe it's worth dropping in a used HBA card and attaching the drives you plan to use for storage to it. Pass the whole HBA through to a VM and do all the ZFS/SMB from inside that VM?
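
One way to see that difference is to query SMART from inside the guest: against a QEMU-emulated disk the query usually fails or reports that SMART is unavailable, while with the HBA passed through the real drives answer. A rough sketch (the device path is a placeholder, and the -j JSON flag needs smartmontools 7 or newer):

```python
# Sketch: ask smartctl for a drive's health status from inside the VM.
# On a QEMU-emulated disk this typically fails or reports no SMART support;
# with the HBA passed through, the real drive answers.
# Assumes smartmontools >= 7 (for JSON output); the device path is a placeholder.
import json
import subprocess

def smart_health(device: str = "/dev/sda") -> str:
    result = subprocess.run(
        ["smartctl", "-H", "-j", device],
        capture_output=True, text=True,
    )
    if not result.stdout:
        return "no SMART data returned (emulated disk?)"
    data = json.loads(result.stdout)
    passed = data.get("smart_status", {}).get("passed")
    if passed is None:
        return "SMART not available on this device"
    return "PASSED" if passed else "FAILING"

print(smart_health("/dev/sda"))
```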

I've never purchased an HBA card before, but I've learned that 45Drives uses an LSI 9305-16i for their servers. I found this one on eBay.

Is there anything else I need to check before I purchase? I’m not very familiar with what to look for/check when it comes to something like this.

I think an HBA is probably a good idea. I can try passing the entire HBA through to the VM, and even if I don't end up going that route, it will surely get used at some point or another.

That's a fancy card, sir. If your budget allows it, enjoy!
I was thinking of something without hardware RAID, since you mentioned running a ZFS pool. You can do both, but I see that as diminishing returns. The fact that it comes with the cables is convenient.

Call me thrifty, but I'd save the extra cash and put it toward something else.

I did end up getting the LSI 9305-16i. It will be here this weekend.

The main reason is that it's just what 45Drives said they use, so it appears to be a tried and tested route, plus it will future-proof me if/when I want to go rackmount and get an HL15 case from 45Drives/45HomeLab.

I checked, and the newer temporary system built from parts I already own (i7-8700, 32GB RAM, the 4x 20TB drives, and a couple of SSDs) is sitting at around 38-40 W, which is nice. I will stick with this for now while I gradually migrate everything running on my Unraid server over to VMs in Proxmox, and then maybe down the road I will go for the 7900 on AM5 (if needed/wanted at that point).

I set up all of the Docker containers today. Next I'm just going to create Ansible playbooks and things of that nature for automated setup, and then decommission the Unraid server once everything is running as expected.

Congrats, sounds like a good build and fun process.
