
Personal VM Server

Hello! I’m a bit new here, so if I’ve posted to the wrong subforum (or the formatting isn’t quite right), please let me know. This is the build log for a long-running server project, which started back in 2018. Here are the hardware specifications for the server:

CSE :: HPE ProLiant DL580 G7
CPU :: 4x Intel Xeon E7-8870s (10c/20t each; 40c/80t total)
RAM :: 128GB (32x4GB) DDR3-1333 PC3-10600R ECC
STR :: 1x HP 518216-002 146GB HDD (VMware Appliance, System ISOs) +

  • 1x Seagate Video ST500VT003 500GB HDD (Remote Dev. VM) +
  • 4x HP 507127-B21 300GB HDDs +
  • 1x Western Digital WD Blue 3D NAND 500GB SATA SSD (Virtual Flash) +
  • 1x Intel 320 Series SSDSA2CW600G3 600GB SSD

1x LSI SAS 9201-16e HBA SAS card (4-HDD DAS) +

  • 1x Mini-SAS SFF-8088 to SATA Forward Breakout x4 cable +
  • 1x Kingwin MKS-435TL +
  • 4x IBM Storwize V7000 98Y3241 4TB HDDs

PCIe :: 1x HP 512843-001/591196-001 System I/O board +

  • 1x HP 588137-B21; 591205-001/591204-001 PCIe Riser board

GPU :: 1x nVIDIA GeForce GTX 1060 6GB +

  • 1x nVIDIA GRID K520

SFX :: 1x Creative Sound Blaster Audigy Rx
NIC :: 1x SolarFlare SFN5322F
FAN :: 4x Arctic F9 PWM 92mm fans *
PSU :: 4x 1200W server PSUs (HP 441830-001/438203-001)
PRP :: 1x Dell MS819 Wired Mouse
ODD :: 1x Sony Optiarc Blu-ray drive

Items marked with * are already in-house, but haven’t been properly integrated into the server yet.

The planned software configuration details are as follows:

* Temporary task that will be replaced by a permanent, self-hosted solution
** Can benefit from port forwarding, but will be primarily tunnel-bound
^ Tunnel-bound (VPN/SSH) role - not port forwarded/exposed to the Internet
+ Active Directory enabled - Single Sign On (SSO)

Some of these do look strange, I’ll admit. I’m currently trying to replace my workstation with a VM server, while also moving parts of my workflow away from Windows 10. The process will be a slow one, and I may still have a few parts coming in the mail.

I think the following distribution of resources will work (feel free to give input):

VMware NIX Appliance :: 24/7 - true, dedicatedHDD - false, dedicatedGPU - false, 2c/4t + 10GB
Temporary/Testing VM :: 24/7 - false, dedicatedHDD - false, dedicatedGPU - true, 12c/24t + 32GB *
Windows Server 2016 :: 24/7 - true, dedicatedHDD - true, dedicatedGPU - false, 8c/16t + 12GB
macOS Server 10.14.X :: 24/7 - true, dedicatedHDD - true, dedicatedGPU - true, 8c/16t + 12-16GB
Artix Linux - Xfce ISO :: 24/7 - true, dedicatedHDD - true, dedicatedGPU - false, 8c/16t + 12GB
Windows 10 Enterprise :: 24/7 - false, dedicatedHDD - true, dedicatedGPU - true, 12c/24t + 32GB *
Remote Development VM :: 24/7 - false, dedicatedHDD - true, dedicatedGPU - true, 12c/24t + 32GB *

VMs marked with a * will never run at the same time.
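As a sanity check, the always-on VMs can be tallied against the host’s 40 physical cores and 128GB of RAM. A quick Python sketch (the macOS figure uses the upper bound of its 12-16GB range):

```python
# Tally the 24/7 VMs from the table above against the host's
# 40 physical cores / 128 GB RAM. Figures are (cores, GB RAM).
always_on = {
    "VMware NIX Appliance": (2, 10),
    "Windows Server 2016":  (8, 12),
    "macOS Server 10.14.x": (8, 16),  # upper bound of 12-16 GB
    "Artix Linux":          (8, 12),
}

cores = sum(c for c, _ in always_on.values())
ram = sum(r for _, r in always_on.values())
print(f"always-on total: {cores} cores, {ram} GB RAM")
```

That leaves 14 physical cores and roughly 78GB for whichever on-demand VM (12c/24t + 32GB) happens to be awake, plus headroom for the hypervisor itself.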

ESXi itself is usually run from a USB thumb drive, but I have a drive dedicated to it; no harm done. macOS and Linux would have gotten a Radeon/FirePro card (e.g., an RX Vega 64) for the best compatibility and stability, but market forces have prevented this. Windows Server 2016 can have the server’s on-board audio, if it has any, and Windows 10 gets the Audigy Rx. The macOS and Linux VMs get whatever audio the GRID K520 provides (either that or a software solution). The whole purpose behind NextCloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services. Windows 10, Remote Development, and the Temp/Testing VM will be put to sleep (or powered off) until they are needed (Wake on LAN), since they won’t be hosting any essential services.
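For reference, waking those on-demand VMs boils down to sending a Wake-on-LAN magic packet. A minimal Python sketch (the MAC address shown is a placeholder, not one of the actual NICs):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL magic packet is 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# e.g. wake_on_lan("aa:bb:cc:dd:ee:ff")  # hypothetical MAC
```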

Project mirror(s):

3 Likes

Sounds nice. I’ve never seen a DL580 in the flesh; the biggest I’ve seen is DL380s in a SAS multipath failover cluster.

https://web.archive.org/web/20200526225039/https://blog.monstermuffin.org/fixing-esxi-6-5-hpe-g7-servers/

1 Like

Out of interest how much power does this beast draw and are your ears bleeding?

3 Likes

I’ve been looking at something like this, and man, is AMD back or what? Just one Threadripper outperforms a quad-socket setup of the older Xeons. Sad life for Intel :confused:

3 Likes
  1. I’m expecting ~500W idle if I don’t reel them in with some proper power settings, so I’m definitely working to curb that.
  2. Not yet. The OEM fans have been quite reserved with how much noise they make thus far. I also have some third-party fans waiting to be installed, for when the OEM fans become too much of a noise issue.
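For a rough sense of what ~500W idle costs over a month, here is a back-of-the-envelope calculation (the $0.13/kWh rate is a hypothetical residential figure, not the actual tariff):

```python
# Monthly energy and cost at a ~500 W constant idle draw,
# assuming a hypothetical $0.13/kWh electricity rate.
idle_w = 500
rate_usd_per_kwh = 0.13

kwh_month = idle_w / 1000 * 24 * 30          # kWh over a 30-day month
cost = kwh_month * rate_usd_per_kwh
print(f"{kwh_month:.0f} kWh/month, about ${cost:.2f}")
```

So even before any load, proper power settings can plausibly save tens of dollars a month.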

That’s definitely true. If only I could afford a similar Threadripper setup :smiley: I’m going to be cannibalizing a lot of my current workstation to make this all work for under 700 USD. The used market has been a pain lately.

1 Like

Currently looking into how to make a custom ISO for the DL580 G7:

I’ll probably use the default ISO to (attempt to) apply BIOS/firmware updates, before replacing it with a custom one.

Also looking at possibly playing with some PWM fans in the future:

Okay, major development here. A few months ago, when I first decided to go with the DL580 G7, I ended up coming to the conclusion that I’d have to buy at least some of the parts (system boards, cache modules, etc.) myself, since sellers might not always include them out-of-the-box. That was why I ended up buying both the System I/O and PCIe Riser boards. Fast forward to two days ago, when I did the initial inspection of the server, and I saw that my DL580 already came with a System I/O board. I thought that I would just end up having a spare one, in case anything went wrong. But today, I found this:

This prompted me to look inside my server, since I had to see what mine came with. Mine appears to have come with the 0A revision. I then looked in my spare parts/inventory crate to see what my supposed spare had. Lo and behold, I was blessed enough to have purchased a 0B revision without even knowing it.

Another update incoming in a few…

Just had a power event of sorts a few minutes ago; power for the entire house flickered 3-4 times. Good thing I didn’t go through with the BIOS flash I had planned for tonight. I might have to pony up for a UPS one of these days. Whereas my current workstation can take a power event like that just fine, the server might not; its PSUs are smaller, and might not come with the same protections that my current machine’s does. I’ve had the lights in my room flicker and wane, and the T7500 took it like nothing happened. Something tells me this is gonna delay me a few more months…

1 Like

Sorry if I sound skeptical, but a DL580 G7 (in 2020) for a home server does not sound like a good idea.

  • The E7-8870s are 9 years old… it’s true you get 40 cores, but I wonder how much horsepower they have compared to something like a Ryzen 3950X. With those Xeons you get not only lower IPC but also lower clock speeds, and the feature set is ancient; I’m not sure whether your workloads need something like AVX?
  • DDR3 RAM is half the speed of DDR4 when both operate at base frequencies, and on this HP server I think you will run at base, so four-channel DDR3 would actually be significantly slower than the dual channels you get with a Ryzen (I think up to 35% slower than DDR4 @ 3600 MHz). I just noticed you have 8 memory channels in the HP server, though.
  • Four CPU sockets means significantly higher cache-synchronization delays. This is not a big issue for servers, where you usually have highly independent tasks running in parallel and care more about throughput than responsiveness, but for a workstation it’s usually not a good idea.
  • Hardware RAID controllers are indeed a good fit for the datacenter, but as a home user I would go with software RAID, just for the ease of recovery if the RAID controller dies.

If I were you, I would keep most of the PCIe cards and drives (the ones that fit on an X570 board), sell everything else, and use the money to boost the budget to $1500 or so. That should be more than enough for a decent 3950X build: given that you already have the cards and the drives, you would only need a CPU + cooler, X570 motherboard, memory kit, and power supply. I would bet it would perform better than the 9-year-old HP server, and would probably last longer.
It would also make sense to estimate the power bill; for 24/7 usage that HP monster is going to increase your bill for sure. Also, if you really need those 40 cores and you max them out, I promise you the fan noise will be unbearable, and aftermarket fans will not change that much: they are tiny 92mm fans.
Note: for your requirements, 64GB seems tight; you could get 2x 32GB DIMMs, leaving space for future expansion up to 128GB.
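To put rough numbers on the memory point: theoretical peak bandwidth is transfer rate × 8 bytes per channel. A quick sketch, ignoring real-world efficiency and the DL580’s buffered-memory topology (these are paper figures only):

```python
# Theoretical peak memory bandwidth: MT/s * 8-byte bus * channel count.
def peak_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    return mt_per_s * bus_bytes * channels / 1000  # GB/s

ddr3 = peak_gbs(1333, 8)   # 8 channels of DDR3-1333 across four sockets
ddr4 = peak_gbs(3600, 2)   # dual-channel DDR4-3600 on a Ryzen board
print(f"DDR3 aggregate: {ddr3:.1f} GB/s, DDR4 dual-channel: {ddr4:.1f} GB/s")
```

Interestingly, the 8-channel DDR3 aggregate actually exceeds dual-channel DDR4-3600 on paper, though per-core latency and real-world throughput are another matter.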

2 Likes

I think that LSI SAS card is an HBA, not a RAID card, so it does not do hardware RAID, only software RAID.

2 Likes
  1. I understand you on this point. The IPC of Westmere in general is a bit low in 2020 compared to Ryzen. I deal with it first-hand whenever I’m on my T7500 (daily), which also uses Westmere Xeons; the only difference in this case is that higher clock speeds keep things a bit more palatable for most tasks. I’ve also been fairly fortunate in that most of my workloads aren’t impossible or impractical without AVX instructions.
  2. The server ended up coming with 64GB of DDR3 ECC out of the box, so it’s going to end up with 128GB once I’m done :smiley: More luck on my part.
  3. ESXi can utilise NUMA to help curb much of the latency, and each CPU will have its own localised memory, due to my intended configuration. If I configure everything properly, this shouldn’t impact the user experience much.
  4. The LSI card is for an 8TB HDD array, not RAID. I understand the confusion, though. No RAID setup to be seen in this iteration of the project, sadly.
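For what it’s worth, the NUMA locality described above can be nudged along with per-VM advanced settings. A hypothetical .vmx fragment (the node number is illustrative; whether to pin at all depends on how the VMs end up mapped to sockets):

```
numa.nodeAffinity = "0"
```

This constrains the VM’s vCPUs and memory to one NUMA node, at the cost of limiting the scheduler’s freedom to rebalance.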

I’d want to keep at least two of my GPUs, the sound card, the 10GbE NIC (for high-speed local transfers), and all of my storage drives if I ditched the current server. I’ve really been wanting to move to a VM server (and away from a desktop environment) for over a year now, but the price of DDR4 and a new motherboard has been a thorn in my side for a while (ignoring the money I don’t have for the new CPU). If I could, I’d skip Ryzen 9 and grab a new Threadripper instead, though I’d still need a big enough case to fit everything :thinking: I’m not sure I could afford that right now. I might get stuck with another darned gamer’s workstation if I go that route and run out of money midway, and gaming’s been on the back-burner for me as of late. I also own a decent number of enterprise software licenses, and hope that Threadripper can run ESXi well. A nice idea, but one for a later year, when I’ve saved over 1k USD to do it right and be happy about the end result.

Just purchased some more SAS drives for the server, since one of the ones I originally purchased doesn’t appear to work for some reason. Also waiting on some PCIe power cables, so I can test out my GPUs.

1 Like

You’d probably save a lot of money on electricity (unless you pay a flat rate anyway), and that’s not to mention the heat!

It’s cool though, nice server!

1 Like

I probably would, if I could afford a newer server platform :frowning:

Could you point me in the direction of an AMD EPYC/Threadripper motherboard with the following properties:

  • Compatible with ESXi (no weird errors/issues, everything works as it should)
  • Has at least 3 PCIe x16 slots
  • Has at least 3 PCIe x8 slots
  • Has 8 SAS/SATA ports
  • Has dual BIOS (one for regular use, one for backup)

Just window shopping at this point, since I can’t actually make the purchase currently. It will be a few years before I pay off my college loans and have enough saved up to make this move. I’m currently in an agreement where I’ll be paying for electricity that I use.

Currently waiting on some HP-branded SAS drives for the server, since those have the potential to affect the acoustics in a positive manner (reduced sound output). Can’t wait to test them out when they arrive.

1 Like

Made some changes to my SAS HDD choices, for compatibility and acoustics reasons. While I could go and low-level format (LLF) the wacky NetApp drives I purchased, I’d still have to put up with a noisier server afterward. I’d rather move in a different direction and restrict that issue to my choices in PCIe cards instead. I also removed the old Hitachi HDD, since it didn’t really belong in this project; it’s SATA 1 or 2, IIRC. Here are the items I kicked from the project:

  • (1x) 250GB HITACHI HTS542525K9SA00

  • (4x) 600GB HGST NetApp X422A-R5 SAS

Still looking to see if I can get the Dell mouse…

1 Like

Currently looking into making a custom ESXi 6.5 image for the DL580 G7, since official support was axed after 6.0. I already own the license, and I’d rather not waste it out of laziness. It wouldn’t be the first time I’ve had to do something like this. On a side note:

Just removed a Tesla K10 from the project; it’s been reduced to a spare component, for the sake of noise reduction and power concerns. Artix Linux is no longer in line to receive a GPU, and macOS will take over the [email protected] role. If you have any questions, feel free to ask.

1 Like

Once I buy this cable (to power the HBA disk array), the server project will be ready to go. I should definitely list the E7-2870s for sale, since I can’t use those with this server.

The interesting part is that it has Molex and other connectors on it too. Multipurpose…

1 Like