Hello! I’m a bit new here. If I’ve posted to the wrong subforum, please let me know. Sorry if the formatting isn’t quite right. This is the build log for my long-running server project, which started back in mid-2018. The project is meant to replace (and exceed) my previous workstation, a Dell Precision T7500. Here are the hardware specs for the server:
ESXi itself is usually run from a USB thumb drive, but I have a drive dedicated to it. No harm done. A small amount of thin provisioning/overbooking (RAM only) won’t hurt. macOS and Linux would have gotten a Radeon/FirePro (e.g., an RX Vega 64) for best compatibility and stability, but market forces originally prevented this. Windows 10 gets the Audigy Rx and a Titan Xp. The macOS and Linux VMs get whatever audio the Titan Z/FirePro S9300 x2 can provide. The whole purpose of Nextcloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services (Mega can stay, though). That’s the reason why I’m doing this.
I’ve been looking at something like this, and man is AMD back or what? Just one Threadripper outperforms a quad setup of the older Xeons. Sadlife for Intel
I’m expecting ~500W at idle if I don’t reel the CPUs in with some proper power settings, so I’m definitely working to curb that.
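For a sense of scale, here’s the rough math I’m working from (the electricity rate below is a placeholder, not my actual rate):

```python
# Rough monthly electricity cost at idle (all figures are estimates/placeholders).
idle_watts = 500            # expected idle draw before any power tuning
rate_usd_per_kwh = 0.13     # placeholder rate; substitute your local rate
hours_per_month = 24 * 30

kwh_per_month = idle_watts / 1000 * hours_per_month    # ~360 kWh
cost_per_month = kwh_per_month * rate_usd_per_kwh      # ~$47 at this rate
print(f"~{kwh_per_month:.0f} kWh/month, roughly ${cost_per_month:.0f}/month at idle")
```

Even at idle that adds up, which is why the power settings matter so much here.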
Not yet. The OEM fans have been quite reserved with how much noise they make thus far. I also have some 3rd party fans waiting to be installed, for when the OEM fans become too much of a noise issue.
That’s definitely true. If only I could afford to buy a similar Threadripper setup. I’m going to be cannibalizing a lot of my current workstation to make this all work for under 700 USD. The used market has been a pain lately.
Okay, major development here. A few months ago, when I first decided to go with the DL580 G7, I ended up coming to the conclusion that I’d have to buy at least some of the parts (system boards, cache modules, etc.) myself, since sellers might not always include them out-of-the-box. That was why I ended up buying both the System I/O and PCIe Riser boards. Fast forward to two days ago, when I did the initial inspection of the server, and I saw that my DL580 already came with a System I/O board. I thought that I would just end up having a spare one, in case anything went wrong. But today, I found this:
This prompted me to look inside my server, since I had to see what mine came with. Mine appears to have the 0A revision. I then looked in my spare parts/inventory crate to see which revision my supposed spare had. Lo and behold, I was blessed enough to have purchased a 0B revision without even knowing it.
Just had a power event of sorts a few minutes ago. Power for the entire house flickered 3-4 times. Good thing I didn’t go through with the BIOS flash I had planned for tonight. Might have to pony up for a UPS one of these days. Whereas my current workstation can take a power event like that just fine, the server might not. Its PSU(s) are smaller and might not come with the same protections that my current one does. I’ve had the lights in my room flicker and wane, and the T7500 took it like nothing happened. Something tells me that this is gonna delay me a few more months…
Sorry if I sound skeptical, but a DL580 G7 (in 2020) for a home server does not sound like a good idea.
E7-8870s are 9 years old… It’s true you get 40 cores, but I wonder how much horsepower they have compared to something like a Ryzen 3950X; with those Xeons you get not only lower IPC but also lower clock speeds. The feature set is also ancient. I’m not sure if your workloads need something like AVX?
DDR3 RAM is half the speed of DDR4 when both operate at base frequencies, and on this HP server I think you will be running at base, so four-channel DDR3 will actually be significantly slower than the dual channels you get with a Ryzen (I think up to 35% slower versus DDR4 @ 3600 MHz). I just noticed you have 8 memory channels in the HP server, though.
Four CPU sockets mean significantly higher cache-synchronization delays… This is not a big issue for servers, where you usually have highly independent tasks running in parallel and care more about throughput than responsiveness, but for a workstation it’s usually not a good idea.
Hardware RAID controllers are indeed a good fit for the datacenter, but for a home user I would go with software RAID, just for the ease of recovery if the RAID controller dies.
If I were you, I would keep most of the PCIe cards and drives (the ones that fit on an X570 board), sell everything else, and use the money to boost the budget to $1500 or so. That should be more than enough to get a decent 3950X build; given that I’d already have the cards and the drives, I would only need a CPU + cooler, an X570 motherboard, a memory kit, and a power supply. I would bet it would perform better than the 8-year-old HP server, and would probably last longer.
It would also make sense to estimate the power bill; with 24/7 usage, that HP monster is going to increase your bill for sure. Also, if you really need those 40 cores and you max them out, I promise you the fan noise will be unbearable, and aftermarket fans will not change that too much, since they are tiny 92mm fans.
Note: for your requirements, 64GB seems tight. You could get 2x 32GB DIMMs, leaving space for future expansion up to 128GB.
I understand you on this point. The IPC of Westmere in general is a bit low in 2020 when compared to Ryzen. I deal with it first-hand whenever I’m on my T7500 (daily), which also uses Westmere Xeons. The only difference in this case is that higher clock speeds keep things a bit more palatable for most tasks. I’ve also been pretty fortunate in that most of my workloads aren’t impossible/impractical without AVX instructions.
The server ended up coming with 64GB of DDR3 ECC out of the box, so it’s going to end up with 128GB once I’m done. More luck on my part.
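On the bandwidth point, here’s the back-of-the-envelope comparison I’m going by. These are theoretical peaks only; the DDR3-1066 speed and the 8-channel count are assumptions based on the discussion above, and real-world numbers will be lower:

```python
# Theoretical peak memory bandwidth in GB/s (assumed speeds, not measurements).
def peak_bw_gbs(transfers_mt_s, channels, bytes_per_transfer=8):
    # MT/s * bytes per transfer * channel count, converted to GB/s
    return transfers_mt_s * bytes_per_transfer * channels / 1000

server_ddr3 = peak_bw_gbs(1066, 8)   # DL580 G7: 8 channels of DDR3-1066 (assumed)
ryzen_ddr4 = peak_bw_gbs(3600, 2)    # Ryzen: dual-channel DDR4-3600
print(f"DDR3-1066 x 8 channels: ~{server_ddr3:.0f} GB/s")
print(f"DDR4-3600 x 2 channels: ~{ryzen_ddr4:.0f} GB/s")
```

On paper the old box isn’t starved across all four sockets, but any single VM pinned to one socket only sees its local share of that, which is where NUMA placement comes in.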
ESXi can use NUMA to help curb much of that latency, and with my intended configuration each CPU will have its own localized memory. If I configure everything properly, this shouldn’t impact the user experience much:
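As a rough illustration, this is the kind of per-VM pinning I have in mind. These advanced settings do exist in ESXi, but the node numbers and limits below are placeholders, and I haven’t validated them on the DL580 yet:

```
# Hypothetical .vmx advanced settings for keeping one VM on a single socket/NUMA node
numa.nodeAffinity = "0"            # constrain this VM's vCPUs to NUMA node 0
numa.autosize.once = "FALSE"       # re-evaluate the virtual NUMA topology on changes
numa.vcpu.maxPerVirtualNode = "8"  # keep a single virtual NUMA node from spanning sockets
```

With one VM per socket and memory allocated locally, cross-socket traffic should mostly be limited to whatever the VMs share over the network.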
The LSI card is for an 8TB HDD array, not RAID. I understand the confusion though. No RAID setup to be seen in this iteration of the project, sadly.
I’d want to keep at least two of my GPUs, the sound card, the 10GbE NIC (for high-speed local transfers), and all of my storage drives if I ditched the current server. I really have been wanting to move to a VM server (and away from a desktop environment) for over a year now, but the price of DDR4 and a new mobo has been a thorn in my side for a while (ignoring the money I don’t have for the new CPU). If I could, I’d skip Ryzen 9 and grab a new Threadripper instead. I’d still need a big enough case to fit everything, and I’m not sure I could afford that right now. I might get stuck with another darned gamer’s workstation again if I go that route and run out of money midway, and gaming’s been on the back-burner for me as of late. I also own a decent number of enterprise software licenses, and hope that Threadripper can run ESXi well. A nice idea, but one for a later year, when I’ve saved over 1k USD to do it right and be happy about the end result.
Just purchased some more SAS drives for the server, since one of the ones I originally purchased doesn’t appear to work for some reason. Also waiting on some PCIe power cables, so I can test out my GPUs.
I probably would, if I could afford a newer server platform.
Could you point me in the direction of an AMD EPYC/Threadripper motherboard with the following properties:
Compatible with ESXi (no weird errors/issues, everything works as it should)
Has at least 3 PCIe x16 slots
Has at least 3 PCIe x8 slots
Has 8 SAS/SATA ports
Has dual BIOS (one for regular use, one for backup)
Just window shopping at this point, since I can’t actually make the purchase currently. It will be a few years before I pay off my college loans and have enough saved up to make this move. I’m also currently in an agreement where I pay for the electricity I use.
Currently waiting on some HP-branded SAS drives for the server, since those should improve the acoustics (reduced sound output). Can’t wait to test them out when they arrive.
Made some changes to my SAS HDD choices for compatibility and acoustics reasons. While I could go and low-level format (LLF) the wacky NetApp drives I purchased, I’d still have to put up with a noisier server afterward. I’d rather move in a different direction and confine that issue to my choice of PCIe cards instead. Also removed the old HITACHI HDD, since it didn’t really belong in this project; it’s SATA 1 or 2, iirc. Here are the items I kicked from the project:
Currently looking into making a custom ESXi 6.5 image for the DL580 G7, since official support was axed after 6.0. I already own the license, and I’d rather not waste it out of laziness. It wouldn’t be the first time I’ve had to do something like this. On a side note:
Just removed a Tesla K10 from the project. It’s been reduced to a spare component due to noise and power concerns. Artix Linux is no longer in line to receive a GPU, and macOS will take over the F@H role. If you have any questions, feel free to ask.
Once I buy this cable (to power the HBA disk array), the server project will be ready to go. I definitely should list the E7-2870s for sale, since I can’t use those with the server.