Project Personal Datacentre

Hello! I’m a bit new here. If I’ve posted to the wrong subforum, please let me know. Sorry if the formatting isn’t quite right. This is the build log for my long-running server project, which started back in mid-2018. The project is meant to replace (and exceed) my previous workstation, a Dell Precision T7500. Here are the hardware specs for the server:

HPE ProLiant DL580 G7

    OS   :: VMware ESXi 6.5u3 Enterprise Plus
    CPU  :: 4x Intel Xeon E7-8870's (10c/20t each; 40c/80t total)
    RAM  :: 256GB (64x4GB) PC3-10600R DDR3-1333 ECC
    PCIe :: 1x HP 512843-001/591196-001 System I/O board + 
                1x HP 588137-B21; 591205-001/591204-001 PCIe Riser board
    GPU  :: 1x nVIDIA GeForce GTX Titan Xp +
                1x AMD FirePro S9300 x2 (2x "AMD Radeon Fury X's")
    SFX  :: 1x Creative Sound Blaster Audigy Rx
    NIC  :: 1x HPE NC524SFP (489892-B21) +
                2x Silicom PE310G4SPI9L-XR-CX3's
    STR  :: 1x HP Smart Array P410i Controller (integrated) +
                1x HGST HUSMM8040ASS200 MLC 400GB SSD (ESXi, vCenter Appliance, ISOs) + 
                4x HP 507127-B21 300GB HDDs (ESXi guest datastores) +
                1x Western Digital WD Blue 3D NAND 500GB SSD + 
                1x Intel 320 Series SSDSA2CW600G3 600GB SSD +
                1x Seagate Video ST500VT003 500GB HDD
    STR  :: 1x LSI SAS 9201-16e HBA SAS card +
                1x Mini-SAS SFF-8088 cable + 
                        1x Dell EMC KTN-STL3 (15x 3.5in HDD enclosure) + 
                                4x HITACHI Ultrastar HUH728080AL4205 8TB HDDs +
                                4x IBM Storewise XIV v7000 98Y3241 4TB HDDs
    I/O  :: 1x Inateck KU8212 (USB 3.2) +
                1x Logitech K845 (Cherry MX Blue) +
                1x Dell MS819 Wired Mouse
            1x Sonnet Allegro USB3-PRO-4P10-E (USB 3.X) +
                1x LG WH16NS40 BD-RE ODD
    PRP  :: 1x Samsung ViewFinity S70A UHD 32" (S32A700)
            1x Sony Optiarc BluRay drive
    PSU  :: 4x HP 1200W PSUs (441830-001/438203-001)

The details for the ProLiant DL380 Gen9 will appear here once data migration is complete.

The planned software configuration has been moved back to the LTT post, and will be changing often for the foreseeable future.

Product links and details can be found here.

ESXi itself is usually run from a USB thumb drive, but I have a drive dedicated to it. No harm done. A small amount of thin provisioning/overbooking (RAM only) won’t hurt. macOS and Linux would have gotten a Radeon/FirePro (e.g., an RX Vega 64) for the best compatibility and stability, but market forces originally prevented this. Windows 10 gets the Audigy Rx and the Titan Xp. The macOS and Linux VMs get whatever audio the FirePro S9300 x2 can provide. The whole purpose of Nextcloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services (Mega can stay, though). That, in short, is the reason why I’m doing this.
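For reference, here’s a minimal sketch of the sort of passthrough entries I expect the Windows 10 guest’s .vmx to end up with, assuming the Titan Xp and the Audigy Rx have already been toggled for passthrough on the host. The PCI addresses and reservation size are examples only, the vSphere client normally writes these lines for you, and the trailing # notes are annotations for this post rather than .vmx syntax:

    pciPassthru0.present = "TRUE"
    pciPassthru0.id = "0000:0a:00.0"    # example address for the GTX Titan Xp
    pciPassthru1.present = "TRUE"
    pciPassthru1.id = "0000:0e:00.0"    # example address for the Audigy Rx
    pciPassthru.use64bitMMIO = "TRUE"   # large-BAR GPUs generally want 64-bit MMIO enabled
    sched.mem.min = "16384"             # passthrough VMs need their guest memory fully reserved (MB)
    hypervisor.cpuid.v0 = "FALSE"       # common workaround for GeForce cards refusing to start inside a VM

From what I’ve read, that last line is only needed because this is a consumer GeForce card rather than a Quadro/Tesla.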

Project mirror(s):

5 Likes

Sounds nice. I’ve never seen a DL580 in the flesh; the biggest I’ve seen is DL380s in a SAS multipath failover cluster.

https://web.archive.org/web/20200526225039/https://blog.monstermuffin.org/fixing-esxi-6-5-hpe-g7-servers/

1 Like

Out of interest, how much power does this beast draw, and are your ears bleeding?

7 Likes

I’ve been looking at something like this, and man, is AMD back or what? Just one Threadripper outperforms a quad setup of the older Xeons. Sadlife for Intel :confused:

3 Likes
  1. I’m expecting ~500W idle if I don’t reel them in with some proper power settings, so I’m definitely working to curb that.
  2. Not yet. The OEM fans have been quite reserved in how much noise they make thus far. I also have some third-party fans waiting to be installed, for when the OEM fans become too much of a noise issue.

That’s definitely true. If only I could afford to buy a similar Threadripper setup :smiley: I’m going to be cannibalizing a lot of my current workstation to make this all work for under 700 USD. The used market has been a pain lately.

1 Like

Currently looking into how to make a custom ISO for the DL580 G7:

I’ll probably use the default ISO to (attempt to) apply BIOS/firmware updates, before replacing it with a custom one.
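If I end up going the Image Builder route in PowerCLI, the flow would look roughly like the sketch below. I haven’t actually run this against the 6.5u3 depot yet, so the depot and VIB names here are placeholders rather than the exact bundles I’ll use:

    Add-EsxSoftwareDepot .\VMware-ESXi-6.5.0-update03-depot.zip   # stock VMware offline bundle
    Add-EsxSoftwareDepot .\hpe-g7-driver-bundle.zip                # HPE driver bundle (placeholder name)
    $base = Get-EsxImageProfile -Name "ESXi-6.5.0-*-standard"      # assumes this matches a single stock profile
    New-EsxImageProfile -CloneProfile $base -Name "ESXi-6.5u3-DL580G7" -Vendor "homelab"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-6.5u3-DL580G7" -SoftwarePackage "hpe-smx-provider"   # example VIB
    Export-EsxImageProfile -ImageProfile "ESXi-6.5u3-DL580G7" -ExportToIso -FilePath .\ESXi-6.5u3-DL580G7.iso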

Also looking at possibly playing with some PWM fans in the future:

Okay, major development here. A few months ago, when I first decided to go with the DL580 G7, I came to the conclusion that I’d have to buy at least some of the parts (system boards, cache modules, etc.) myself, since sellers might not always include them out of the box. That’s why I ended up buying both the System I/O and PCIe Riser boards. Fast forward to two days ago: when I did the initial inspection of the server, I saw that my DL580 already came with a System I/O board. I figured I’d just end up having a spare one, in case anything went wrong. But today, I found this:

This prompted me to look inside my server, since I had to see what mine came with. Mine appears to have come with the 0A revision. I then looked in my spare parts/inventory crate to see what my supposed spare had. Lo and behold, I was blessed enough to have purchased a 0B revision without even knowing it.

Another update incoming in a few…

Just had a power event of sorts a few minutes ago. Power for the entire house flickered 3-4 times. Good thing I didn’t go through with the BIOS flash I had planned for tonight. Might have to pony up for a UPS one of these days. Whereas my current workstation can take a power event like that just fine, the server might not; its PSUs are smaller and might not come with the same protections that my current one does. I’ve had the lights in my room flicker and wane, and the T7500 took it like nothing happened. Something tells me this is gonna delay me a few more months…

1 Like

Sorry if I sound skeptical, but a DL580 G7 (in 2020) for a home server does not sound like a good idea.

  • The E7-8870’s are 9 years old … it’s true you get 40 cores, but I wonder how much horsepower they have in them compared to something like a Ryzen 3950X; with those Xeons you get not only lower IPC but also lower clock speeds. The feature set is also ancient, and I am not sure whether your workloads need something like AVX?
  • DDR3 RAM is half the speed of DDR4 when both operate at base frequencies, and on this HP server I think you will run at base, so four-channel DDR3 will actually be significantly slower than the dual-channel DDR4 you get with a Ryzen (I think up to 35% slower against DDR4 @ 3600 MHz; rough numbers just below this list). I just noticed you have 8 memory channels in the HP server.
  • Four CPU sockets mean significantly higher cache-synchronization delays … This is not a big issue for servers, where you usually have highly independent tasks running in parallel and care more about throughput than responsiveness, but for a workstation it’s usually not a good idea.
  • Hardware RAID controllers are indeed a good fit for the datacenter, but for a home user I would go with software RAID, just for the ease of recovery if the controller dies.
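To put rough numbers on the memory point above (back-of-envelope only, comparing JEDEC DDR3-1333 against a DDR4-3600 kit):

    DDR3-1333: 1333 MT/s x 8 bytes  ≈ 10.7 GB/s per channel
    DDR4-3600: 3600 MT/s x 8 bytes  ≈ 28.8 GB/s per channel
    4x DDR3-1333 channels ≈ 42.7 GB/s  vs  2x DDR4-3600 channels ≈ 57.6 GB/s

That is roughly where the 35% figure comes from, before even considering latency differences.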

If I were you, I would keep most of the PCIe cards and drives (the ones I can fit on an X570 board), sell everything else, and use the money to boost my budget to $1,500 or so. That should be more than enough to get a decent 3950X build; given that I already have the cards and the drives, I would only need a CPU + cooler, an X570 motherboard, a memory kit, and a power supply. I would bet it would perform better than the 8-year-old HP server, and would probably last longer.
It would also make sense to estimate the power bill; for 24/7 usage that HP monster is going to increase your bill for sure. Also, if you really need those 40 cores and you will max them out, I promise you the fan noise will be unbearable, and aftermarket fans will not change that too much, since they are tiny 92 mm fans.
Note: for your requirements, 64 GB seems tight; you could get 2x 32 GB DIMMs, leaving space for future expansion up to 128 GB.

3 Likes

I think that LSI SAS card is an HBA, not a RAID card, so it does not do hardware RAID, only software RAID.

2 Likes
  1. I understand you on this point. The IPC of Westmere in general is a bit low in 2020 when compared to Ryzen. I deal with it first-hand whenever I’m on my T7500 (daily), which also uses Westmere Xeons. The only difference in this case is that higher clock speeds keep things a bit more palatable for most tasks. I’ve also been pretty fortunate in that most of my workloads aren’t impossible or impractical without AVX instructions.
  2. The server came with 64GB of DDR3 ECC out of the box, so it’s going to end up with 128GB once I’m done :smiley: More luck on my part.
  3. ESXi can utilise NUMA to help curb much of the latency, and each CPU will have its own localised memory under my intended configuration. If I configure everything properly, this shouldn’t impact the user experience much; there’s a rough sketch of what I mean after this list:
  4. The LSI card is for an 8TB HDD array, not hardware RAID. I understand the confusion, though. No RAID setup to be seen in this iteration of the project, sadly.
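As a rough sketch of what I mean by keeping each VM local to one socket: as long as a VM’s vCPU count stays at or below the 10 cores of a single E7-8870, the ESXi NUMA scheduler should keep its CPU and memory on one node by itself. The .vmx entries below only force the issue; the values are illustrative rather than final, and the trailing # notes are annotations for this post, not .vmx syntax:

    numvcpus = "10"                 # no more vCPUs than one socket has cores
    cpuid.coresPerSocket = "10"     # presented to the guest as a single socket
    numa.nodeAffinity = "1"         # pin scheduling (and memory placement) to NUMA node 1

VMware generally advises leaving the NUMA scheduler alone unless there’s a measured reason to pin, so I’ll likely start without the affinity line and only add it if remote-memory access actually shows up as a problem.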

I’d want to keep at least two of my GPUs, the sound card, the 10GbE NIC (for high-speed local transfers), and all of my storage drives if I ditched the current server. I’ve really been wanting to move to a VM server (and away from a desktop environment) for over a year now, but the price of DDR4 and a new motherboard has been a thorn in my side for a while (ignoring the money I don’t have for the new CPU). If I could, I’d skip Ryzen 9 and grab a new Threadripper instead. I’d still need a big enough case to fit everything :thinking: Not sure I could afford that right now, though. I might get stuck with another darned gamer’s workstation again if I go that route and run out of money midway, and gaming’s been on the back-burner for me as of late. I also own a decent amount of enterprise software licenses, and hope that Threadripper can run ESXi well. A nice idea, but one for a later year, when I’ve saved over 1k USD to do it right and be happy about the end result.

Just purchased some more SAS drives for the server, since one of the ones I had purchased originally doesn’t appear to work for some reason. Also waiting on some PCIe power cables so I can test out my GPUs.

1 Like

You’d probably save a lot of money on electricity (unless you pay a flat rate anyway), and damn, not to mention the heat!

It’s cool though, nice server!

1 Like

I probably would, if I could afford a newer server platform :frowning:

Could you point me in the direction of an AMD EPYC/Threadripper motherboard with the following properties:

  • Compatible with ESXi (no weird errors/issues, everything works as it should)
  • Has at least 3 PCIe x16 slots
  • Has at least 3 PCIe x8 slots
  • Has 8 SAS/SATA ports
  • Has dual BIOS (one for regular use, one for backup)

Just window shopping at this point, since I can’t actually make the purchase right now. It will be a few years before I pay off my college loans and have enough saved up to make this move. I’m also currently in an agreement where I pay for the electricity I use.

Currently waiting on some HP-branded SAS drives for the server, since those should improve the acoustics (reduced sound output). Can’t wait to test them out when they arrive.

1 Like

Made some changes to the SAS HDD choices I’m using, for compatibility and acoustics reasons. While I could go and low-level format (LLF) the wacky NetApp drives I purchased, I’d still have to put up with a noisier server afterward. I’d rather move in a different direction and confine that sort of issue to my PCIe card choices instead. Also removed the old HITACHI HDD, since it didn’t really belong in this project; it’s SATA 1 or 2, iirc. Here are the items I kicked from the project:

  • (1x) 250GB HITACHI HTS542525K9SA00

  • (4x) 600GB HGST NetApp X422A-R5 SAS

Still looking to see if I can get the Dell mouse…

1 Like

Currently looking into making a custom ESXi 6.5 image for the DL580 G7, since official support was axed after 6.0. I already own the license, and I’d rather not waste it out of laziness. It wouldn’t be the first time I’ve had to do something like this. On a side note:

Just removed a Tesla K10 from the project. It’s been reduced to a spare component due to noise and power concerns. Artix Linux is no longer in line to receive a GPU, and macOS will take over the F@H role. If you have any questions, feel free to ask.

1 Like

Once I buy this cable (to power the HBA disk array), the server project will be ready to go. I should definitely list the E7-2870’s for sale, since I can’t use those with this server.

The interesting part is that it has Molex and other connectors on it too. Multipurpose…

1 Like