Most reliable used server OEM

I’ve been rocking an HP ProLiant DL360 G7 that I bought for pretty much nothing on Amazon (it cost $150, and after cashing out some Amazon cash back it ran me $70). It has been my daily driver, since I need a lot of cores for cheap and this seemed like a good deal.

It sucks. The graphics keep crashing, two boots out of three the JBOD/RAID controller doesn’t detect any drives, and now it randomly hangs on boot (I understand that server boards take longer to POST, but this just locks up indefinitely).

Which server OEM is the best to buy on the second-hand market? Is one brand any better than the others? Are the conditions the server was kept in over its lifetime a bigger factor than brand? I might just put things in the cloud, because I don’t have the money or patience to fix hardware issues at this stage.

EDIT: Also, what is a good used server that supports DDR4? I’d probably upgrade from this to something else later, and a lot of what I’m doing is RAM-intensive.

Honestly, I have had the most success with Dell, especially when you can extend the paid support. I am not a fan of HP, but our current shop uses HP exclusively and we have old stuff. I guess HP used to be better back in the day, but anything made since the twenty-teens has been a crapshoot at best for me.

I’ve heard that HP servers will ramp fans to 100% if their OEM hard drives aren’t installed. I haven’t had this problem with Dell servers.

To the OP: definitely be wary of this. It’s not as bad with older pre-HPE hardware, as long as you don’t update the firmware.
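
If you want to check this behavior on a box you already own, here’s a minimal sketch for polling the fan sensors over IPMI, assuming ipmitool is installed and IPMI-over-LAN is enabled on the BMC (the address and credentials below are placeholders):

```python
import subprocess

# Placeholder BMC address and login; substitute your iLO/iDRAC details.
BMC = ["-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "password"]

# Dump all fan sensor readings so you can see whether swapping in
# non-OEM drives or fans actually pushes them to 100%.
result = subprocess.run(
    ["ipmitool", *BMC, "sdr", "type", "fan"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```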

The G7 is really old; I can’t imagine running a business on antique hardware. A lot of cores means nothing if they’re terribly inefficient, not to mention the crashing.

I get that budgets are a thing and up-front cost usually gets the focus, but try to keep TCO in mind. Spending hours troubleshooting, or having the machine crash while you’re using it to earn money, almost always costs more than the pennies saved up front.

I have both HP servers (DL380 G5s) and Dell servers (PE 2950 G3) plus a number of custom-built mid-towers working as servers as well. While the HPs and Dells have been good for me in terms of reliability and features for the price I paid, I’m debating custom-built machines in the future (think server chassis with prosumer hardware inside), since I can get more modern equipment while staying on budget, or going with older Supermicro equipment. The reason I would drop HPE is that their Service Packs and firmware updates are behind a paywall; they want you to have an active support contract to get them. Dell doesn’t do that, but they still have some hardware that is proprietary or will only work with other Dell hardware, such as PCIe devices in some systems.

You have to go with 3U or 4U to be able to fit modern cooling solutions; prosumer coolers don’t work (well) in a 1U or 2U form factor unless you go custom liquid cooling or buy a really niche AIO.

High-density servers are almost off the table, which may or may not be a deal-breaker for some people.

Not to mention the out-of-band management features desired in business environments are not usually present on consumer boards: IPMI, iLO, etc. This is in addition to hardware features like swappable power supplies.
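
For anyone who hasn’t used it, this is the kind of thing IPMI buys you: power control from a script, with no OS running on the target. A minimal sketch with ipmitool, again with placeholder address and credentials:

```python
import subprocess

# Placeholder BMC address and login; any board with IPMI works the same way.
BMC = ["-I", "lanplus", "-H", "10.0.0.60", "-U", "admin", "-P", "password"]

def chassis_power(action: str) -> str:
    """Query or change power state out-of-band: 'status', 'on', 'off', 'cycle'."""
    out = subprocess.run(
        ["ipmitool", *BMC, "chassis", "power", action],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

print(chassis_power("status"))  # e.g. "Chassis Power is on"
```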

I’m unable to edit my post, so here’s some more context/information:

Right now I am running all 2U servers but will be migrating to 4U, since I have lots of rack space and it will allow for larger fans that spin at a slower RPM. Yes, getting a 1U or 2U chassis with non-enterprise equipment is more of a challenge, especially on a budget. As for server boards, I have been focusing on ASRock or Supermicro, which are more enterprise than prosumer. I want something with built-in dual 10G and, ideally, enough PCIe slots to cover my use case; IPMI would be nice if possible. I want to move from QNAP to TrueNAS for storage, and I use Proxmox for my hypervisor, so I will be moving from hardware RAID to ZFS there too.
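
For the hardware-RAID-to-ZFS move, here’s a minimal sketch of creating a mirrored pool from raw disks, assuming the controller is flashed to IT/HBA mode so ZFS sees bare drives, and using placeholder disk IDs:

```python
import subprocess

# Hypothetical disk identifiers; always use /dev/disk/by-id/ paths so the
# pool survives device renumbering across reboots.
disks = [
    "/dev/disk/by-id/ata-DISK_SERIAL_A",
    "/dev/disk/by-id/ata-DISK_SERIAL_B",
]

# Create a two-way mirror named "tank". ZFS wants raw disks, so the HBA
# must pass them through rather than hide them behind a RAID volume.
subprocess.run(["zpool", "create", "tank", "mirror", *disks], check=True)

# Verify pool health after creation.
subprocess.run(["zpool", "status", "tank"], check=True)
```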

Hot-swappable power supplies are doable in a consumer system but are much more expensive to implement. I am looking at adding enough compute servers to tolerate the failure of more than one compute server; at the moment I can lose two complete servers and still keep the required systems running.

I am working on a solution for iDRAC/iLO using a Raspberry Pi, and I’m looking at a managed PDU or some other method to power a server on from cold. I have also heard that ASRock may be making an iDRAC/iLO-style PCIe card too.
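
Depending on the NIC, a managed PDU may not even be necessary for cold power-on: if the board supports Wake-on-LAN, the Raspberry Pi can send the magic packet itself. A minimal sketch, with a placeholder MAC, and assuming WoL is enabled in the target’s firmware/NIC settings:

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a WoL magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_hex = mac.replace(":", "").replace("-", "")
    packet = bytes.fromhex("FF" * 6 + mac_hex * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC for the server's NIC; run this from a Pi on the same LAN.
wake_on_lan("aa:bb:cc:dd:ee:ff")
```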

I have not settled on a solution yet and keep going back and forth between building my own or using old enterprise gear, though if I do go enterprise I would pick Supermicro first, then Dell.

The biggest reasons for wanting to change out my existing compute servers are the high power usage, no direct access to the disks, fan noise (minor, as they are in a dedicated room), and iDRAC/iLO so old that the remote console does not work with modern browsers.

I increased your trust level to standard member, so now you should be able to edit posts.

Yeah, I agree. Had I known this was an issue I would never have bought it. Electricity is very cheap where I live, so power efficiency isn’t a concern, but the setup isn’t reliable. I assumed that server hardware was built with enough margin in components like capacitors that age wouldn’t be an issue.

I’m currently using this as a desktop, so reliability isn’t too important, but the original plan was to use it as a Proxmox box.

Well, the planned use case for the next server is that everybody remote-desktops into a machine and does their programming work on there. Everybody gets all cores and threads assigned as vCPUs. The amount of time I spent waiting on compilers at previous jobs was insane, and I don’t want anybody under me to have to deal with that.
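
On Proxmox that allocation is scriptable; a quick sketch driving the qm CLI, assuming a placeholder VM ID of 101 and a dual-socket host:

```python
import subprocess

VMID = "101"  # placeholder VM ID

# Give the dev VM 2 sockets x 32 cores = 64 vCPUs (match your host's
# actual topology). Over-committing every VM with all threads works for
# bursty compile jobs, but leave headroom if builds often overlap.
subprocess.run(["qm", "set", VMID, "--sockets", "2", "--cores", "32"], check=True)
```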

I’ll probably hold off until I can buy one of the 64-core AMD EPYC Rome chips (and a dual-socket motherboard) and put that in a colocation.

Try adding a non-OEM replacement fan or an additional fan, and the system ramps the OEM fans to 100% 24/7 and complains that there is a fan missing. When you drill down, it complains that there is a fan present but it is not OEM and needs to be removed. Yeah. The whole HPE system is a paywall now. Like Apple, they put chips in their fans to communicate with the mainboard for “guaranteed functionality.” It is a standard server 4-pin fan connector.

I have never had this issue with Dell. We also had to emergency-patch our HPE systems that had SSDs due to a firmware bug that would periodically erase the SSDs, particularly if they were not HPE-rebranded Intel SSDs.

I prefer Dell. Need drivers or a BIOS update for your 10-year-old PowerEdge? No problem! Browse to support.dell.com and enter your tag number. HP are jerks and cut off your access to firmware updates when the warranty expires. They’re also the only brand to show me VMware’s purple screen of death. Multiple times. In production.
