2025 Reusing Servers and Workstations: Wiki

I now realize why no one has done a broad enterprise hardware comparison across generations.
I compared 1,600 CPUs to land here… one chart to rule them all

Long story short: if it has DDR3, do not buy it in 2025. But DDR4 does not make you safe from the electric company.

If your goal is to build the baddest firebreather of a server, look up CPU benchmarks and work down from the top until you find one you can afford.

For the rest of us that actually pay for electricity: performance per watt of TDP is our golden ratio.

Donuts per dollar, or how fast per watt, is our tangible driving factor.

There are tiers to this: you probably don’t want a 300 watt TDP CPU in your NAS or a 40 watt TDP CPU in your virtualization server.
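
Here’s a minimal sketch of that golden-ratio math, assuming PassMark multi-thread score as the performance proxy; the parts and numbers are hypothetical placeholders, not rows from the chart:

```python
# Performance per watt of TDP: PassMark score / TDP, then rank.
# Entries are hypothetical placeholders, not data from the chart.
cpus = [
    # (name, passmark_multithread_score, tdp_watts)
    ("example-low-power", 10000, 40),
    ("example-midrange", 30000, 150),
    ("example-firebreather", 55000, 300),
]

for name, score, tdp in sorted(cpus, key=lambda c: c[1] / c[2], reverse=True):
    print(f"{name}: {score / tdp:.0f} PassMark points per watt")
```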

Onto the chart:

This charts the peak performers per generation, but saving y’all the math: on average, the rest of each CPU generation is 50% less efficient than the best performer listed here.

The exception is the old DDR3 Intel low-power L SKUs from Skylake v5 and older, where per-generation efficiency averages 80% worse than the L SKUs.

For virtualization servers:

AMD EPYC is more efficient than Intel overall, and even more so per generation, but Intel had market inertia, so Intel kept outselling AMD for years while offering a worse product.

As of 2025, Xeon Scalables are cheaper per (performance per watt) than EPYC.

You’ll have to run an EPYC or Xeon full tilt for YEARS to justify the current price delta on the used market, but all else being equal: buy EPYC Rome or newer.
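
As a hedged sketch of that break-even math (every figure here is a placeholder; plug in your own price delta, wattage difference, and electric rate):

```python
# How long must the more efficient CPU run at full load before energy
# savings cover the used-market price delta? All figures are placeholders.
price_delta_usd = 500        # extra cost of the EPYC over the Xeon, used
watts_saved_at_load = 60     # full-load power difference between the two
rate_usd_per_kwh = 0.15      # your electric rate

hours = price_delta_usd / (watts_saved_at_load / 1000 * rate_usd_per_kwh)
print(f"Break-even after {hours:,.0f} hours (~{hours / 8760:.1f} years at 24/7)")
```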

*2nd Gen (Cascade Lake) Xeon Scalables are the sweet spot in the used market as of right now.

Optane DCPMM / Micron NVDIMM persistent memory support on Xeon Scalable 2nd and 3rd gen is a welcome feature for virtualization servers.

For NAS Servers:

Intel Coffee Lake Xeons can be had ridiculously cheap with 10th-gen i5 levels of performance, which is perfect for a NAS and light virtualization loads.

For AMD fanboys, it’s the EPYC 3001 Snowy Owl series.

As above, the Intel counterparts outsold AMD, but this time it was justified.

*There may be one omission at the very top of this chart; I purchased the CPU personally and am working on validating its place.

There will be videos and build logs elaborating everything above including several builds following the recommendations, hardware sourcing, and troubleshooting used enterprise gear.


Part 2: Picking and sourcing your hardware

Stay flexible and be ready to substitute.

Here’s the first crossroad:

Build your own versus OEM or prebuilt.

I choose both, harvesting OEM for my builds.

OEM servers come in 2 variants: appliances or general purpose.

Appliances from ArcServe, NetApp, Oracle, SmallTree, etc. are made by the bigger OEMs and rebranded with some special sauce.

These are the cheapest way to get hardware precisely because there will be firmware locks, firmware incompatibilities, little to no documentation, and general mayhem. You can get known-good configurations for less than the individual components.

These will typically be 1u or 2u servers and very loud.

The cheapest usable systems are OEM general-purpose servers or workstations from any of the major players:

ASRock Rack, Dell, Gigabyte, HPE, Lenovo, Supermicro. (That was alphabetical… not the order to look for.)

Problems arise when you go to deploy, or more specifically fix, an OEM server or workstation.

Dell, HP Enterprise, and Lenovo use proprietary hardware: power supplies, motherboards, down to small bits like TPM modules, which are required for modern Windows deployments.

More on that later.

ASRock Rack, Gigabyte, and Supermicro are OEMs but also motherboard manufacturers for custom builds. (Again, alphabetical, not the order to look for.)

These OEMs are typically more expensive used, but if you can find a prebuilt system using an off-the-shelf motherboard, it’s a serious shortcut: you may be able to swap the board into a taller case for quieter cooling and be done very easily.

Prebuilt systems save a ton of time troubleshooting compatibility. Even if you build your own, a running system is a good baseline to revert to in the worst case. If you’re into cars, it’s like a donor car versus piecing it together.

Lacking enterprise gear experience?

I’d strongly recommend buying a complete OEM server or workstation. The amount of RAM is not a critical factor at time of purchase; enough RAM to boot and test everything is all you need.

You can then harvest what you need for your build.

Form factor:

Servers and workstations come in all shapes and sizes from 1u to 6u, with workstations typically being rack-mountable 4u and 6u. The general rule is that larger servers are quieter, unless we are dealing with high-density blade enclosures, which are the loudest of all.

Servers also come in half or full depth, though half depth is typically reserved for OEM networking appliances or aftermarket builds.

You can retrofit some OEM servers with quieter fans, but that may introduce hardware warnings when the fan speed is not what the manufacturer expects. This can cause boot issues. Additionally, OEMs are notorious for using proprietary fan headers, so splicing is required.

1u servers are the least desirable and the cheapest on the used market.

A room with sufficient noise isolation to quiet a 1u server is not common.

I prefer half-depth 4u chassis when rack mounting or when I need hot-swap drive bays.

Alternatively, large gaming cases like the Corsair 6000D Airflow can fit E-ATX boards, so you can have a full server in a consumer case.

Where you buy:

Second hand resellers often test components prior to selling, which helps when purchasing individual components.

Complete systems are better sourced from auction sites.

Auctions are my go-to, but it is a gamble with risks and rewards.

eBay is the next safest on the list, but that all comes down to the seller.

I’ve won and lost big buying from eBay while hoping my due diligence is enough to protect me.

Last, and most dangerous but potentially most profitable, are surplus auctions.

Most surplus servers go to auction missing RAM, sometimes missing CPUs (with the coolers still installed), and always with the drives removed.

You cannot test them and have to bid accordingly.

Additionally, surplus sales are not done by IT professionals, so you can expect errors in the descriptions.

I’ve purchased a stack of servers pictured complete, only to pick up barebones units lacking power supplies, drive caddies, RAM, CPUs, and even the CMOS batteries.

TPM and Windows:

TPM is the Trusted Platform Module, and for many OEM servers / workstations it is completely impossible to acquire.

If you want to virtualize Windows 11 24H2 or newer, you’ll need a TPM module and passthrough, or be left behind on hacked and EOL versions of Windows. Not a great place to be. This was well intended by Microsoft, but was side-stepped in the past due to TPM availability constraints.

Windows Server 2022 and 2025 require a TPM 2.0 module for certain security and cryptographic roles to be enabled and functioning.

Boards from Supermicro, Gigabyte, and ASRock Rack can all have TPM 2.0 modules added, and modules are readily available back to the Intel Xeon Scalable and EPYC 7000 series. The modules range from $40-80 USD but are necessary if you need Windows in 2025.


Reserved for Troubleshooting hardware


Reserved for using the hardware


Reserved for future graphs and breakdowns


Out of curiosity: What test was run in order to come up with the mark numbers? I’d like to run it against the machines in my fleet to start making choices about what to do with them. :slightly_smiling_face:


I pulled from cpubenchmark.net’s PassMark database and testing suite:
PassMark PerformanceTest - PC benchmark software

Having run this same benchmark on literally hundreds of machines (then wiping and deploying clean installs), I trust it to give me a standardized CPU benchmark, as most clean installs and builds net results within the margin of error of the published PassMark results.

The omission is a product of the published error margin being high and zero coverage of this CPU. I realized yesterday this decision cost $2k+, but if it’s truly the efficiency monster I suspect it to be:

Worth it


What if your application needs gobs of RAM? What’s the best value for 384 to 512 GB? Value would be memory bandwidth per dollar (for platform plus RAM) with 384 to 512 GB of memory. CPUs that support AVX2 or better only.

So that depends…
If you need ultra-fast RAM, which only became available with the last 2 generations from both AMD and Intel, the EPYC 9004 is the way to go.

Caveat:
EPYC runs best with 1 DIMM per memory channel.
That’s 12 DIMMs, which dictates your loadout in multiples of 192 GB (quick math below):
192 GB
384 GB
768 GB
up to 3072 GB at 4800 MT/s for 9004, 6000 MT/s for 9005
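
The quick math on those loadouts, assuming the common RDIMM capacities (a trivial sketch):

```python
# 1 DIMM per channel on a 12-channel EPYC: capacity is 12x the DIMM size.
channels = 12
for dimm_gb in (16, 32, 64, 128, 256):
    print(f"{channels} x {dimm_gb} GB DIMMs = {channels * dimm_gb} GB")
# 12 x 256 GB = 3072 GB, the ceiling mentioned above
```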

I’ve run regressions, and EPYC loses up to 30% performance when any configuration other than 1 DIMM per memory channel is used.

Second Gen Scalables and EPYC 7002 use DDR4 2933 MT/s, and mine is running 384 GB of RAM between 2 CPUs for 1/10th the cost of my last 9004 deployment.

EPYC 7003 is the sweet spot between the whole package and massive RAM loadouts.

DDR4 2933 MT/s is sub $1/GB
DDR4 3200 MT/s is sub $2/GB
DDR5 4800 MT/s is $4/GB
DDR5 5600 MT/s is $5+/GB
And that’s not even the super fast stuff.
So, it’s all relative.
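
Since the original question was memory bandwidth per dollar, here’s a back-of-the-envelope sketch combining those per-GB prices with theoretical peak bandwidth (channels × MT/s × 8 bytes per transfer, per socket); platform cost is ignored, and all figures are rough assumptions:

```python
# Theoretical peak bandwidth vs. RAM cost for a 384 GB loadout.
# Channel counts are per socket; prices are the rough per-GB figures above.
platforms = [
    # (name, channels, mts, usd_per_gb)
    ("2nd Gen Scalable, DDR4-2933", 6, 2933, 1.0),
    ("EPYC 7003, DDR4-3200",        8, 3200, 2.0),
    ("EPYC 9004, DDR5-4800",       12, 4800, 4.0),
]

capacity_gb = 384
for name, ch, mts, usd_per_gb in platforms:
    gbps = ch * mts * 8 / 1000          # GB/s theoretical peak
    cost = capacity_gb * usd_per_gb     # RAM cost only
    print(f"{name}: {gbps:.0f} GB/s, ${cost:.0f} -> {gbps / cost:.2f} GB/s per $")
```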


2933 MT/s at $1/GB looks like the best value then. Thanks!


Chart updated to add Comet Lake OEM, Comet Lake, and Rocket Lake

unsure how those fell off during the review

much to no one’s surprise: nothing changes

Intel’s Comet Lake OEM SKU is the only outlier, but it’s unobtainable and the same price used as a new EPYC 4004.

I am very happy to see that you have a NO category. I agree with your reasoning.

I got a Genoa 9124 so that I could plug in all of the PCIe devices I want without having to buy multiple computers to accomplish that task. I also want to have enough RAM, and with the EPYC 9xxx I can buy some RAM now and upgrade it later. None of my tasks use more than 3 CPU cores so far, and none are CPU-speed limited.

What do you think about the 9015? I am not going to upgrade any time soon.


Thanks for compiling this. Could you post a link to a CSV version of this too?

The Comet Lake is a standout for performance/W. Do you know how any of these idle?


TDP (Thermal Design Power) is a rating for heat output, not electrical consumption.

Do you also have the electrical power consumption figures? It would also be interesting to see single-core versus multi-core.


You are absolutely correct, but that is not how the industry uses or derives TDP.
TDP is used as the design envelope for the CPU package, and a CPU is effectively a closed system: all energy entering the CPU is considered converted to heat, with the output being logical operations performed plus heat.

It’s supposed to be the wattage a CPU converts into logical operations at peak load, with heat as a byproduct.

I measure that against the benchmark scores to give an efficiency at par value.

You’ll immediately say “but the par is different per CPU” and be correct.
What we end up with is tiers of performance versus efficiency: logic operations against thermal output via power consumption, due to the way TDP is derived to begin with. BUT the efficiency yardstick is equivalent across all those tested.

To chart this properly, it would be 3-dimensional.

Net output is single-dimensional and perfect, since you don’t want a 400-watt EPYC for your media center NAS next to the TV (unless you do), or a 65-watt EPYC on AM5 as your VDI server in a medium-sized enterprise (this, actually, is a bad idea…).

In summation, I flattened a 3-dimensional problem into a single figure…

The question about measured electrical consumption is answered by TDP: because of the way TDP is derived, and in conjunction with the benchmarks, we render peak efficiency, as the electrical load is derived from the TDP.

THE standout for performance per watt is actually Raphael EPYC on AM5;
it’s the obscured top line.

But it was so far above everything else (50% above Comet Lake) that I bought that model (4464P) and am in the process of validating it as I type this.

Yeah, the arbitrary NO category was the only way I could convey the net result of this research.

I was genuinely shocked by the results.

As mentioned in the original post, each CPU family has on average 1 performer that is about 50% above the rest of the lineup. This works in our favor, since lower-tier CPUs from a family tend to be more efficient than the top performer, which isn’t necessarily reflected accurately by used prices.

Enterprises extending life cycles will upgrade to the top performer a socket can support, which pushes demand to the top performers.

Ironically, the most efficient performers will have their values plummet on the used market.

Since we are measuring peak efficiency under load, you are getting maximum productivity per watt converted into heat.

I’d say go after the most efficient offering when it fits your budget, but the calculation of initial cost versus efficiency is a compound equation that requires a load calculation.

I’d have to remove a lot of cursing from the comments section, but yes, I would like to publish the complete chart.


The assumption here is TDP as a representation of the max power drawn, which at least for consumer processors is not the case at all. I understand that it’s useful to make this comparison more feasible, though.

The other assumption is that if the CPU is not taxed to its maximum (e.g., a fully single-threaded operation or lighter loads), the performance per watt at max also translates to performance per watt at lower loads. I’m also not sure how accurate this is.

Since you have the processor, could you also do benchmarks for single-core performance and also check how much power it uses at idle? Look forward to the results.

Which processor is it for each CPU generation? Is it already in the table and am I missing it? This would be really useful to know.

I don’t think anyone minds :d

Where are you finding this?

64 GB DIMMs are gonna be more expensive, since that was the largest size widely available for DDR4 ECC RDIMMs.

Here’s $1.10/GB with no quantity discount.

Just something to note: Optane PMEM runs at 2666 MT/s with Scalable Gen 2, so I believe that limits you? More of a question; I am not positive on this.


I currently have my eye on EPYC 7773X combinations like this for an AI rig:

  1. 2x AMD EPYC 7773X
  2. 1x Gigabyte MZ72-HB2
  3. 3x Nvidia Tesla P40
  4. 2x 512GB DDR4

Should be able to run two full models on CPU and up to three condensed models on each of the GPUs. Plus two times 512 GB of main memory. Does not look bad for about €5500 of investment.
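
If it helps sanity-check the fit, here’s a rough sketch; the model sizes are hypothetical assumptions for illustration, not benchmarks:

```python
# Rough capacity check for the proposed rig. Model sizes are assumptions.
p40_vram_gb = 24            # per Tesla P40
gpus = 3
ram_gb = 2 * 512            # two times 512 GB of main memory

condensed_model_gb = 8      # e.g. a heavily quantized ~13B model (assumption)
full_model_gb = 405         # e.g. a ~405B-class model at 8-bit (assumption)

per_gpu = p40_vram_gb // condensed_model_gb
print(f"{per_gpu} condensed models per GPU ({per_gpu * gpus} across {gpus} GPUs)")
print(f"{ram_gb // full_model_gb} full models in {ram_gb} GB of system RAM")
```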
