Rebuild around E5-2680 V4 or build something new?

Here is my situation: I have an ESXi server which is way overpowered, and I want to reduce power consumption and heat. Any advice is welcome, but this thread is mostly going to be me ranting and trying to figure it out.

Here are the specs

  • Chassis: SuperChassis 826BA-R1K28WB
  • Backplane: BPN-SAS2-826EL1
  • PSU: 2 x PWS-1K28P-SQ
  • Fans: 3 x FAN-0094L4 (San Ace 9G0812P1G09)
  • Motherboard: Supermicro X10DRW-E
  • CPU: 2 x Intel Xeon E5-2680 V4 (14C/28T each)
  • Memory: 256GB DDR4 2400MHz (8 x 32GB)
  • CPU Heatsinks: 2 x passive LGA 2011 heatsinks
  • HBA (TrueNAS): LSI 9207-8i (PCIe 3.0 x8)
  • NIC: Broadcom 57810S (2 x 10G SFP+)
  • Boot Media: HP 8GB flash drive (ESXi boot)
  • Datastore: Micron 9100 PRO 3.2TB PCIe NVMe SSD

It wasn’t too bad when I first got it, but the problem is that I added 12 x 8TB SAS disks and created a secondary NAS using the LSI 9207-8i passed through to a TrueNAS VM. It works great, but the added heat really takes its toll. I’m right on the edge of needing to bump the fan speed up again, and that makes it a touch too loud. Frustratingly, I really don’t need that much CPU at all. As long as I have 128GB of RAM, I’m good. 64GB could work, but it would be a real stretch and I’d have to reconfigure some VMs.

I also can’t ditch a CPU, as I’d then lose half my PCIe slots! So now I’m trying to figure out the best way forward. What I really don’t want to do is buy more hardware and replace things, only to save almost nothing power/heat-wise.

Here are the options I’m choosing between, and I’ll add more as I think of them:

  • Buy 6 x 14TB disks and make an array with the spare drive bays in my main TrueNAS to replace the 12 x 8TB array. This saves me a lot of heat and lets me drop a CPU. Cost: $1200, and it doesn’t do anything with the stacks upon stacks of 8TB SAS disks I have. I don’t think this option makes sense.

  • Buy a disk shelf, put the 12 drives into it, and attach it to the main NAS. Cost: ~$400, maybe less. The problem here is how much overhead I’m adding by having another disk shelf with more PSUs, fans, etc. This would free me up to do almost anything with the ESXi server.

  • Buy a new WIO board for the Supermicro chassis on a lower-powered platform. Limited board choices though, and it’s tough finding one with enough PCIe. Cost: ??

  • Buy a new chassis entirely, probably another 2U Supermicro so I can re-use parts from this one. New board, new CPU, etc. Cost: ??

  • I did think about going EPYC and re-using my 256GB of DDR4 ECC, but it’s all 2400MHz. Can I use 2400MHz with EPYC?

Another idea is figuring out a more efficient use of disks. I just threw 12 in there and made a RAIDZ2. I have 32TB used and 32TB available. If I could reconfigure that to use fewer disks, that would help too.
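Rough pool math, as a back-of-the-envelope sketch only (real ZFS usable space also loses a bit to metadata, padding, and slop space, so treat these as ballpark figures):

```python
# Rough RAIDZ2 capacity estimate - a sketch, not exact ZFS accounting.
# Assumes drive sizes are marketing TB (10^12 bytes), usable ~= (disks - 2 parity) * size,
# and ignores metadata/padding overhead, which shaves off a few more percent in practice.

def raidz2_usable_tib(num_disks: int, disk_tb: float) -> float:
    """Approximate usable capacity of a single RAIDZ2 vdev, in TiB."""
    data_disks = num_disks - 2                      # RAIDZ2 spends two disks' worth on parity
    usable_bytes = data_disks * disk_tb * 10**12    # marketing TB -> bytes
    return usable_bytes / 2**40                     # bytes -> TiB (what TrueNAS reports)

# Current pool: 12 x 8TB RAIDZ2
print(f"12 x 8TB RAIDZ2 : ~{raidz2_usable_tib(12, 8):.1f} TiB usable")

# A couple of narrower layouts for comparison
print(f" 8 x 8TB RAIDZ2 : ~{raidz2_usable_tib(8, 8):.1f} TiB usable")
print(f" 6 x 14TB RAIDZ2: ~{raidz2_usable_tib(6, 14):.1f} TiB usable")
```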

Could you just swap out the 2680 V4s for some 2618L v4s if you don’t need so much CPU? The LCC dies tend to idle and run with a lot less power than the MCC/HCC dies, more than the core count alone would suggest.

Yes, as far as I know EPYC can handle 2400MT/s.

Suggestion, this mainboard:
https://www.asrockrack.com/general/productdetail.asp?Model=EPYC3451D4U-2L2T2O8R#Specifications

EPYC 3000 series, the 3451 is 16c/32t (there’s an overview here), mATX form factor, butt-loads of connectivity. No idea on price, but a lower spec’ed board (3251 SoC) was USD1000 at Amazon yesterday.

HTH!

Perhaps, but they are pretty expensive. I’d be spending $250, and then I’d really be hoping I even see a difference.

I’ve been bitten before by the L CPUs having a lower TDP, only to really not make much of a difference.

It’s something to look at for sure, and it would be an easy change.

Now that is a very interesting board… and it looks like I could use 128GB of my existing RAM

I’m going to look into this for sure


Another idea I’ve had is to reduce my spindle count to something that can fit into my NAS alone. I have about 40TB of data in total, but I’m not sure I can figure out a way to make an array of that size out of 8TB disks in 12 bays without just going for another big RAIDZ2, which I’m not sure I want to do.
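To sanity-check that, here’s a quick sketch of which 8TB-disk layouts would fit ~40TB of data in at most 12 bays. It uses the same rough approximation as the earlier snippet (no ZFS overhead counted), and the mirror/two-vdev layouts are just illustrations of the options, not recommendations:

```python
# Which 8TB layouts fit ~40TB of data in at most 12 bays? Quick sketch only -
# usable ~= data disks * size, marketing TB, no ZFS overhead counted.

DATA_TIB = 40 * 10**12 / 2**40     # ~40TB of data expressed in TiB (~36.4 TiB)
DISK_TB = 8

# (layout name, total bays used, data disks after redundancy)
layouts = [
    ("12 x 8TB RAIDZ2 (current)",  12, 10),
    (" 8 x 8TB RAIDZ2",             8,  6),
    (" 2 vdevs of 6 x 8TB RAIDZ2", 12,  8),
    (" 6 x 8TB mirror pairs",      12,  6),
]

for name, bays, data_disks in layouts:
    usable_tib = data_disks * DISK_TB * 10**12 / 2**40
    verdict = "fits" if usable_tib > DATA_TIB else "too small"
    print(f"{name}: {bays} bays, ~{usable_tib:.1f} TiB usable -> {verdict} for ~{DATA_TIB:.1f} TiB")
```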

Found a price on Newegg: 2k USD, before shipping :roll_eyes:

Yikes! :exploding_head:

Yeah that’s a real shame, sweet board!

Okay I have made some progress!

I ordered an MCP-240-82608-0N, which is a new rear window for the WIO chassis that pretty much just turns it into a regular chassis which can take ATX boards.

So now I’m essentially halfway there; I just need to choose a new board, CPU and memory.

I just need 3 x PCIe slots, each at least x4.

It’s tempting to find something that could re-use just one of these CPUs, plus my memory. I’m not sure exactly how much power I’d save there though.

I don’t think I want to go for any middle ground. It’s either re-use one of these CPUs, or go all out on something pretty recent for even lower power.

Now the decisions come.

The X10SRM-TF motherboard is $250: single socket 2011-3, onboard 10G NIC (so I could dump my dual-port SFP+ NIC), M.2, and three PCIe slots.

That might be a good stop-gap until I get my hands on something newer.

I need to decide if $250 is a good enough deal for whatever power draw I’m left with. The problem with my setup is that I want/need 128GB of RAM. 128GB of unbuffered ECC alone is nearing $700, not even including a board and CPU. So I’m looking at well over $1000.

But will this $250 pay for itself? That’s the million-dollar question.

I think ditching 128GB of RAM and a CPU should make a dent though
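For what it’s worth, here’s a back-of-the-envelope payback sketch. The $250 board price is from above; the idle-power savings and the electricity price are pure assumptions, so substitute your own measurements and local rate:

```python
# Rough payback estimate for the $250 board swap - a sketch with assumed inputs.
# WATTS_SAVED and PRICE_PER_KWH are guesses, not measured values.

BOARD_COST_USD = 250        # X10SRM-TF price quoted above
WATTS_SAVED = 60            # assumed idle savings from dropping one CPU + half the RAM
PRICE_PER_KWH = 0.30        # assumed electricity price in USD/kWh

kwh_saved_per_year = WATTS_SAVED / 1000 * 24 * 365
savings_per_year = kwh_saved_per_year * PRICE_PER_KWH
payback_years = BOARD_COST_USD / savings_per_year

print(f"~{kwh_saved_per_year:.0f} kWh/year saved -> ~${savings_per_year:.0f}/year")
print(f"Payback on the $250 board: ~{payback_years:.1f} years")
```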

Alright, it’s decided.

I’m going to ditch the E5-2680 V4s and RAM, and grab an ASRock X470D4U, 128GB of DDR4 UB-ECC and maybe a 5900X. It looks like the 5900X on its own is as fast as my dual E5-2680 V4s…

The 3700X is the winner; that’s what I’ll be going with.

Keep in mind that Ryzen can be power-tweaked via the BIOS quite considerably. Vermeer has ECO mode, but Matisse should still have all the tools available to ramp down power draw. With my Vermeer (5900X @ 65W TDP) I get 45% less power (and around 25°C lower temps) for roughly a 10% performance hit, give or take. And your typical server use case wants cores, not insane clocks.

I just checked my system.

Dual E5-2680 V3, 24 cores / 48 threads total
64GB ECC
It has an idle draw of 120 watts, see picture below.

My Unraid server:
Intel 10400
32GB RAM (non-ECC)

It has an idle draw of 26 watts.
I had no idea there would be that much of a difference. See picture below.
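Just to put numbers on that gap: taking the 120W and 26W idle figures above at face value (the electricity price in the sketch is an assumption), the difference over a year works out roughly like this:

```python
# What the 120W vs 26W idle difference works out to over a year.
# The wattages are the figures quoted above; the electricity price is an assumption.

OLD_IDLE_W = 120            # dual E5-2680 V3 box
NEW_IDLE_W = 26             # Intel 10400 Unraid box
PRICE_PER_KWH = 0.30        # assumed USD/kWh - substitute your local rate

delta_kwh_year = (OLD_IDLE_W - NEW_IDLE_W) / 1000 * 24 * 365
print(f"Idle delta: {OLD_IDLE_W - NEW_IDLE_W} W -> ~{delta_kwh_year:.0f} kWh/year")
print(f"At ${PRICE_PER_KWH}/kWh that's ~${delta_kwh_year * PRICE_PER_KWH:.0f}/year")
```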
