Sizing Server PSU

Hi guys, I’m trying to size a server PSU and I’m stuck. I went through dozens of PSU calculators and none of them are up to date with the latest Sapphire Rapids Xeons. YouTube has been of no help either, although at least there I learned that TDP figures aren’t meant to be used directly in PSU calculations.

I’m not sure whether I’ll be going with a consumer or a server PSU at this stage, given that I’m having a tough time finding a server chassis that suits my needs. I’m slowly giving up on rack-mountable chassis and leaning towards the Fractal Design Define 7 XL, which doesn’t suit my needs 100% either. I want to be able to have 24 HDDs in one case.

Anyway, whether I go for consumer or enterprise PSUs is irrelevant; either way, I can’t figure out how much power I need. The CPU is a big unknown to me. Any clues?

Read Thermal design power - Wikipedia

Power rating is generally calculated as 150% of TDP. So a 300W TDP CPU requires 450W from the PSU.

I’d suggest getting an ATX 3.0 PSU; SPR can have very high transient spikes that the ATX 3.0 spec accounts for better than an ATX 2.0 PSU.
A super rough estimate for PSU sizing would be 300% of total estimated component power usage (based on total part TDP) if going with an ATX 2.0 PSU, and perhaps ~175% if going with an ATX 3.0 PSU.
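Here’s that rule of thumb as a minimal sketch (the multipliers are just the rough guesses above, not anything from the ATX specs):

```python
# Rough PSU sizing from summed component TDPs.
# The headroom multipliers are rule-of-thumb guesses, not spec requirements.

def suggested_psu_watts(total_component_tdp_w: float, atx3: bool = True) -> float:
    """Return a rough PSU wattage target for a given summed component TDP."""
    headroom = 1.75 if atx3 else 3.0   # ~175% for ATX 3.0, ~300% for ATX 2.0
    return total_component_tdp_w * headroom

# Example: a 300W-TDP CPU plus ~150W of other parts (assumed figure)
print(suggested_psu_watts(450, atx3=True))   # 787.5
print(suggested_psu_watts(450, atx3=False))  # 1350.0
```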

I was in the same boat as you with cases, I couldn’t find anything reasonable that housed enough drives.
Regarding 24 HDDs: unless you have a RAID or HBA card that can do staggered drive spin-up, there will be many PSUs (even in the >1000W class) that don’t have sufficient amperage on the 5V rail to start all those drives up reliably.
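As a back-of-the-envelope check, something like the sketch below is all it takes; the per-drive currents are placeholder assumptions, so substitute the figures from your drive vendor’s product manual:

```python
# Rough rail budget for a stack of HDDs.
# Per-drive currents are ASSUMED placeholders; real figures vary by model
# and are listed in the drive vendor's product manual.

N_DRIVES     = 24
SPINUP_12V_A = 2.0   # assumed peak 12V draw per drive during spin-up (motor)
RUN_5V_A     = 0.7   # assumed 5V draw per drive for the drive electronics

need_12v = N_DRIVES * SPINUP_12V_A   # 48A on 12V if everything spins up at once
need_5v  = N_DRIVES * RUN_5V_A       # ~17A on 5V, close to a typical 20A rail limit

print(f"12V at spin-up: {need_12v:.0f} A, 5V: {need_5v:.1f} A")
```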


Another important thing to remember is that just because your TDP = X, this doesn’t mean that an extra 100 watts or so over that will be enough. A CPU isn’t the only thing that will draw power. Do you plan on using peripherals? Are you using mechanical hard drives or SSDs, or both? Exactly what else do you plan on plugging into that server, and do you know for a fact that no one else will be adding anything to it?

Personally, I’m not running a server in the purest sense of the term, but these days even a home networking PC will serve as a server. Currently I’m running 34 drives on my workstation/home network/“server” PC (my own personal abomination). I’m using a Corsair HX1000i Platinum PSU on a CyberPower 1000 Watt sine wave UPS and it seems to run alright. Mind, I’m also using a lot of SSDs and that helps with amperage. I’m also running 4 external drives for backups, and since they provide their own power this also helps.

According to ASUS I’m using a server-class system board, but I’m not using a Xeon. Your power supply is vital so you really shouldn’t cut corners. If it were me, I’d get something in the Platinum class, a sine wave UPS, and a PSU that can handle at least a kilowatt. But that’s me. I don’t have enough information about your situation or your requirements to set anything in stone here.

I hope this helps :slight_smile:

Obviously. You may want to reread the last sentence of my reply, as I worded it precisely right :wink:

(no offence, of course. I’ve run server(s) at home since 2008 and currently upgrading my local network to 10Gbit and a high-availability SAN/NAS environment with 2 nodes (maybe a 3rd?) and some 40 bays for 2.5" and 3.5" drives. Work in progress as 16TB drives deplete my allocatable funds rather quickly :frowning: )

I hear you. In fact, I did debate quoting your thread, before explaining why it is important NOT to think that a TDP of 300 will render a safe margin for a 450 Watt PSU. However, twin_savage did state that many PSUs, even in the 1KW class, don’t have sufficient amperage on the 5V rail, so I decided that this was sufficient input in that regard.

To clarify, twin_savage makes a very valid point: amperage is often overlooked by enthusiasts and PC builders, and this is just one more reason why the more drives one adds to a system, the greater the risk of system failure. One can only draw so much “oomph” from a single rail. This is why I suggested making 1kW the minimum standard.

Computerz didn’t really leave us much in the way of requirements so it is difficult to suggest what sort of PSU they would need. I chose to err on the side of caution. Quoting me out of context like that can be misleading, but no offence is taken. You did indeed word your single sentence correctly. Again, I will put my statement in context by quoting myself here, for the sake of clarification:

‘Another important thing to remember is that just because your TDP = X, this doesn’t mean that an extra 100 watts or so over that will be enough. A CPU isn’t the only thing that will draw power. Do you plan on using peripherals? Are you using mechanical hard drives or SSDs, or both? Exactly what else do you plan on plugging into that server, and do you know for a fact that no one else will be adding anything to it?’

I’m fully persuaded that when it comes to servers you could talk circles around me. I’m not an IT professional, I’m not certified, and servers are not my forte, but I try to help out where I can. Given my limited 22 years of tearing apart PCs and rebuilding them from scratch, and having a modest background in electrical, I’ve learned a few things over the years. Ironically, I have worked on commercial/industrial servers, only because where I live (in a very isolated community in Canada’s G.W.N.) it’s not always practical to get an IT tech to fly all the way up here to solve a simple task. Up here, it’s all about the oil and time is money, so yours truly, with a natural knack for things, comes in handy now and then.

Dutch_Master I could probably benefit from your advice as I am also in the middle of a project and I’m thinking of improving my all purpose abomination in the way of a family server. For the time being I have my hands full, repairing PCs for others, and recovering roughly 20TB of data from a catastrophic system failure compliments of Windows Recovery, so I can relate to works in progress. :joy: I can’t even afford to look sideways at 16 TB hard drives. I have one 12 TB EXOS to sort files to and I’m waiting on another 10 TB HGST drive to make its way to my door.

I’m hoping I contributed something useful to this thread for our readers and I’m hoping Computerz will also elaborate a little more about what they’re looking for in the way of a server apart from how many drives they intend to use.


I ran into this problem when I built my workstation. There was a rule of thumb somewhere that you add a certain percentage on top of your maximum combined TDP draw, or something like that. It made sense because headroom is usually good.

Does that still apply these days?

If you’re going for so many drives and rack mount, why not get a used disk shelf? They come with the power supply to run all the disks, and then you can use any old server with a SAS card to interface to it.

Btw, the average 24-bay disk shelf uses a ~750W PSU, usually with a second one for redundancy.


Thanks for the link, had a read. I was hoping for a more accurate figure :frowning: .

What is SPR?

Thanks, will research ATX standards, was not aware of this.

I just wonder how those power calculators derive their power figures for all those CPUs. Do they test them and see how much power they consume?

Supermicro has a 24 bay case which costs $5k. No thanks.

I got a motherboard that natively supports 12 SATA drives which is exactly the number I need for now. Going through the manual, I couldn’t find any staggered start options so I’ll need to provision enough power for this in my PSU. For the additional 12 drives I would need an HBA but that’s some time away. Either way, I’ve got the power profiles for the HDDs from Seagate’s Product Manual.

@CHESSTUR I haven’t finalised all the details yet, just worrying about the CPU right now, one thing at a time. I’ll definitely be getting some nice PSU, not trying to cheap out at all but don’t want to get some overpowered beast. Don’t want to waste money.

But my plans include:

  • My mobo can support 12 SATA drives out of the box which is what I’ll be getting to start out with.
  • If or when I expand, I’ll add another 12 drives as I’ll be using ZFS. So I’ll need 2 more HBAs for that.
  • I’ve got a Quadro K4000 graphics card which I want to use for compute. According to spec sheets, it consumes 80W max.

I’m not sure if that’s right because:

NVIDIA Quadro RTX 4000 Specs | TechPowerUp GPU Database.

suggests the RTX 4000 has a 160W TDP. According to NVIDIA, the RTX A4000 consumes 140W max, so it would make sense for the RTX 4000 to consume 160W max; those figures are coming from NVIDIA after all. Either way, I would like the ability to upgrade my K4000 to an RTX or RTX A card in the future, so that means, as I said, an extra 160W.

  • I want to add at least 2 NVMe SSDs through an expansion card as my mobo supports only 1 NVMe out of the box, so that’s an extra PCIe slot gone. Maybe I’ll go for a quad NVMe PCIe card, but that’s unlikely.
  • I want to put in a 10Gbit network card.
  • Finally… lots of fans.

I haven’t looked into how much power all those components consume. But I’m guessing you just add them up and off you go.
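If it helps, here’s the “add them up” approach as a quick sketch; every wattage below is a placeholder guess for the parts I listed, not a datasheet figure, and the headroom multiplier is the rough one suggested above:

```python
# "Add them up and off you go" - all wattages are placeholder guesses for the
# parts listed above; substitute datasheet / product-manual numbers.

parts_w = {
    "CPU (assumed high-TDP Xeon)":    300,
    "Motherboard + RAM (assumed)":     80,
    "12x HDD @ ~8W each (assumed)":    96,
    "GPU (future RTX 4000 class)":    160,
    "2x NVMe + expansion card":        20,
    "10Gbit NIC":                      15,
    "Fans (lots of them)":             30,
}

total = sum(parts_w.values())
print(f"Estimated component total: {total} W")             # ~700 W with these guesses
print(f"ATX 3.0 rule-of-thumb PSU: {total * 1.75:.0f} W")  # headroom per twin_savage
```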

@gnif That’s the first time I’ve heard the term “disk shelf”. I researched JBODs before; I’m not at that storage capacity requirement yet, but that was the route I wanted to take. A cursory look suggests a disk shelf is just a smaller version of a JBOD? Either way, my plan was to get a case that could support 24 drives and then get a JBOD for subsequent storage needs.

JBOD is not directly tied to any kind of hardware, but rather how the disks logically appear to your OS. Hardware RAID, for instance, would present the configured logical RAID volumes to the OS, but JBOD would present the physical disks to the OS to manage directly.

A disk shelf is literally just that, a shelf of disks with a PSU and some kind of interface to another system, most commonly SAS. How you decide to interface the disk shelf to the system determines if it’s JBOD or RAID.

You can usually get cheap used disk shelves for $100-200 AUD without disks; you might have to spend extra to get the caddies as they are often not sold with them. In the end you get an enterprise-grade solution with dual redundant power supplies capable of talking to both SATA and SAS drives (which are cheaper on the used market) and a properly engineered cooling solution, as 24 spinning drives generate a ton of heat.

I think you are mixing up JBOD with NAS (network-attached storage). JBOD is literally just a bunch of disks, not the hardware to operate them or share them out. You would still need a motherboard, CPU, RAM, etc. to run them.

The disk shelf also handles staggered spin up so as to prevent PSU overload at initial power on, and gives you lots of nice features like drive indicator LEDs that you can turn on via the command line, so you can identify which physical drive to pull if one has failed and needs replacing.

I have done the whole stack-of-disks-in-a-big-case thing; after moving to a disk shelf, I would never go back. Cooling is more of an issue than you would think with this many drives. It’s also really nice to just have a single SAS cable or two from the disk shelf to the server rather than having to mess about with 24 SAS/SATA connectors and cabling.

Quoting Computerz:

‘@CHESSTUR I haven’t finalised all the details yet, just worrying about the CPU right now, one thing at a time. I’ll definitely be getting some nice PSU, not trying to cheap out at all but don’t want to get some overpowered beast. Don’t want to waste money. Basically, my use is quite simple:

My mobo can support 12 SATA drives out of the box which is what I’ll be getting to start out with.
If or when I expand, I’ll add another 12 drives as I’ll be using ZFS. So I’ll need 2 more HBAs for that.
I’ve got a Quadro K4000 graphics card which I want to use for compute. According to spec sheets, it consumes 80W max.’

Okay, so tack another 100 Watts onto that 450 figure and start adding the wattage per hard drive and the rest of the peripherals you’ll be using. It’s easy to find yourself in the ballpark of, say, 750 to 800 Watts nominal. This is why I recommend erring on the side of caution. You may be happy with only 24 drives for the present, but what about future proofing? With the smart PSUs they use today it’s not like your system will always be eating that much power anyway, unless you’re running everything full bore 24/7, including your CPU. What would you call an overpowered beast? A modest Threadripper would tear my beast to shreds.

I run 34 drives on one PC. 4 of those drives are external. I use them for backups. So, 30 inside the case. The case cost me around $300 Canadian. (Admittedly, that was a good deal back then.) Here’s the case:

So now, the math:

  • 12 TB EXOS under sled #5 (1), 10 TB HGST in sled #5 (2).
  • Sleds 1-4: Toshiba CMR drives, 2x 3TB and 2x 4TB, both pairs in RAID 1. That’s a total of 6.
  • A small RAID cage in the front for RAID 0 (8).
  • Another RAID cage in the front connected to an LSI HBA card in IR mode, sporting 4 Velociraptors and 4 SSDs in RAID 10 (that makes another 8), so now we’re at 16 drives.
  • Open the door on the case and there’s another cage inside supporting an additional 6 drives. That’s 22.
  • In addition to this I have another 8 drives natively connected to SATA, all of them SSDs because they’re slim and can sit nice and neatly on top of my Blu-ray CD/DVD ROM drive, which in itself is yet another drive. So, 22 + 9 = 31.
  • Then there are my two NVMe drives, so that’s 33.

Now, I just recalled that I unplugged an SSD from my HBA IT card because port 6 kept hijacking the port 0 position in my interface for some mysterious reason, so I’m sitting at 33 drives (not counting the 4 external hard drives I use for backup). So all 33 drives are inside the case with one more port to spare and plenty of extra SATA power handy. All my drives are hot-plugged and a good many of them are in sleds.

Sure. It’s not a server case. It’s an EATX case and it serves my purposes. My drives are surprisingly cool except for the NVMe, which I have little control over. (I suppose I could invest in some thermal pads.) I made several modifications in the way of air cooling, but the whole thing runs amazingly cool and quiet compared to a server case. I never experience thermal throttling and my CPU is OC’d to a modest 4.2 GHz, which isn’t too shabby for an old 6900K chip.

So I recommend getting an old EATX case regardless of the size of your mobo, and a CPU that can give you 40 lanes of bandwidth (or more). You don’t have to go with the latest, greatest, bleeding-edge hardware to do this. In fact, if you shop around you can get a used CPU that can give you this sort of spread for a song. Some folks would call my PC an old relic. Others would call it an overpowered beast. I don’t care what they call it as long as it serves my needs.

There are plenty of people here that can help you with building a server. I’m just offering a little in the way of advice where I have direct experience and knowledge. I’ve had a lot of experience with hardware but when it comes to setting up servers I’ll defer to the experts here. I hope this helps. :slight_smile:

@CHESSTUR I can relate to your heritage as mine is similar, except I’m not in the oilfields :wink: (I work for a company that makes cables that occasionally make their way to the oilfields, mostly offshore ones :stuck_out_tongue: Apart from Maritime, our customer base includes Industry, Medical and Military. Chip-machine maker ASML is a customer of one of our customers, and we know our products end up in $$BIGMONEY stuff from ASML :wink: )

To assist aspiring home-labbers, here’s a list of what I bought to expand my “network-empire”. The prices you’ll see will differ from what I paid; really a case of YMMV!

HTH!


Fancy Intel speak for Sapphire Rapids Xeons.

I highly doubt it, very likely it’s just a simple Σ(TDP of all installed parts). The last time I used one of those PSU calculators had to have been ~20 years ago, back in the Pentium 3 days though.
Any old-timers remember this site?:

That’s funny, I went through the exact same decision point; I decided that for the money I could cobble together a much better solution than what Supermicro was offering (at least with the chassis, I ended up using a Supermicro motherboard).

Motherboards very rarely have staggered start options, although I don’t expect you to have power trouble with 12 HDDs on even a relatively weak 20A 5V rail power supply. And for reference, an XPG Fusion Titanium 1600W PSU can only do 20A on the 5V rail; those high wattage numbers are coming from the 12V rail.

I’m going to make a post about 5V rail power usage when I do my build log. I’m going to run 30 HDDs off of a Corsair HX850i power supply; the interesting thing is that there’s a software interface to the PSU that will measure how much amperage is being used on the different rails. If I get uncomfortably close to the 25A limit I’m going to get another power supply.
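For anyone wanting to poke at those rail readings themselves, something like the sketch below should work; this assumes the open-source liquidctl project and its support for Corsair’s HXi series, so treat it as illustrative rather than a verified monitoring script:

```python
# Sketch: read per-rail output figures from a Corsair HXi PSU via liquidctl.
# Assumes liquidctl is installed and supports this PSU model.
from liquidctl import find_liquidctl_devices

for dev in find_liquidctl_devices():
    if "HX" not in dev.description:        # crude filter for the HXi PSU
        continue
    with dev.connect():
        dev.initialize()
        for key, value, unit in dev.get_status():
            if "current" in key.lower():   # e.g. "+5V output current"
                print(f"{key}: {value} {unit}")
```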

I would be very interested in watching that video. Be sure to post it here, in L1T, on a main thread so I don’t miss it. I’ve never regretted purchasing my 1kW Corsair Platinum PSU; I always knew I’d be putting it to good use. I’m also wondering what sort of UPS you would recommend I use, as my 1000 Watt sine wave is getting up there in years and I’m thinking of passing it on to one of my daughters.

  • CPU: Intel 6900K X-Series Broadwell-E. For a time they were selling for around $2000 Canadian but you can get them used for about $200 now.
  • Mainboard: ASUS X99-E WS/USB 3.1 (Thunderbolt 3 ready). I do use TB and perhaps that’s why I’m in no rush to RAID my NVMe, as it is fast enough for my purposes.
  • RAM: 128GB Corsair Dominator Platinum DDR4 (Samsung B-die). No complaints here. Probably the most stable RAM I’ve ever owned.
  • PSU: Corsair HX1000i (Platinum), purchased when Corsair wasn’t just a name and actually made quality products.
  • NVMe: Western Digital Black SN770. Good value for the money and much faster than the Corsair garbage I bought. Never bothered to RAID the stuff. Really don’t have a need to.
  • CPU cooler: Noctua. Wish it could have been the NH-D15 that I gave to my wife, but it wouldn’t fit right because of my double banks of RAM on both sides of the board. Had to special order my cooler from Austria because the US and Canada didn’t have it in stock anywhere I looked. Not sure about the model # but it seems to work well with my shotgun cooling design and modified fan array.
  • Controller cards: internal RAID (IR) controller/HBA card: LSI/AVAGO/BROADCOM 9207. You can buy these for a song on Amazon and eBay. They will work alongside Intel’s soft RAID without conflict, giving you more flexibility in your choice of RAID. Internal passive (IT) controller/HBA card: LSI/AVAGO/BROADCOM flashed for 9207, giving me further expansion; it works alongside the IR card without conflict. In addition to this I also use the native Intel software RAID, giving me two extra arrays in RAID 1 and a single RAID 0 array.

I plan to make a thread on the actual build as soon as the case is complete; I also want to benchmark an apples-to-apples (as much as possible) hardware RAID vs ZFS setup… I expect the results will surprise some.

About UPSs: for home use, the overwhelming majority of people are only running “line interactive” UPSs that don’t really condition the power at all until something really bad like a brownout, blackout or surge happens, so the sine-wave-yness of the UPS doesn’t really help you until a power fault occurs (and even then some of the bad power is let through to the computer before tripping over to the sine wave inverter).

What a lot of people think their UPSs do is what a “double conversion” UPS (sometimes also called an online UPS) does, just most people don’t have them.
I am actually running a double conversion UPS, but I have it in line interactive mode because it is only 92% efficient in double conversion mode while 98% efficient in line interactive mode.
If I knew I had sketchier power in my area I wouldn’t make that trade off for efficiency.
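To put a rough number on that efficiency trade-off, here’s a quick back-of-the-envelope calculation; the load and electricity price are made-up assumptions:

```python
# Rough cost of double conversion vs line interactive mode, with assumed numbers.
LOAD_W     = 400      # assumed steady draw of the server
PRICE_KWH  = 0.30     # assumed electricity price, $/kWh
HOURS_YEAR = 24 * 365

def wall_draw(load_w: float, efficiency: float) -> float:
    """Watts pulled from the wall to deliver load_w at a given UPS efficiency."""
    return load_w / efficiency

extra_w = wall_draw(LOAD_W, 0.92) - wall_draw(LOAD_W, 0.98)   # ~27 W of extra loss
extra_kwh = extra_w * HOURS_YEAR / 1000
print(f"Extra loss: {extra_w:.0f} W, about {extra_kwh:.0f} kWh/yr "
      f"(~${extra_kwh * PRICE_KWH:.0f}/yr)")
```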

Will be looking forward to it. :slight_smile:

I don’t think I have this double conversion deal you mentioned, although my UPS does “talk” to my system. Brownouts and power surges are common in my area and my UPS has ‘saved the day’ more times than I care to count. I’m running six of them in my computer room but only two are sine wave. I have noticed anomalies with the other PCs on reboot after a power outage using the standard square wave UPSs, so the sine wave jobbies must be doing something right. The other PCs are mostly for my daughters so they’re not as crucial (although the eldest of the three is getting into coding and wants to get her degree in computer science so…). I was just wondering how essential it is to run a sine wave UPS.

@gnif No no, I think I have the gist of what a JBOD is. I meant to say that I want to get a server in a case that supports 24 drives and then get a JBOD/disk shelf and connect it to the server via HBAs when additional storage needs arise. But thanks, I’ll add that to my watch list for the future.

Yes, cooling is something I’m mindful of. That’s why I wanted to get a 24-drive 4U case, because those take larger fans, so you can spin them at lower RPMs and still get decent cooling. I watched a video where the guy had half his bays populated and he staggered his drives for maximum airflow and minimal parasitic warming, otherwise they were overheating. I often wonder how JBODs/disk shelves cool their drives, because they are packed in tight! Having 24 drives in close proximity is toasty enough; having them in many rows one behind the other means one thing to me: you need an air-conditioned room, otherwise you’ve got no hope of keeping temps down.

@CHESSTUR I’ve already purchased a mobo and a CPU, going enterprise grade as much as possible. It’s overkill for today’s needs, but I hope it hums along for a long time. If all the parts arrive I’ll have 80 PCIe lanes to play with, so it should be future-proof enough for any upgrades that I may desire, which is why I’m going this route in the first place. Otherwise I would have just got an ordinary consumer grade mobo and CPU, but I wanted those lanes.

@twin_savage SPR, got it, thanks. Yes, it sounds plausible that they don’t know the figures themselves. Speaking of your case build, I saw that thread a few days ago and I finally put two and two together and realised that it’s your project.

I’m planning on getting a UPS. From what I’m reading, NVMe + ZFS isn’t a good mix if you don’t have a UPS and you get a power outage. Plus, with the amount of money that I’m sinking into this, I think a UPS starts to make sense.

That’s awesome to hear! Myself, being your average PC enthusiast (okay, maybe not-so-average), I’m happy with 40 lanes. This is partly why I recommended going EATX, as you didn’t want to shell out for a ridiculously overpriced server case. So many make the mistake you wisely avoided, because you knew you needed the bandwidth. You won’t need those annoyingly loud server fans either, if you do it right. My drives all run surprisingly cool in this case without having to resort to screaming server fans. I’m thinking the huge intake fan on the side panel and the equally huge exhaust fan at the top have something to do with it. 33 drives in this unit serve me well.

I neglected to give you the front view of my Abomination in that picture I shared so I’ll post one now. Keep us posted too. You have an interesting project in the making. :smiley:

@CHESSTUR I remember when that case came out. It looked super futuristic. I wanted one for a long time.