How to get RTX 4090 into a rack server?

I hope this is the right forum category; the enterprise section didn't look like the place for this kind of thing.

What I want is to use those sweet Nvidia GPUs for some (many) simple calculations. GPUs seem to fit the task well. But the enterprise-grade / data-center cards are too expensive to make GPUs viable; at that price it would be better to just buy many CPUs.
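For a sense of what "many simple calculations" on a GPU looks like, here is a minimal Python sketch (my illustration of the approach, not anything specific to this build). It assumes the CuPy library for the GPU path and falls back to NumPy on the CPU when no CUDA stack is available:

```python
# Minimal sketch: batch many independent simple calculations on a GPU.
# Assumes CuPy (pip install cupy) on a machine with an NVIDIA GPU;
# falls back to NumPy on the CPU so the same code runs anywhere.
import numpy as np

try:
    import cupy as xp  # GPU path
except ImportError:
    xp = np            # CPU fallback

# One million independent "simple calculations" in a single batched call.
x = xp.arange(1_000_000, dtype=xp.float32)
y = xp.sqrt(x) * 2.0 + 1.0

print(float(y[0]), float(y[-1]))
```

The point is that the per-element work is trivial; the GPU wins by doing all of it in one wide, batched operation instead of a CPU loop.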

So, how would one cool a couple of Nvidia RTX 4090s in a rack server?
It should fit 2, 4, maybe 6 GPUs.


… Liquid Cooling?

Would it be a reasonable effort to assemble?
And would/could it be reliable enough to put in a data center, very far away?

If you want reliable and certified, get datacenter GPUs.

There are GPU servers from several companies that use liquid cooling to get consumer GPUs into a rack server, Comino being one of them. They seem to sell 4x 4090 servers now.

You can’t fit 6+ GPUs on a normal board. You need specialized boards or a mining-rack setup to do this.


Does the software show a bias between CPU and GPU usage?
Do the programs involved not care about consumer vs. data-center variants?

That form factor you’d need to research, on point(s) of compromise:
Typical mainboards won’t physically/electrically support that many cards of girth.
Server/workstation mainboards won’t physically support that many cards of girth.

Likely you would need to string up a bunch of riser cables and space out the GPUs appropriately.
Your workspace would need to make such accommodations [see mining rigs].

The cleaner option being specialized rack cases that will have the necessary riser/daughter boards.
May not need to be the latest platform, depending on its PCIe / power cabling / cooling / etc.

The software likely cares, but I see it more as a fun opportunity to make the software run on GPUs.

Regarding the mainboards not being able to support the cards: that is something I had not considered before. It sounds somewhat problematic if the cards are too heavy, etc. But electrically it should be fine from a motherboard point of view, no? Isn’t it just PCIe like any other PCIe?

Well, certified is not really what I am looking for. Reliable? Well, I assume the GPUs would work as well as they do elsewhere, in gaming rigs for example, in which case reliability is not an issue with the GPUs themselves.

Reliability concerns come more into play with a potentially custom water-cooling solution.

Thanks for the Comino link, I had not heard about them before!

Comino is really good. However, they are damn expensive.

As all enterprise-grade stuff is.

Build quality matters, and you really don’t want stuff leaking in a rack densely packed to the brim with other equally expensive items.


Yeah, leaking onto other expensive equipment would be bad.

Most typical mainboards aren’t working with many PCIe lanes or physical slots.
Once you go past two (2) GPUs, you’ll not just be physically defeated, but the PCIe divvying will be CRIPPLING.

It’ll take the likes of a workstation/server platform to make such accommodations… to a point.
It depends on the mainboard: some may have full PCIe delivery but only 2 x16 physically,
whereas others may physically fit more but not be fully electric [6 slots, only 2 with full x16].
The platform also dictates how many PCIe lanes you’ll have to work with.
Physically, these chungus GPUs will overlap multiple plausible PCIe slots
when directly connected to the slot [let’s say 2 at best, with what’s been presented so far].

Entertain riser cables of suitable lengths and have your GPUs basically hovering.
Your rig could very well end up like this, with some elbow grease [suitable AC may be required].

How? Big rack-mount chassis… the end.

What you’re describing is not going to be an inexpensive DIY system. Six 4090s will need an insane amount of power delivery and will draw up to 2700W from the wall, if you can even pack that into a single system. That’s for the GPUs only and is more than most residential circuits will even support, so I hope this isn’t for use at home.
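The power math above can be sketched out in a few lines. The GPU figure follows from the 4090's 450W rated board power; the platform overhead and the 15A/120V residential circuit are my assumptions for illustration:

```python
# Back-of-envelope power budget for a 6x RTX 4090 box (assumed figures).
GPU_TDP_W = 450        # RTX 4090 rated board power
N_GPUS = 6
PLATFORM_W = 500       # CPUs, RAM, fans, drives -- a rough guess

gpu_draw = GPU_TDP_W * N_GPUS      # GPUs alone: 2700 W
total = gpu_draw + PLATFORM_W      # whole-system estimate

# A typical 15 A / 120 V residential circuit tops out at:
circuit_limit = 15 * 120           # 1800 W

print(gpu_draw, total, total > circuit_limit)
```

Even before adding platform overhead, the GPUs by themselves blow past what a single ordinary household circuit can deliver.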

Supermicro has a handful of 4U, 6U, and 8U GPU rack chassis with up to 3000W of power delivery that will handle six 4090s. GPU Accelerated Servers for AI, ML and HPC | Supermicro

- A chassis like the CSE-418G2TS is what you’re looking for, but they don’t sell these individually, so you’re looking at buying a complete barebones system; they run around $7,000 IIRC, which isn’t bad all things considered.

If you want to do this ‘cheaply’, then I think you would be better off getting several smaller systems, running them as a compute cluster, and adding more as necessary instead of trying to stuff everything into one chassis.

FWIW, you would absolutely have to put this in a 4U server. It will definitely not fit in 2U, and I doubt it would fit in 3U. Most good enterprise vendors like HP, Dell, Cisco, Lenovo, or Supermicro make GPU servers in 2U… and they typically only have one 6- or 8-pin power connector available per slot. So you would likely have to build out something semi-custom.

You could fit 2 of them in something like this…maybe…
CX4170a | Sliger

You would need Noctua iPPC 3000 RPM fans, not the normal 1500 RPM ones, though, lol.

I second the Comino products. Their prices are actually cheaper than EKWB when it comes to water blocks, and the quality is in line with some of the best. I was going to use one of their waterblocks for my GPU, but I did not learn about Comino until later.

One caveat about Comino products: you need to make sure they offer a water block for the card you’re looking at. I had done a LOT of research on Comino and am right now looking at one of their motherboard waterblocks for my next build. When it comes to the 4090, Comino only deals with Gigabyte and Nvidia waterblocks, and the cards those blocks fit generally cost more. So if it were me, considering a rack server and wanting to really take advantage of the marketplace, I would pick the 3090 Nvidia Founders Edition. You can get them for almost pennies on the dollar right now, and the Comino waterblocks are about $250 each. I know you mentioned the 4090, but you could load up a rack server at almost 2-to-3-to-one.
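The 2-to-3-to-one claim is easy to sanity-check. Here is a back-of-envelope sketch; the card prices are my assumptions for illustration (only the ~$250 waterblock figure comes from the post above):

```python
# Cost per water-cooled card, using assumed street prices (not quotes).
price_4090 = 1800        # USD, assumed new street price
price_3090_used = 700    # USD, assumed used Founders Edition price
waterblock = 250         # Comino block price mentioned above

cost_4090 = price_4090 + waterblock
cost_3090 = price_3090_used + waterblock

ratio = cost_4090 / cost_3090
print(cost_4090, cost_3090, round(ratio, 2))
```

With these numbers you land a bit over 2x, consistent with the 2-to-3 range; the real ratio obviously moves with the used-market prices you can actually find.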

From reading the replies in this thread, I now understand that what I previously thought was a cheap alternative is actually quite difficult/expensive to pull off. Going with many small systems is looking increasingly attractive.
