PowerEdge R900 for $200, worth it? Potential multi-GPU support for it?

Specs (from seller):

  • 4x Xeon E7450 (24 cores total)
  • 128GB DDR3 ECC
  • 4x 15,000 RPM drives, 465GB each
  • appears to have all its necessary bits

My use case:

  • Originally I wanted to stuff at least 4 GPUs in there for GPGPU/crypto, but I don’t see any GPU power connectors and the PCIe slots don’t look wide enough for a dual-slot GPU. This is my absolute main goal for this server; everything below is secondary.
  • If the above can’t be done, then it’ll probably be turned into a VM host running a NAS, multiple game servers, a VPN, maybe email, and similar things.
  • Power cost isn’t an issue; it’s included in my apartment. Noise might be annoying, though I might be able to figure out a spot to hide it.
2 Likes

Wow, that’s a kickass deal.

1 Like

Do you have any experience with this particular server, or know someone who might?

1 Like

I have an R710, so yes, I have some experience dealing with PowerEdge servers.

The 900 series are beasts. If what you say is true and you’re getting it for only $200 (is that with free shipping included?), that’s a steal, because these things go for like $700 used with a lot less spec-wise.

2 Likes

The server is actually local, so I might be able to convince the seller to drive it over for me (I don’t own a car). It’s good to hear the server’s got some value to it, so I probably will take it.

Do you know how well it might handle getting GPUs stuffed into it? A quick Google search didn’t pull up much info on it; pictures make it look like it doesn’t play well with dual-slot cards, and I couldn’t see how the GPUs could possibly be powered.

1 Like

this alone makes it worth it XD

3 Likes

These servers are mostly used as compute nodes, so this guy would make one hell of a VM box.

1 Like

Yeah, I was a bit shocked by its specs as well. If it can support something like 4x RX 570/RX 580/GTX 1060/GTX 1070, that alone would make this my absolute ultimate machine, which is why GPU support is a bit of a concern for me. If they don’t fit or if power delivery is an issue, I’ll try to use as many GPUs as possible regardless and just use it for VM tasks, as @Dynamic_Gravity suggested.

This will also be my first dive into enterprise server gear, so it’s a bit exciting for me.

1 Like

You will need a rack and rails for this beast.

1 Like

At most I have an IKEA Lack table that I can convert into a makeshift rack, but this machine will most likely either sit flat or on its side in a closet somewhere. Space is at a premium where I live.

1 Like

You could get a 4-post 12U rack (wheels?).

That thing is gonna be heavy to lug around.

1 Like

Yeah, I can imagine it’ll be a pain to carry. Eventually I do plan on getting a proper rack, though for now I’ll have to make do with something else. Now just isn’t a good time for a rack, for various reasons.

1 Like

For GPU mining it’s not ideal: a basic board with 5/6 PCIe slots and a low-power CPU in an open-case rig would be far more efficient, and wouldn’t be affected by whatever tasks you’re running on the above multi-CPU beastie. When I was mining ETH my rigs were consuming ~80W per GPU and were barely audible above ambient, even sat in my front room. That server will draw as much power just idling and will sound like a banshee :slight_smile:
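
In case anyone wonders how a rig gets down to ~80W per card: it’s mostly just capping the board power limit (ETH is memory-bound, so the hashrate barely drops). Purely as a sketch, assuming NVIDIA cards with nvidia-smi available (the script, its TARGET_WATTS value and helper names are illustrative, not from my actual rigs; AMD cards need their own tooling), something like this would cap every detected card before the miner starts:

```python
# Illustrative sketch only: cap each NVIDIA GPU's power limit via nvidia-smi.
# Assumes nvidia-smi is on PATH and this is run as root; many cards have a
# minimum limit above 80 W and will reject the request (check=True will raise).
import subprocess

TARGET_WATTS = 80  # per-GPU target, matching the ~80 W figure above

def gpu_indices():
    """Return the GPU indices reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"],
        text=True,
    )
    return [line.strip() for line in out.splitlines() if line.strip()]

def cap_power(index, watts):
    """Apply a power limit to a single GPU."""
    subprocess.run(["nvidia-smi", "-i", index, "-pl", str(watts)], check=True)

if __name__ == "__main__":
    for idx in gpu_indices():
        cap_power(idx, TARGET_WATTS)
        print(f"GPU {idx}: power limit set to {TARGET_WATTS} W")
```

The limit resets at reboot, so something like this has to run at startup; the cap is also what keeps the fans, and therefore the noise, down.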

Believe me, I’m literally sat by a server room now, and with the door shut I can hear the servers and air con running quite clearly. I have ear defenders for when I stand in there for any amount of time.

1 Like

I guess that’s fair. Due to space constraints I was hoping to essentially consolidate all my potential projects and use cases into a single, beastly machine. I was building a separate rig specifically for mining, but that one ran into unforeseen compatibility/usability issues.

Though it’s good to hear that the overall consensus is that this rig’s the deal of the century.

1 Like

Not to derail my own thread, but as an alternative I was looking into building a rig based around running multiple VMs and mining on the same machine. Cost was a huge issue, though, and it’s hard to find a case that supports 6-7 dual-slot cards and can still accommodate enough PSUs and CPU power. Any hardware suggestions?

1 Like

Yeah, in the computer room (also server room) where I work the ambient noise is 58 dB.

Servers and AC are loud. :confused:

1 Like

Yes, I know this is old, but hi folks. “Dell made some tough iron back in the day”, to quote a guy who posted the DRAC recovery procedure for these :)) Somewhere… maybe on a Dell forum.

I was just scouting the webz for some spares; my R900 died the other day. A PSU blew and was shorting out the other one too…
Probably an easy fix, like a FET… or four.

Anyhow, I’m a bit curious about OP’s experience and results. I bought mine around 2014-2016, for pretty much the same reasons, and bought GPUs for it a bit later.
It took a lot of hacking to get it working with only four modern GPUs… Initial tests with just two old Quadro cards got my hopes up, but it’s not easy.

It does not fit GPU cards without notching the PCIe slots on the motherboard and converting the double-width cards to passive cooling: removing shrouds, fans, and maybe the 12V connectors. The slot spacing is 1.5 standard slots, not 2! Using thin single-slot cards will make things easier.

The brackets need to be cut or removed and other means of securing the cards must be employed.

I’ve cut the lid and made a plastic extension to keep the airflow right and fit vertical 12V plugs. I might remake it from galvanised steel sheet… and rivet it. Maybe.

I took 12V directly from the PSU combiner board bus bars: removed some insulation and bolted on 2x2 12 AWG cables that went to a DIY 12V PDU, then to a set of 12V ATX cables about 6 inches long.
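
As a rough sanity check on that wiring (back-of-the-envelope, not measured): four RX 570-class cards at ~120-150 W each is about 40-50 A total at 12 V, so split across four 12 AWG runs that’s roughly 10-13 A per conductor, comfortably within spec for short chassis wiring.
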
I jumpered the PRSNT# pin on every PCIe slot used so the x16 cards would be detected in the x8 slots, poking wires directly into the slot and routing them around (see the PCIe pinout).
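
If anyone repeats that trick, double-check the pinout first, but as I recall PRSNT1# is pin A1 and PRSNT2# sits at the end of each link width (B17 for x1, B31 for x4, B48 for x8, B81 for x16), so for an x16 card in a notched x8 slot the bridge goes from A1 to B48.
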
With more than 4 PCIe GPU cards, none will be addressable correctly. I never tried dual-GPU cards, but they will probably behave the same, as there are not enough resources to handle that many devices on the PCIe bus, even with all the PEX chips in there. A modern video card is a complex device, a compute node in itself, with lots of extra stuff like HDMI audio, more than one video rendering block, etc.
None of the cards can actually drive a monitor: the CRTCs are detected but request an impossible window of address space. We’re stuck with remote access or the embedded ATI chip (32 or 64MB) over VGA.
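
My best guess at the root cause: this BIOS generation has no “above 4G decoding”, so every card’s BARs have to be mapped into the 32-bit MMIO window below 4 GB. Each modern GPU wants a 256 MB (or bigger) prefetchable BAR on top of its smaller ones, and with only around 1-2 GB of window shared with the rest of the bus, four cards is about where the allocator runs out of room.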

Today this machine remains more of a collector’s item, with its four RX 570 4GB cards, 32GB of RAM, and just 16 slow cores; it gets turned on just for a laugh or a hashcat/john grin :smiley:

It pulls 500W idling and 1.3kW at full load. The fans can’t be controlled from userland, so I just trigger the intrusion switch to get full-speed cooling, which is needed when the GPUs are at full load.
I had 64GB of RAM in 2GB sticks but downgraded to 4x 8GB to save power (16W per stick).
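
If that 16 W per stick figure holds, going from 32 sticks down to 4 saves on the order of 28 × 16 ≈ 450 W, so the downgrade was well worth it.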

These were never meant for mining or GPUs. A lot of Dremeling, soldering, and firmware fudging is needed. And a PERC battery will cost you a limb!
But if taken care of, it will still boot after WW3.
And if anybody has ideas for hacking around the address space issue, I’m still open, LOL!
I still have a wet dream of fitting 7 water-cooled cards in it :)))

1 Like

Please don’t bump a thread that’s 4 years old

especially knowingly

If you have that much to say, make a new thread of your own.