Dell PowerEdge R7615 configuration and OS choice

Hi there,

I’m looking to buy a workstation from Dell. I found the Threadripper workstation “Precision 7875”, which would fit my use case (CFD), but for about the same price I could also go for a rack server, the “PowerEdge R7615”.
Since I hope to continue this work and build a small server next year, and since it would also let me go further and play with Genoa-X, I want to go for the EPYC rack server, but I have some questions regarding configuration and OS.

I plan to configure it with:

  • EPYC 9384X, 32 cores with 3D V-Cache
  • 12 RDIMMs (for maximum bandwidth), 384 or 768 GB
  • 800 W redundant PSUs

Unfortunately it seems I can’t select the PCIe riser since the CPU power draw is too high…
Do you know if it can be added anyway?

If I buy RAM from another vendor, are there spec limitations to make sure it will boot?
At least RDIMMs at 4800 MT/s, but can I go higher? What sub-timings?

Then storage, I’m lost here.
For CFD I would want an OK-ish boot drive, a fairly fast drive to save unsteady simulation output, and good old spinning rust for “archive”.
On a workstation I would have chosen M.2 (SATA should be fine too) for the boot and performance drives, and a SATA hard drive for archive.
There are a lot of SAS drives and I don’t know what to choose; I don’t see how to use M.2 there, or maybe U.2.
There is Dell’s BOSS thing; what does it provide?
The thing is that I want this server to provide backups for other workstations too.
I might want to buy drives from elsewhere to bring the price down a bit.
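To give an idea of what the “performant” drive has to sustain, here is a rough back-of-the-envelope sizing; the snapshot size, output interval and run length below are placeholder numbers, not my actual case:

```python
# Rough sizing of the "performant" drive for unsteady CFD output.
# All three inputs are placeholders -- substitute your own mesh and output settings.

snapshot_gb = 2.0          # size of one unsteady snapshot written to disk, in GB
output_interval_s = 10.0   # wall-clock seconds between snapshots
run_hours = 48             # length of one unsteady run

sustained_write_mb_s = snapshot_gb * 1024 / output_interval_s
total_tb = snapshot_gb * (run_hours * 3600 / output_interval_s) / 1024

print(f"Sustained write needed : {sustained_write_mb_s:.0f} MB/s")
print(f"Capacity for one run   : {total_tb:.2f} TB")
```

With those placeholder numbers it comes out around 200 MB/s sustained and over 30 TB per run; the real figures are what decide between a single SAS SSD and something faster.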

Finally, which OS?
I want to get rid of Windows. There are some Linux options, but by default they are only provided with a Dell subscription; do you know it? Would you recommend it?
Or should I install it myself, and if so which distribution: Debian, Fedora, something more tailored to server environments?
And how do I move from this single rack server to a small computational cluster, 3 nodes or so, next year?
I might open a different topic for those points, but any advice is welcome.

I hope this topic is readable enough, thanks for your help.

PS: I’m from France. I think I have to shop from Dell, but do you know of other server vendors to compare, and also an administration service (installation and monitoring)?

Debian is the solid option. If you need a more up-to-date kernel, Fedora.

That was exactly what I intended when writing them.
Do you know if EPYC X is going to be OK with a Debian LTS? It might be too fresh, isn’t it?

If I have to do the installation and maintenance, I am more used to Debian flavours, but a new distro is a good opportunity to learn.

Are there some folks here used to Dell servers and their policies?

Especially regarding RAM, storage, connectors, and user modifications with non-Dell hardware?

I was also wondering, is 5200 MT/s achievable on 12 DIMMs?
I mean, it is supported on Threadripper, so it might work on EPYC too.
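To put a number on why the speed matters to me, here is the quick theoretical-bandwidth calculation I keep doing (assuming the usual 64-bit DDR5 channel, i.e. 8 bytes per transfer; sustained throughput will of course be lower):

```python
# Theoretical peak memory bandwidth: channels x transfer rate x 8 bytes per transfer.
# DDR5 RDIMM channels are 64 bits (8 bytes) wide; real-world throughput will be lower.

def peak_bandwidth_gb_s(channels: int, mt_s: int) -> float:
    return channels * mt_s * 8 / 1000  # MT/s * 8 B = MB/s, then convert to GB/s

for speed in (4800, 5200, 5600):
    print(f"12 channels @ {speed} MT/s : {peak_bandwidth_gb_s(12, speed):.0f} GB/s")
```

So 4800 vs 5200 MT/s is roughly 461 vs 499 GB/s peak, which is not nothing for a memory-bound CFD solver.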

Will you be using this system from the same room in which it is located? I haven’t used those exact workstations or servers, but I have used similar workstations and servers from Dell. If you are going to be in the same room, I recommend a workstation. Servers tend to be loud and boot very slowly in my experience. They have those small high-speed fans that get very loud.

It will be in a separate room with AC.
You are right, don’t work next to a server; it is not intended to share your room and stay quiet.
Workstations are noisy enough under load…

I think I got through the configuration of the Dell PowerEdge, and I may even go for the 2P variant.
I was wondering if the BOSS thing was mandatory, as the salesman told me.
I mean I’m fine with software RAID, or even without RAID, since it is for testing purposes and not the final build.
But without this card I have to use one of the front storage bays to boot, I guess.

For the distribution, my IT advised me to go for either Rocky Linux or Ubuntu Server.
Any thoughts on this one?

The current LTS release of Debian is 10 “Buster”, which installs kernel 4.19 by default. This is very old in comparison to Fedora 39 (shipped with kernel 6.5) or even the next Debian LTS, 11 “Bullseye” (as of Aug. 15, 2024), which ships 5.10.

I think you want at least a 5.x kernel for this CPU, but I’m not sure of the minimum point release.

There are newer kernels available in the backports repository for Debian, but they are not as well tested and I don’t know how recent the builds are there. You can view the repos online though.
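Once you have an OS on it, checking the running kernel against whatever minimum you settle on is trivial; the 5.14 below is only a placeholder, not a documented requirement for Genoa-X:

```python
# Compare the running kernel version against a chosen minimum.
# MIN_KERNEL is a placeholder threshold, not an authoritative requirement.
import platform

MIN_KERNEL = (5, 14)  # placeholder -- adjust to whatever floor you decide you need

release = platform.release()                              # e.g. "6.1.0-18-amd64"
running = tuple(int(x) for x in release.split("-")[0].split(".")[:2])

verdict = "OK" if running >= MIN_KERNEL else "older than target"
print(f"Running kernel {release}: {verdict} "
      f"(target >= {'.'.join(map(str, MIN_KERNEL))})")
```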

Dell servers are as accommodating to 3rd party hardware as any, and there is a healthy after-market for upgrades and pulled parts. Just search eBay for “R7615 384GB” or similar.

You can plug in most any off-the-shelf SATA or SAS drive, but that’s assuming you get a proper backplane with your server that supports the number and type of drives you want to use. If you want higher performance, an eBay search for “R7615 nvme” will give you some options.
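Once the machine is up, you can also confirm from the OS which transport each drive is actually using; a small sketch around `lsblk` (part of util-linux, present on any mainstream distro):

```python
# List physical disks with the transport they use (sata, sas, nvme, usb, ...).
# Thin wrapper around `lsblk`, which ships with util-linux.
import json
import subprocess

out = subprocess.run(
    ["lsblk", "-d", "-J", "-o", "NAME,TRAN,SIZE,MODEL"],  # -d: whole disks, -J: JSON
    capture_output=True, text=True, check=True,
).stdout

for disk in json.loads(out)["blockdevices"]:
    name = disk.get("name") or "?"
    tran = disk.get("tran") or "?"    # often empty for virtual disks behind a RAID controller
    size = disk.get("size") or "?"
    model = disk.get("model") or ""
    print(f"{name:<10} {tran:<6} {size:>9}  {model}")
```

Note that drives hidden behind a hardware RAID virtual disk will only show up as the controller’s virtual device, so the transport column is most useful with drives in pass-through/non-RAID mode.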

As it happens, there was a write-up not long ago on an unsupported method of using NVMe in much older Dell servers, which might be a good read for you about server mods and options, even if you don’t go that way:

There are limitations to servers in general… You aren’t going to find SATA power connectors dangling around in the case available for you to power accessories.

As for OSes, you’ll find a good array of options, and better Linux support than desktops or workstations:

Supported operating systems
The PowerEdge R7615 system supports the following operating systems:
● Canonical Ubuntu Server LTS
● Microsoft Windows Server with Hyper-V
● Red Hat Enterprise Linux
● SUSE Linux Enterprise Server
● VMware ESXi
For more information, go to www.dell.com/ossupport.

Source: https://dl.dell.com/content/manual24757900-dell-poweredge-r7615-installation-and-service-manual.pdf?language=en-us

@compy386 I was dumbly thinking there was a Debian 12 LTS, so that one is easy. I want the best hardware support possible “straight” out of the box.
Combining that with my IT’s advice and Dell’s OS list, the choice is more between Ubuntu Server and a RHEL flavour.

@rcxb many thanks for this information; I’m not used to server parts and the proprietary side is my biggest concern.
The second is that I might miss the right option to let me add something later, or not know which connector it is. I see Dell has a lot of small connectors.

For the SAS bays I opted for a modest 8-bay SAS backplane, since it should be enough for this project and I think it is the limit anyway with this CPU. The controller is a “PERC H355”.
If I want to use M.2 or U.2, I would need a 3.5" adapter and put it in a front SAS bay, wouldn’t I?
I want to get rid of the BOSS thing for this build if I can.

I also asked for the PCIe riser, even though it is not listed as compatible with the CPU. I don’t know if it is space- or thermally constrained, but since this part is for testing purposes only, I will figure it out anyway.
It should come with a power cable on top of the PCIe riser, right?

I think I’m close to the final configuration.
Would you recommend anything in particular?

NVMe drives do slide into front drive bays, but they’re not SAS. This is where buying a server with the wrong backplane will bite you, which is why I mentioned it.

You’ll notice that SATA/SAS and NVMe are two different Backplane options in their server customizer: PowerEdge R7615 Rack Server | Dell USA

And you did well to mention it; I know they are different connectors, but I totally missed the NVMe backplane option!
Since I chose the 3.5" bays first, the NVMe option disappeared.

Unfortunately it seems you have to choose either SAS or NVMe. I think I will still go with SAS for this one and will see how I manage future nodes.
To get an SSD I have to choose the BOSS thing, which gives me an M.2 for the system, or go with a SAS SSD.

Yeah, it is annoying that to this day Dell does not have any M.2 slots on the system board for the OS/boot device. It’s either lose a drive bay somewhere or a PCIe slot for the BOSS card (which, BTW, uses AHCI rather than NVMe devices).

Little update here,

I’m almost done, but I’m considering buying RAM from a third party; any recommendations on this topic?
I was planning 12 × 32 GB DDR5 RDIMMs at 4800 MT/s in Dell’s configurator; I will stick with that to ensure compatibility, unless you tell me third-party is fine.
Any model in mind?

I would love to go higher to maximize bandwidth, but I’m afraid it will not POST.
Dell’s configurator only allows 5600 MT/s on the 96 GB stick variant and doesn’t go higher, but it is not mentioned in the service manual I’m reading and I’m quite unsure about 24 DIMMs.
Any thoughts?

Those thousands of bucks saved could be invested in some GPU later on. :slight_smile:

I will (just once) repeat my prior advice:

If you don’t find any vendors selling higher speed RAM kits, there’s probably a reason why not. These are not some newly released model of gaming mobo, they are a common platform that has been out a while. Lots of people have the same hardware and have financial incentive to figure out what works properly in it.

I suggest limiting yourself to what Dell or 3rd-party vendors say will work, and not risking significant money on experiments. Servers don’t really try to offer/support a wide range of upgrade options; they just get (exhaustively) tested with a few configurations that are known good.

I have had good luck with RAM from memory.net

Sometimes they have had the best prices I could find anywhere.

I hadn’t understood your advice that way; you are right, it is a good idea, but I don’t know how representative it is for the R7625, which seems quite recent to me and not as common.
I found only a few systems sold, but all of them were at 4800 MT/s.

I understand your point of view and kind of agree, but at the same time I’m here on a forum with enthusiastic and knowledgeable people; I think it is the right place to think further and to find people who have tried it.
Of course, for the moment my safe plan is to get 4800 MT/s RDIMMs.

@Quetzalcoatl thank you; unfortunately I am tied to my company’s reseller…

Regarding the memory speed for the R7615 or similar:

Will the system boot with 5600 sticks? Will the BIOS down-clock them even if the stick doesn’t have a lower speed profile? Is it possible to manually configure speed and latencies in these BIOSes?

For example, there are new and more affordable 2R 96 GB 5600 modules which were released for the Intel platform, such as:
Dell PN AC828668
Cisco PN UCS-MRX96G2RF3
Generic Kingston KSM56R46BD4PMI

Didn’t see your message until now.
I have only had the system for a little while; it is just configured and tested for my use case.
I installed Ubuntu Server and the process was seamless.

Yeah, I saw the 96 GB 5600 DIMMs, but they are too much memory (and too expensive) for my usage.
I would bite on 32 GB 5600 but not more. Plus, I’m not sure the configurator allows 24 × 96 GB DIMMs, which could indicate that you have to use fewer channels to reach higher memory speeds.

Unfortunately I don’t have any 5600 MT/s DIMMs to test that.
I can only say that the system works with Kingston 4800 MT/s third-party memory, and that if you want to save some money you can mix the Dell memory and the Kingston memory.
And I found the BIOS pretty poor for advanced tweaking, but that’s nothing new. I can’t remember seeing RAM timing controls, for example, but there is a drop-down menu for RAM speed I think.
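If you want to check from the OS what speed the DIMMs actually run at, rather than digging through the BIOS, something like this should do it; it parses `dmidecode -t memory` (needs root), and note that older dmidecode versions print “Configured Clock Speed” instead of “Configured Memory Speed”:

```python
# Report rated vs. configured speed for every populated DIMM slot.
# Parses `dmidecode -t memory`; run as root.
import subprocess

out = subprocess.run(
    ["dmidecode", "-t", "memory"], capture_output=True, text=True, check=True
).stdout

# Each "Memory Device" block in the SMBIOS table describes one DIMM slot.
for block in out.split("Memory Device")[1:]:
    fields = {}
    for line in block.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    if "No Module Installed" in fields.get("Size", ""):
        continue  # empty slot
    configured = fields.get("Configured Memory Speed") or fields.get("Configured Clock Speed", "?")
    print(f"{fields.get('Locator', '?'):<8} {fields.get('Manufacturer', '?'):<14} "
          f"{fields.get('Size', '?'):<8} rated {fields.get('Speed', '?'):<12} running {configured}")
```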

Funny thing: I’ve started noticing Dell listing more 5600 MT/s DIMMs (in different capacities) in its configurator since September.
They haven’t updated the documentation, so I still don’t know if it is stable in a 12-channel configuration.
I will try to get an answer to clarify this; unfortunately, I already bought mine.

It seems a message has been deleted here.
I have confirmation from Dell’s sales rep that 12 channels are supported at 5600 MT/s in every capacity, even though the documentation has not been updated.

Do you have any advice for networking between 3 nodes? I see 10/25 GbE Broadcom NICs with RSS/TSF functions that seem like a good alternative to InfiniBand for a small cluster. I also found that you need a dedicated switch starting from 3 nodes.
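In the meantime, a first sanity check of whatever NICs end up in the nodes is easy to script as a point-to-point throughput test; a minimal sketch (port and transfer size are arbitrary, and it is no substitute for iperf3 or a proper MPI benchmark):

```python
# Minimal point-to-point TCP throughput test between two nodes.
# Run `python3 linktest.py server` on one node and
# `python3 linktest.py client <server-ip>` on the other.
import socket
import sys
import time

PORT = 5201        # arbitrary port, change if it clashes with something
CHUNK = 1 << 20    # 1 MiB per send
TOTAL = 2 << 30    # send 2 GiB in total

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.monotonic() - start
    print(f"received {received / 2**20:.0f} MiB from {addr[0]} "
          f"at {received / 2**20 / elapsed:.0f} MiB/s")

def client(host: str) -> None:
    payload = bytes(CHUNK)
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
    elapsed = time.monotonic() - start
    print(f"sent {sent / 2**20:.0f} MiB at {sent / 2**20 / elapsed:.0f} MiB/s")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        sys.exit("usage: linktest.py server | linktest.py client <server-ip>")
```

Run the server half on one node and the client half on the other; a result far below the nominal link rate is worth investigating before blaming the software stack.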