Power supplies built to the ATX 3.0 spec should handle power excursions much better than the old ATX 2.x PSUs. The difference is roughly 125% vs. 150%/200% power excursion tolerance between the two specs, depending on whether the 3.0 PSU has native 12VHPWR ports, so a unit rated for 200% excursions has to ride out brief spikes of double its rated wattage.
The VDDQ voltage preference makes me think the IO die is sensitive to voltage, like it was on consumer Ryzen, where extra voltage would actually reduce memory stability fairly early in the voltage ramp.
Those are pretty good memory bandwidth numbers considering the 8-channel Threadrippers are "only" achieving ~230-260GB/s copy with double the number of channels.
There definitely seems to be a bottleneck in the Infinity Fabric at high memory bandwidth. Intel is running into this problem too with its mesh fabric, where the 4-channel Xeons have better memory scaling than the 8-channel variants.
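If you want to sanity-check your own machine's numbers, here is a minimal STREAM-style copy test (my own sketch, not one of the benchmarks quoted above). Plain numpy is single-threaded, so this shows roughly one core's share of bandwidth rather than the full multi-channel figure; the real STREAM benchmark spreads the copy across all cores with OpenMP.

```python
import time
import numpy as np

# Buffers much larger than cache so we measure DRAM, not L3.
N = 1 << 28  # 2^28 doubles = 2 GiB per array
a = np.ones(N, dtype=np.float64)
b = np.empty_like(a)

best = float("inf")
for _ in range(5):
    t0 = time.perf_counter()
    np.copyto(b, a)  # the STREAM "copy" kernel: b[i] = a[i]
    best = min(best, time.perf_counter() - t0)

# Copy moves two bytes of traffic per element byte: one read, one write.
print(f"copy bandwidth: {2 * a.nbytes / best / 1e9:.1f} GB/s")
```

Scale the single-core figure by core count only loosely; as the Threadripper numbers above show, bandwidth stops scaling well before you run out of cores.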
I'm not sure what is going on with the scheduling, but I don't trust Windows to do a good job at it. Linux is consistently producing better solve times than Windows on the same hardware, usually in the neighborhood of 10-30% faster.
The CFD benchmark has fewer single-threaded and memory-intensive sections than the CFD-EM benchmark, so the latter is going to be trickier to schedule with all the context switching going on, but even then the scheduler should never be putting two threads on the same core.
This seems like a situation where setting NPS (NUMA nodes per socket) to 2 or 4 in the BIOS might improve performance, because it'll force Windows to spread processes out more evenly.
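On Linux you can also take the gamble out of scheduling by pinning the solver to one hardware thread per physical core yourself. A rough sketch, assuming the common Linux enumeration where logical CPUs 0..N-1 are the first SMT thread of each core and the siblings come after (check with `lscpu -e` before trusting it):

```python
import os

# Restrict this process (and the solver threads it spawns) to one
# hardware thread per physical core, so no two threads share a core.
# ASSUMPTION: SMT is on and siblings are numbered after the physical
# cores, which is the usual Linux layout; verify with `lscpu -e`.
physical_cores = os.cpu_count() // 2  # e.g. 32 on a 7975WX (64 threads)
os.sched_setaffinity(0, range(physical_cores))
print("pinned to CPUs:", sorted(os.sched_getaffinity(0)))
```

Launch the solver from this process (or use taskset/numactl with the same CPU list) and the SMT-sibling problem goes away regardless of what the scheduler wants to do.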
Hi there. Yesterday I got the exact same Threadripper CPU with the Noctua cooler, plus the same motherboard and RAM kit, for a new build, with a be quiet! Pro 1300W PSU. It boots and Windows 11 installed fine. However, I cannot seem to get all the RAM detected: the first boot with all sticks installed showed only 64GB. I then tried different RAM configurations and managed to get up to 96GB, and also up to 128GB, so basically only 2-4 of the memory channels, all on the same side of the motherboard, are working. Does anyone have any experience with this to share? It has given three technicians headaches. Each of the 8 RAM sticks works on its own in the working slots, so it's not a kit issue…
How certain are you that the CPU is making good enough contact with the socket pins? This socket seems to be fairly sensitive to non-ideal mounting pressure.
Also, it seems some people have gotten motherboards that need not only the heatsink mounting screws tightened to the proper torque, but the socket bracket's Torx screws that mate with the backplate as well.
Hi everyone!
I'm planning on buying & building the following specs over the next 2 months for a CFD & FEM workstation.
The specs are:
Operating System: Windows 11 Professional Edition
CPU Type: AMD Ryzen Threadripper PRO 7975WX 32-Cores
Motherboard: Asus Pro WS WRX90E-SAGE SE
Memory: 8x Kingston KF560R32RBEK8-128 DDR5-6000 (128GB RAM)
Videocard: NVIDIA RTX 5000 Ada Generation
NVMe: SK Hynix Platinum P51 2TB x2
CPU Cooler: ???
Case: ???
PSU: X Mighty Platinum 2000; will upgrade to X Mighty Platinum 2800 when released
I would like any additional advice you might have on this build, especially on which CPU cooler is best and which case you recommend. The intended use is CFD and FEM analysis for aerospace applications.
I've been building and modifying HPC servers and workstations for the past 6 years, but this is a very expensive build and I'm the one paying, so I'm looking for any advice I can get before purchasing.
For reference: after several months, the CPU and motherboard were finally deemed DOA and I received my money back. Still no resolution on the root cause, though; I've ordered new parts from another supplier.
Good thing I kept reading; I was going to post that it's most likely the PSU. I had an identical noise on an Asus X299 workstation and found an EPS 8-pin connector melting on the PSU side.
It was a Silverstone PSU that died on the X299 rig. I won't be going back. I was going to give the Corsair a run, but maybe I should lean Seasonic instead. Is everything still running well?
Yep, everything is still running smoothly, though I dropped Windows and installed Linux. Also added an NVIDIA 4000 SFF as the primary GPU; the 6000 is now purely for compute.
Just a little update post on changes the machine is undergoing and a new thing to check if you are having issues.
New thing (for me) to check if you have issues: PWM fan breakout boards.
I have been having issues ever since I built the machine with rebooting successfully. It wasn't a major issue, but still annoying when it came time to load a new kernel or something.
Long story short, while rewiring the fans in the case to support the AIO cooler I installed, I disconnected the breakout board and boom! Reboots were suddenly working normally.
Hardware changes so far:
NVIDIA 4000 SFF
256GB kit swapped for a V-Color 768GB kit
Replaced the Noctua cooler with a Silverstone AIO cooler
Air duct to help cool the RAM (original 3D model provided by @wendell, printed locally)
ASUS Hyper M.2 x16 Gen5 card
4x Crucial T705 4TB NVMe
Now I am just waiting for some new GPUs with hopefully a major VRAM increase.
I think it makes the back fan quieter if anything. Already up to a v0.2 though, and I will probably do a v0.3. The alignment of the inlets is better in the current version but not perfect, and I want to extend them further over the sticks.
Pardon the dusty cable management, but here is the v0.2. I added some air channeling on the front to deflect some of the incoming front-fan air and disperse it over the memory a bit more. It runs a few degrees cooler than the original design.
If anyone is a fluid dynamics whiz and has pointers, I'm all ears.
Hrmm, just had another thought: I could extend just the blades on the inlet and add mounting points for some tiny fans, 80mm or so, and point them at the RAM. Now to see if there are any low-noise fans in that general size range.
If you give me the dimensions of the planes for the fan mount and RAM outlets, I can probably run one of those topology optimizations for flow that generates the really weird, natural-looking shapes. I haven't actually done one before, but I've seen colleagues do it, so how hard could it be?