I recently picked one of these up. I'm currently running a 12600 non-K with it to test it out. I bought it used, so I'm going through a few checks to make sure everything is working. There is an odd issue with the CPU power connector LEDs remaining illuminated even though I have the 24-pin and both EPS connectors attached to the power supply. I can get into the BIOS just fine and it displays all the correct data. It's just sitting on a motherboard box right now, and I'm going to try to move it into a case with some drives tonight.
Before I do that, does anyone else have experience with this motherboard? I've tried two different PSUs, and the lights remain illuminated as soon as the 24-pin is connected.
Is this still relevant to you? I have this board but I’d have to pull the server out of the rack to check. Could do this on Saturday. I’m running a 12600K and a Seasonic SSP-500JS.
I actually got this motherboard - new - and paired it with an i5-12500 (no suffix) and 128GB of ECC UDIMM DDR5 RAM (Kingston Server Premier).
I don't know if I am just super unlucky or if something is really wrong with my setup, but both the original motherboard I bought and the current one I got as a replacement seem to have issues.
Configuration:
ASRockRack W680D4U-2L2T/G5 motherboard
Intel i5-12500 (no suffix)
Fan 1: Noctua NH-U12S redux CPU fan
Fan 2: SuperMicro SC731-i404b chassis fan
RAM: 4x 32GB DDR5 UDIMM ECC memory (KSM48E40BD8KI-32HA). Note: while KSM48E40BD8KI-32HA is indeed not on the ASRockRack W680D4U-2L2T/G5 Memory QVL, KSM48E40BD8KM-32HM is, and according to Kingston the former (32HA) replaces the latter (32HM), see this link
PSU: Seasonic TX-650 PSU (650W, 80+ Titanium)
Alternative PSU: FSP FSP400-70AAGGBM PSU (which was included in my SuperMicro SC731-i404b case)
Optional components (not relevant for most issues I am currently facing):
ChiefTec CP-2131SAS – SATA backplane with an active fan
3* WD Ultrastar DC SA620 SATA SSDs (two of them are 480GB, one is 960GB) inside that SATA backplane
4* Samsung PM983 NVMe U.2 SSDs, each connected via an OCuLink → U.2 cable (SuperMicro CBL-SAST-0956)
Issues:
CPU_PLUG1 and DIMM_PLUG1 illuminate in red. They light up red even before I turn the system on - just supplying power is enough for them to illuminate.
I checked the RAM using Memtest86 v11 Pro (with ECC support) and even after 3 hours (2 passes) the RAM did not have a single error.
On the original motherboard I was receiving Correctable Errors en masse, but with the replacement one I have no issues.
I had issues when trying to run the RAM at 4800MT/s (as Kingston specifies) instead of the 3600MT/s the motherboard recognized it as. I'd get a Dr. Debug code 63 (CPU seated?), but after a CMOS reset the issues were gone.
On the current (replacement) motherboard I am constantly receiving VOLT_VDD2 alerts stating that it is apparently at 1.26V (instead of being lower) and that this is out of bounds. I thought that perhaps a faulty PSU was the culprit, but swapping in the alternative PSU did not help.
Neither of the fans reports its RPM properly - not on the current replacement motherboard, nor on the one I originally bought. However, the sensors do react (with a notification in the BMC, as expected) to e.g. blocking the fan blades.
Random restarts (the most annoying issue). I have no idea what's causing them, but TrueNAS Scale 24 restarts every 6 to 18 hours or so. Proxmox restarts sometimes too, but it's less predictable. I have read about possible culprits such as the watchdog, NICs that are left unconnected, or ZFS on NVMe U.2 SSDs, but I honestly have absolutely no idea what's causing these. Neither dmesg, journalctl, nor the files under /var/log/ show any error that could explain it.
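For anyone wanting to chase the same thing on their own box, a rough sketch of where a restart usually leaves traces (assuming a systemd-based distro with a persistent journal and ipmitool set up for the BMC):

# reboot/shutdown records known to wtmp
last -x reboot shutdown | head
# errors from the previous boot (needs a persistent journal)
journalctl -b -1 -p err
# hardware events the BMC recorded around the time of the restart
ipmitool sel elist | tail -n 20

If all three come up empty, that points more towards a hard reset (watchdog, power, or a hardware fault) than an OS-level crash.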
Questions:
Does anyone know when those LEDs in the picture turn red?
Does anybody else have the same or similar issues?
Has anybody experienced similar issues with a different board and if so: what was the cause and the solution?
Thank you in advance! I am pretty frustrated, because this system should’ve replaced my old i5-3450 rig in June. Now it’s August and it doesn’t look like I’ll have a working replacement before October.
The motherboard IPMI always reports the fans as spinning at 1100 RPM; the reported fan speeds never change.
I can hear the fans spin up and down but IPMI does not report the changes. I have experimented with various fan settings in IPMI, e.g. Default, Customized, and Manual. The behavior of the reported fan speeds does not change.
Has anyone experienced this issue?
I have also tried IPMITool to get the raw sensor readings. Here is the output:
FAN1 | na | RPM | na | na | na | 100.000 | na | na | na
FAN2 | na | RPM | na | na | na | 100.000 | na | na | na
FAN3 | 1100.000 | RPM | ok | na | na | 100.000 | na | na | na
FAN4 | 1100.000 | RPM | ok | na | na | 100.000 | na | na | na
FAN5 | na | RPM | na | na | na | 100.000 | na | na | na
FAN6 | na | RPM | na | na | na | 100.000 | na | na | na
FAN7 | 1100.000 | RPM | ok | na | na | 100.000 | na | na | na
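(For anyone who wants to pull the same table: it looks like the standard sensor listing from ipmitool, something along the lines of

ipmitool sensor | grep -i fan

run locally, or with -I lanplus -H <bmc-ip> -U <user> -P <password> added to query the BMC over the network. The columns after the sensor name are reading, unit, status and the lower/upper thresholds.)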
I have 3 Noctua fans connected to this board.
All fans spin up and down according to an Open Loop Control table profile I had set up. I recently switched to a Closed Loop Control table, and the fans spin up and down accordingly.
I upgraded the BMC firmware to 4.02.0 and the BIOS to version 21.12. The sensor still reports all fans at 1100 RPM.
The IPMI logs occasionally report the following about each of the fans, with FAN4 being the only one that does not report such behavior:
I actually couldn’t. I have returned the motherboard, the RAM, the CPU and the cooler.
Since I have RMA'd both the motherboard AND the CPU, I suspect that the RAM was the culprit. However, after three months of troubleshooting I was tired and had no patience left.
If you have random restarts (which you will get if the RAM is faulty) that don't leave any traces in any of the logs, I'd try replacing the RAM, the CPU and the motherboard, in that order. I also replaced the PSU with a Seasonic TX-650 for good measure, but that didn't solve the issue.
Also note: in order to reset the red LEDs, you have to perform a CMOS reset. ASRockRack still hasn't told me what the LEDs actually indicate, but I don't care and won't ever touch an ASRockRack motherboard again.
I hope this helps.
P.S.: my W680D4U-2L2T/G5 didn't show proper fan speeds either - it was always 1100 RPM (on a very early BIOS) or 2200 RPM (on BIOS >20.03). But detecting a fan that doesn't spin and raising an event did work.
Luckily I moved on to an EPYC 8024P and a Gigabyte ME03-CE1 motherboard. It seems rock solid and consumes only a little more (with most of the power-saving features enabled) than my W680 setup did. Also, edac-util handles the DDR5 RDIMMs properly, whereas it didn't handle the DDR5 UDIMMs properly.
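If anyone wants to check what the kernel-side ECC reporting sees on their own system, a quick sanity check (assuming the edac-util package is installed and an EDAC driver is loaded for your memory controller) looks something like:

# confirm an EDAC driver is loaded at all
edac-util --status
# per-DIMM corrected/uncorrected error counters
edac-util --report=full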
Edit: one more thing that I missed in regards to the RAM:
The RAM is only supposed to run at 4800MT/s with 1 DIMM per channel (1DPC). With 2DPC the speed drops to 3600MT/s. Either way, regardless of the speed, the system just wasn't stable.
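A quick way to see what speed the DIMMs are actually being run at versus what they are rated for (just a sketch; needs dmidecode and root):

dmidecode -t memory | grep -E 'Speed|Part Number'

"Speed:" shows the DIMM's rated transfer rate, while "Configured Memory Speed:" shows what the board actually trained it to.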
The funniest part is that we have an almost identical configuration, except that I've got KSM48E40BD8KM-32HM and a different PSU.
Initially I did have some problems with the RAM, but I reset the BIOS completely to defaults, cleared the CMOS as well, and after enabling the ECC option in the BIOS the problems were gone, so far.
Anyhow, in the BMC I am not getting any alerts, nor random restarts (I've been running ESXi for a couple of days). Also FAN1 (CPU) is at 4600 RPM and FAN2, 3, 4 (exhaust fans) are at 3000-3200 RPM.
But I still have red LEDs for the CPU and RAM.
And now I don't know what to do… as I haven't RMA'd anything, lol…
Difficult to suggest something. I'd probably press ASRockRack - and your seller (if it's a company) - to clarify what these red LEDs actually mean (I believe they're CPU_PLUG1 and DIMM_PLUG1), as they are not mentioned anywhere in the manual as far as I know.
Also keep a close eye on random restarts; perhaps activate system events on restarts (meaning that an event will be inserted into the SELog on every reboot) and check for them regularly in your BMC.
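Checking the SEL from the OS side is quick if ipmitool is available (a sketch; add -I lanplus -H <bmc-ip> -U <user> -P <password> instead if you query the BMC over the network):

# human-readable system event log, newest entries at the bottom
ipmitool sel elist
# entry count and when the log was last cleared
ipmitool sel info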
In my case I could have saved myself the trouble of running Memtest86+ v11 Pro (with ECC features), as it did not show errors on either of the boards. On the first motherboard, though, the RAM did report correctable errors in the BMC (as critical events).
TL;DR: I recommend you…
… enable SELog for restarts - or keep an eye on whether a restart has happened (as in my case it did not leave any logs in the journal or anywhere else)
… ask ASRockRack (and your seller) to clarify what these red LEDs actually mean
… keep a close eye on sensors (VDD2 or similar)
My system usually crashed within 24-48 hours, so if you have a longer uptime I'd dare say you're safe, but YMMV.
Alternatively - if you can afford it or have one to try - put in a stick of any other DDR5 UDIMM and see whether the red light goes away after a CMOS reset (without the reset the red LEDs do not even attempt to clear). This way you'll rule out the RAM as the cause.
If your system runs fine for days or weeks, you should hopefully be fine.
Good luck man! It’s a great system if / once it works. If mine had been stable, I would not have picked the EPYC system I “upgraded” to.