Pro WS WRX90E-SAGE SE, Code 62/VGA LED error with 7 Hyper M.2 Gen 5 cards using HMB-enabled NVMe SSDs

Hello all. I am having an issue that I have been unable to troubleshoot on my own, and I am hoping that someone here might have some suggestions.

I have purchased the ASUS Pro WS WRX90E-SAGE SE motherboard and am experiencing an issue when using multiple Hyper M.2 x16 Gen5 cards populated with DRAM-less SSDs.

According to the documentation for both devices, I should be able to install 30 PCIe 4.0 SSDs across the 7 Hyper M.2 x16 Gen5 cards plus the 4 onboard M.2 slots. I can install 30 of any PCIe 4.0 SSD that includes onboard DRAM (e.g., Corsair MP600, Samsung 980 Pro, etc.). However, if I install any model of PCIe 4.0 SSD that does NOT include DRAM and instead uses Host Memory Buffer (HMB), the system will hard-lock with an error during POST once more than 14 such drives are installed, in any configuration/slot. This does not occur on the first boot after installing the SSDs, but it will occur on the first cold/warm reboot after that.
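
For anyone trying to reproduce this: a quick way to tell whether a given drive is HMB-based is to read the HMPRE/HMMIN fields from its NVMe Identify Controller data. A minimal sketch, assuming Linux with nvme-cli installed (run as root; the lowercase JSON key names are my assumption about nvme-cli's output format):

```python
import json
import subprocess

def hmb_info(dev: str) -> dict:
    """Read Identify Controller data for one NVMe device via nvme-cli.

    HMPRE/HMMIN are the Host Memory Buffer preferred/minimum sizes in
    4 KiB units; both zero means the drive does not request HMB,
    i.e. it is a DRAM-equipped design (or simply doesn't use HMB).
    """
    out = subprocess.run(
        ["nvme", "id-ctrl", dev, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    ctrl = json.loads(out)
    hmpre = ctrl.get("hmpre", 0)  # preferred HMB size, 4 KiB units
    hmmin = ctrl.get("hmmin", 0)  # minimum HMB size, 4 KiB units
    return {
        "dev": dev,
        "uses_hmb": hmpre > 0,
        "hmb_preferred_mib": hmpre * 4 // 1024,
        "hmb_min_mib": hmmin * 4 // 1024,
    }

if __name__ == "__main__":
    # Adjust the device list to match your system.
    for dev in ("/dev/nvme0", "/dev/nvme1"):
        print(hmb_info(dev))
```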

Once the system is hard-locked, it requires a full BIOS Recovery (BIOS Flashback) process to get it into a working state again. It doesn't matter which slots/Hyper cards/etc. the SSDs are installed into: as soon as I install the 15th HMB-enabled drive into any slot, the client fails to POST with the VGA LED lit and CODE 62 on the LED panel. No amount of removing/re-seating hardware and rebooting has gotten the client back into a working state. Once the BIOS Recovery has been performed and only 14 (or fewer) HMB-enabled SSDs are installed, it POSTs again without issue. Without the BIOS Recovery, the client fails almost immediately to the VGA LED/CODE 62 on every boot cycle.

Supported M.2 Device Layout
Onboard M.2: 4x PCIe 4.0
Hyper M.2 cards 1-5: 20x PCIe 4.0 (5 cards, 4 drives each)
Hyper M.2 card 6: 2x PCIe 4.0 (slot PCIe_6 is limited to x8, so x4+x4 = 2 drives)
Hyper M.2 card 7: 4x PCIe 4.0 (1 card, 4 drives)
Total: 30 drives
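
For what it's worth, the drive/lane math above checks out; a quick sanity check (the per-slot figures are my reading of the layout above, not a spec quote):

```python
# Drive counts per location, as listed above. Every drive runs at x4,
# so 30 drives consume 120 PCIe lanes in total.
layout = {
    "onboard_m2": 4,
    "hyper_cards_1_to_5": 20,  # 5 cards x 4 drives each
    "hyper_card_6": 2,         # x8 slot bifurcated x4+x4 -> 2 drives
    "hyper_card_7": 4,
}

total_drives = sum(layout.values())
total_lanes = total_drives * 4
assert total_drives == 30
print(f"{total_drives} drives over {total_lanes} PCIe lanes")  # 30 over 120
```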

ASUS Pro WS WRX90E-SAGE SE
AMD Ryzen Threadripper PRO 7965WX
32GB (2x16GB), Samsung M321R2GA3PB0-CWMXJ
Add-In Cards:
7x Hyper M.2 Gen5

Here is a list of BIOS settings that I have enabled/disabled to try to address this issue.

Disabled:
Redfish Support
Resizable Bar
SR-IOV
HD AUDIO
10GB LAN1
10GB LAN2
Power Delivery S5
WIFI Controller
Bluetooth Controller
Serial Port
USB4 Controller
FAST BOOT

Enabled:
PCIe ARI Support
PCIe ARI Enumeration
PCIe Ten Bit Tag Support

The issue persists across all released BIOS versions, up to and including 1203.

I wonder if your problem is actually related to HMB, as that is usually not a BIOS matter but an OS/driver one. Also, you may have overlooked in your calculation that PCIe slot 6 only supports x4+x4 bifurcation, so you'd have to remove 2 SSDs from that card. While you don't need to enable RAID mode for bifurcation with the original ASUS Hyper card, this might be necessary for 3rd-party cards: BIOS → Advanced → Onboard Device Configuration. RAID here does not necessarily mean an actual RAID configuration; it enables bifurcation for that slot.

Thank you for your feedback, Wullewack.

I wonder if your problem is actually related to HMB, as that is usually not a BIOS matter but an OS/driver one.
JL: I agree, but it's the only single factor I have been able to narrow this issue down to. When using any model of SSD that includes DRAM, I can follow the guidelines from the manuals and get all 30 drives up and running correctly. Without DRAM, anything over 14 drives triggers the issue, leading to a BIOS recovery and reset.

Also, you may have overlooked in your calculation that PCIe slot 6 only supports x4+x4 bifurcation, so you'd have to remove 2 SSDs from that card.
JL: That is how slot 6 is configured now; I only run 2 SSDs on the Hyper M.2 card in that slot. Typically, though, I use this slot for a NIC, since the onboard 10Gb LAN doesn't support Wake-on-LAN.

While you don't need to enable RAID mode for bifurcation with the original ASUS Hyper card, this might be necessary for 3rd-party cards: BIOS → Advanced → Onboard Device Configuration. RAID here does not necessarily mean an actual RAID configuration; it enables bifurcation for that slot.
JL: I have enabled RAID mode on all slots, as it is required for the Gen5 cards.
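
In case it helps anyone verify that bifurcation actually took effect after flipping those RAID switches: a minimal sketch (Linux, plain sysfs, no extra tooling) that counts the NVMe controllers that enumerated and groups them by the root port they sit behind. A slot whose bifurcation didn't apply will simply come up short here:

```python
import os
from collections import Counter

SYSFS_NVME = "/sys/class/nvme"

by_root = Counter()
for name in sorted(os.listdir(SYSFS_NVME)):
    # Resolve the controller's sysfs node to its PCI device path, e.g.
    # /sys/devices/pci0000:00/0000:00:01.1/0000:01:00.0
    pci_path = os.path.realpath(os.path.join(SYSFS_NVME, name, "device"))
    # PCI addresses contain two colons; the first hop below the root
    # complex is the root port the controller hangs off.
    hops = [p for p in pci_path.split("/") if p.count(":") == 2]
    root = hops[0] if hops else "unknown"
    by_root[root] += 1
    print(f"{name}: {pci_path}")

print(f"\n{sum(by_root.values())} NVMe controllers total")
for root, n in sorted(by_root.items()):
    print(f"  {n} behind root port {root}")
```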

I can confirm this is a “BIOS thing” and not directly related to the fact that OP installed a plethora of SSDs to the detriment of the board.

I was able, to my chagrin, to produce these same results while engaged in a misguided attempt to enable Secure Launch on an ASUS Pro WS WRX90E-SAGE SE (which I am convinced is not possible with the board in its current state).

Here’s how you can “soft lock” your WRX90E-SAGE SE, necessitating a firmware reload via BIOS Flashback.

  1. Baseline Configuration and Preparation
  • Enable VBS, HVCI, and Credential Guard via Machine Group Policy (no UEFI Lock)

  • Verify all virtualization prerequisites (UEFI Mode, Secure Boot, CSM Disabled, SVM, IOMMU, Transparent Secure Memory Encryption [TSME])

  • Result: Credential Guard and VBS running successfully, Kernel DMA Protection enabled and functioning.

  2. Initial Secure Launch Configuration
  • Enable “Secure Launch” via Machine Group Policy

  • Confirm msinfo32 shows Secure Launch as Configured (a registry check for these policies is sketched after this walkthrough)

  • IOMMU set to Enabled

  • Pre-boot DMA Protection and Kernel DMA Protection Indicator set to Enabled

  • DRTM Virtual Device Support and DRTM Memory Reservation set to Enabled

  • SEV-SNP Support set to Enabled

  • SMEE set to Enabled

  • SVM Enable set to Enabled

  • SVM Lock left on Auto

  • SVM-SNP Support set to Enabled

  • SNP Memory (RMP Table) Coverage initially left on Auto

Results:

  • System boots, Secure Launch not running

  • AMD DRTM Boot Device driver stub visible in Device Manager

  • Kernel-Boot Events 208 and 235 show Measured Boot failed (TPM Attestation = Not Ready)

  • Windows Events 51/45 show successful key provisioning/sealing, but TPM access fails

  3. Progressive Activation of SEV-SNP Memory Coverage
  • Reboot with SNP Memory (RMP Table) Coverage Enabled

  • POST successfully completes

  • Windows boots, again logging Events 208/235; TPM Attestation still Not Ready

  • Observation: Secure Launch still not running

  4. Second Warm Reboot with Full Secure Launch Stack
  • No settings changed since the last successful boot; simply warm-rebooted again

Results:

  • System fails to POST, halts at POST code 62 (white VGA LED lit)

  • Requires BIOS Flashback recovery to restore POST

  • POST failure confirmed reproducible on a subsequent re-test of the same configuration
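
For anyone retracing the verification in step 2 without round-tripping through msinfo32: the Group Policy settings above land under HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard, with Secure Launch as the Scenarios\SystemGuard subkey. A minimal read-only sketch in Python on Windows (the value names are the documented DeviceGuard ones, but treat the exact set as an assumption):

```python
import winreg

DG = r"SYSTEM\CurrentControlSet\Control\DeviceGuard"

def read_value(subkey: str, name: str):
    """Return a registry value under HKLM, or None if it doesn't exist."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

# 1 means enabled-by-policy. These reflect what is *configured*, not
# what is actually running (the running state is the msinfo32 /
# Win32_DeviceGuard question).
print("VBS policy:          ", read_value(DG, "EnableVirtualizationBasedSecurity"))
print("Platform security:   ", read_value(DG, "RequirePlatformSecurityFeatures"))
print("Secure Launch policy:", read_value(DG + r"\Scenarios\SystemGuard", "Enabled"))
```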

BTW, I formally reported this BIOS bug to ASUS about 6 months ago and they were not impressed enough to do anything.

I wonder what, if anything, these two phenomena have in common, and whether they really lead to the same error or merely to the same error code because, in both cases, something (different) is being overwritten in the boot configuration.

Since the OP does not have a graphics card installed, the use of more than 14 HMB SSDs may somehow interfere with the management system, and Code 62 could simply mean that the BIOS does not find a VGA device to initialise or use. Have you tried a dedicated GPU? Of course, this is just speculation.

Have you tried a dedicated GPU?
JL: Yes, I have tried disabling the onboard VGA (via the switch on the motherboard) and installing a dedicated GPU. This did not resolve the Code 62 issue.

Anyway, if you are forced to do a full BIOS Recovery, something fatal must have happened to the configuration, and you should open a support ticket with ASUS. Even if the board/BIOS does not support more than 14 HMB SSDs, or if doing so requires some other settings, it should not virtually brick the board.

32GB of memory seems a bit light for that kind of setup?
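
If the worry is HMB specifically: each DRAM-less drive borrows a bounded slice of host RAM. A rough worked ceiling, assuming the Linux NVMe driver's default per-controller cap of 128 MiB (the max_host_mem_size_mb module parameter; actual grants depend on each drive's HMPRE/HMMIN request and on the OS, and are usually smaller):

```python
# Worst-case host RAM consumed by HMB across a fully populated board.
drives = 30
hmb_cap_mib = 128  # Linux nvme default cap per controller (assumed here)

total_gib = drives * hmb_cap_mib / 1024
print(f"worst case ~{total_gib:.2f} GiB of the 32 GiB installed")  # ~3.75 GiB
```

So even fully populated, HMB itself should only claim a few GiB; whether 32GB is light for the rest of the workload is a separate question.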