Asus Pro WRX80E-SAGE-WIFI - SATA drives invisible on Ubuntu

I have an Asus Pro WRX80E-SAGE-WIFI with both NVMe and SATA drives installed.

If I boot into Windows I see all the drives.
If I boot into Ubuntu 20.04 or 22.04 I do not see the SATA drives, only the NVMe.

The SATA controller is marked as UNCLAIMED: “product: ASM1062 Serial ATA Controller”

Does anyone have any advice? I know these boards are uncommon enough that maybe no one has had the same experience.
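
For reference, the UNCLAIMED status comes from lshw; a quick way to double-check which kernel driver (if any) has bound each SATA controller is lspci -k. A minimal sketch (the grep pattern is just my assumption for filtering; it's guarded in case pciutils isn't installed):

```shell
# Show each SATA controller and its bound driver. A device with no
# "Kernel driver in use:" line is unclaimed, matching lshw's UNCLAIMED.
# Guarded so the snippet degrades gracefully where pciutils is missing.
if command -v lspci >/dev/null 2>&1; then
    sata_info=$(lspci -nnk | grep -iA3 'sata' || true)
else
    sata_info="pciutils not installed; try: sudo lshw -class storage"
fi
echo "${sata_info:-no SATA controllers listed}"
```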

What BIOS and BMC FW versions are you running? I ran into this issue when I updated to the latest BIOS (version 1003). I went all the way back to 0211 and it worked again. Same exact issue on Ubuntu 20.04 and 22.04. It worked in Fedora 36 and Windows, but any Debian variant I threw at it was a no-go.


I’m running Debian 11 with backports enabled and didn’t notice this issue when I updated to BIOS version 1003 and BMC FW version 1.17.0.

rgysi@zephir:~/$ lspci | grep SATA
27:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
28:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
2c:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
rgysi@zephir:~/$

rgysi@zephir:~$ uname -a
Linux zephir 5.15.0-0.bpo.3-amd64 #1 SMP Debian 5.15.15-2~bpo11+1 (2022-02-03) x86_64 GNU/Linux
rgysi@zephir:~$

I’m having the same issue (Ubuntu doesn’t see any of the attached SATA drives) on Ubuntu 20.04. The issue popped up when I updated to BIOS version 1003 while updating the BMC firmware; everything had been running fine up until then. I tried downgrading the BIOS using both the IPMI interface and the EZ Flash tool, and both say the firmware file is invalid / doesn’t pass the checksum validation. I tried every published version of the firmware posted on ASUS’s site and none of them worked. When I ran lspci | grep SATA my results were the same as rgysi’s. What method did you use to downgrade your firmware? Do you have any suggestions?

Created an account just so that I could reply in this thread :)
I had the exact same issue: 3 SATA drives and 1 NVMe (M.2), none of them seen by any Linux distro (Ubuntu 18.04/20.04/22.04, Debian, etc.). Proxmox and Unraid naturally failed as well. The only OSes that could see them were ESXi and Windows.
I tried and verified all sorts of combinations of BIOS settings for this board and nothing worked. Heck, I even pulled the NVMe drive, installed a distro, then put it back in, hoping some sort of ‘magic’ would happen… nope.
I then stumbled on this thread about the BIOS, downloaded 0701, and tried to flash via EZ Flash. It failed. I tried many times, using different USB sticks, re-downloading the file multiple times, etc. Every attempt failed.
Flashing via IPMI hit the same issue: failed. The only thing that worked was the BIOS FlashBack button.
I copied the BIOS file onto a USB stick per ASUS’s instructions, put the drive in the labeled port on the back, pressed the button for 3 seconds until the blue light turned on and started blinking… then held my breath. When the light stopped blinking, I pulled the drive and booted.
The system powered on but failed to POST! I freaked out a little, but then it rebooted on its own, and on the second attempt it POSTed… woohoo!
I installed Proxmox 7.2 and was able to see all of the drives.
Naturally, in the BIOS, I didn’t use RAID and made sure SATA was set to AHCI.
Hopefully this helps someone else down the road; it’s a very niche market and a unique issue. No idea how folks got ‘lucky’ and had this work out of the box.

For posterity: I was able to fix this by adding

pci=nommconf

to the syslinux configuration on my Unraid setup. This let me keep the newest firmware and still fix the issue described above. It also appears to work on Ubuntu, per this thread: [Solved] Ubuntu 22.04 on Asus WRX80E Sage not detecting USB and M.2 - #3 by zzsf
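
For Ubuntu users, the same parameter goes on the kernel command line via GRUB. A sketch, assuming the stock Ubuntu layout (a GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub); it runs against a scratch copy first so you can eyeball the edit before touching the real file:

```shell
# Append pci=nommconf to GRUB_CMDLINE_LINUX_DEFAULT. Done on a scratch
# copy here; repeat the same sed against /etc/default/grub once it looks right.
cfg=$(mktemp)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > "$cfg"
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 pci=nommconf"/' "$cfg"
cat "$cfg"
# For the real file:
#   sudo sed -i '<same expression>' /etc/default/grub
#   sudo update-grub   # then reboot
# Verify after reboot with: grep -o pci=nommconf /proc/cmdline
```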


I also ran into this issue, and the workaround via pci=nommconf worked, so I can boot again via my USB/NVMe adapter.

But I realized that the IOMMU groups are “bad”: the VGA adapters are fine with separate groups, but most of the onboard devices end up lumped into the same group.
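
For anyone who wants to check their own grouping: the kernel exposes IOMMU groups under /sys/kernel/iommu_groups, and a short loop will print them. A sketch below; the function is what you'd run for real, and the demo part walks a throwaway fake tree (the PCI addresses just mirror the ASM1062 controllers from the lspci output earlier in this thread) so the loop can be tried anywhere:

```shell
# list_iommu_groups ROOT: print each IOMMU group and the PCI devices in it.
# Real usage: list_iommu_groups /sys/kernel/iommu_groups
list_iommu_groups() {
    for group in "$1"/*/; do
        [ -d "$group" ] || continue
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group"devices/*; do
            [ -e "$dev" ] && echo "  $(basename "$dev")"
        done
    done
}

# Demo on a fake sysfs tree; on a real box the groups come from the kernel.
demo=$(mktemp -d)
mkdir -p "$demo/7/devices/0000:27:00.0" "$demo/7/devices/0000:28:00.0"
list_iommu_groups "$demo"
rm -rf "$demo"
```

If passthrough matters to you, devices sharing a group have to be passed through together, which is why the grouping change is painful.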

I think pci=nommconf disables the AER/ACS capabilities; can anyone confirm?

greetz nik

I actually have a similar issue. I thought I had fixed everything at first, but then one of my existing VMs wouldn’t boot because one of my three GPUs will no longer pass through. As soon as I take pci=nommconf out of my syslinux configuration and reboot, all the VMs work again (but no SATA drives). I have two machines with this motherboard and the problem is the same on both.

Do you use Proxmox?

I sometimes had similar problems: if I started another VM/GPU, the other one could also be started again. I have now updated the kernel to Linux 6.0 (pve-edge-kernel), which has almost completely fixed these problems.