ASUS X670E ProArt Creator problems

Hello folks!

I’ve just bought an ASUS X670E ProArt Creator board with a Ryzen 9 7950X3D, fully loaded with 192GB of RAM (4x 48GB). I updated the BIOS to the latest version before I started building, but then I ran into the following problems:

  1. The memory is rated at 6000MHz (Kingston Fury Renegade DDR5 6000MHz, CL32, KF560C32RSAK2-96). I tried setting the memory clock to 6000 in the BIOS, but as soon as I do it, save, and reboot, it doesn’t boot. It takes a few minutes and then it reboots, saying the system was “unstable” and asking me to go back to the BIOS. Since that was the only change I made, I reverted it, and it boots fine at 3400MHz. The highest value I was able to boot at was 4800MHz. The motherboard/CPU/memory combo is supposed to support 6000MHz. Any idea what may be wrong that prevents it from running at the proper speed?

  2. I’ve connected an RTX 3080 to the first PCIe slot (PCIEX16_1), which on its own is PCIe 5.0 x16; since the GPU is Gen 4, it will downgrade, which is fine. Running alone, there is no problem. However, when I add a Mellanox ConnectX-4 dual-port 25Gb NIC (MCX4121A-ACAT) to the second PCIe slot (PCIEX16_2), it boots up to the ASUS logo with the “press DEL to enter BIOS” message but never moves forward. As soon as I remove the NIC, it moves forward just fine. I tried booting with just the NIC, without the GPU, and got the same effect. This card is PCIe Gen 3 x8. According to the motherboard manual, when both x16 slots are used, each should run at x8. However, it doesn’t move forward. I’ve tried forcing PCIe Gen 3 on the second slot from the BIOS, but no luck either. Any ideas what may be wrong? The card itself is good: I put it in a Dell T7910 and it was detected straight away. Not sure if this is the case, but maybe this motherboard doesn’t like PCIe Gen 3 cards and I need a Gen 4 one?

Any light on those issues would be appreciated.

Thanks!

  1. You’re way out of spec: https://www.amd.com/en/products/apu/amd-ryzen-9-7950x3d lists 4x 2R DIMMs at 3600 (true for all AM5 CPUs). You might be able to push it a bit higher, but 6000 is very likely not going to happen. You can also verify what the memory actually trained at from the OS; see the sketch after this list.

  2. Have you enabled x8/x8 mode (I’m pretty sure it’s a BIOS toggle)? Does it boot if you put it in the 3rd PCIe slot? Have you updated the BIOS to the latest version?
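For the memory point, here’s a quick way to see what the DIMMs actually trained at under Linux (a minimal sketch, assuming dmidecode is installed and you run it as root; the field names can vary slightly between dmidecode versions):

```python
import subprocess

# Parse `dmidecode -t memory` for rated vs. actually-configured DIMM speeds.
# Needs root and the dmidecode package.
out = subprocess.run(
    ["dmidecode", "-t", "memory"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    line = line.strip()
    # "Speed" is the rated speed; "Configured Memory Speed" is what it trained at.
    if line.startswith(("Speed:", "Configured Memory Speed:")):
        print(line)
```

With XMP rejected, you’d expect to see the rated 6000 MT/s next to a configured 4800 MT/s or lower.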


Hello! Thanks for the reply.

Aha! Thanks for pointing that out. I guess the motherboard’s “marketing” material confused me; I didn’t check the asterisk. When I went to the memory QVL, it indeed only lists 2 populated slots at that speed. 4800MHz is fine and has been stable so far.

Correct me if I’m wrong, but isn’t x8/x8 a setting for PCIe bifurcation on the same slot?

Nonetheless, the options I have are: Slot 1 must be on “Auto” so that Slot 2 can be set to x8, or “RAID” (which is only used for their M.2 Hyper card).

I’ve also left it “hanging” for many minutes, and after that the boot proceeds. However, the OS doesn’t detect the PCIe card.
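In case it’s useful, this is roughly how I check from Linux whether the card enumerates at all: scan sysfs for Mellanox’s PCI vendor ID (a minimal sketch; 0x15b3 is the standard Mellanox vendor ID, everything else is stock sysfs):

```python
from pathlib import Path

MELLANOX_VENDOR = "0x15b3"  # Mellanox Technologies PCI vendor ID

# Walk every enumerated PCI function and report any Mellanox device,
# plus the link speed/width it actually negotiated.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if (dev / "vendor").read_text().strip() != MELLANOX_VENDOR:
        continue
    device = (dev / "device").read_text().strip()
    try:
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
    except OSError:
        speed, width = "unknown", "?"
    print(f"{dev.name}: device {device}, link {speed} x{width}")
```

If it prints nothing, the card never made it onto the bus, which points at a firmware/slot problem rather than a driver one.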

Yes, but that is effectively the same thing, since the 16 lanes of the first slot are shared with the second.

Is the firmware on your ConnectX card up to date? Alternatively, you could try enabling CSM in the BIOS?

I have no way to know, as the machine isn’t detecting it. What I can tell you is that the card apparently isn’t defective: I’ve put it in a T7910 running ESXi and it’s detected just fine. I have another one of these cards, which I also tried on the same motherboard. Same results.

It doesn’t let me enable CSM…

I can’t force x8 on the first slot. The “X16 Mode” and “GPU with M.2 Storage” options disable the second slot immediately.

If I leave it on Auto, then I can fix the second slot at x8.

One interesting thing: if I remove the M.2 that shares lanes with the 3rd slot and put the NIC in the 3rd slot, it boots.

ESXi shows both ports as Connected even though I have nothing plugged into them, so I suspect something weird will happen if I leave it in the 3rd slot, which is x4.

In other words, there is no way to set PCIEX16_1 to x8 manually. The options are “Auto Mode”, “PCIE X16 Mode” (which automatically disables PCIEX16_2), “PCIE RAID Mode”, and “GPU with M.2 Storage” (which also disables PCIEX16_2).

If I set Auto, RAID, or GPU with M.2 Storage on PCIEX16_1, then PCIEX16_2 lets me pick either “PCIE X8 Mode” or “PCIE RAID Mode”.

I’ve tried all possible combinations with no success.

The only way it moved forward and actually detected the card was by putting it in PCIEX16_3, which is x4, and only if I’m not using the M.2 slot above it; if I do, it drops to x2. Even without that M.2, x4 is kind of a bummer, as I’ll be using both 25Gb ports on the NIC (one for 25Gb RDMA, the other as a regular 10Gb link), and since both ports share the one PCIe link, x4 theoretically gives me roughly half the bandwidth the card gets at x8…
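Rough numbers, to make the x4 concern concrete (back-of-the-envelope only; the ConnectX-4 link is PCIe Gen 3, i.e. 8 GT/s per lane with 128b/130b encoding):

```python
# PCIe Gen 3: 8 GT/s per lane, 128b/130b encoding -> ~7.88 Gbit/s usable per lane.
GEN3_LANE_GBPS = 8 * (128 / 130)

PORT_CAPACITY_GBPS = 2 * 25  # both 25GbE ports share the single PCIe link

for lanes in (8, 4, 2):
    link = lanes * GEN3_LANE_GBPS
    print(f"x{lanes}: ~{link:.1f} Gbit/s link vs {PORT_CAPACITY_GBPS} Gbit/s port capacity")
```

That works out to roughly 63 Gbit/s at x8, 31.5 at x4, and 15.8 at x2, before protocol overhead; so even the planned 25Gb + 10Gb mix (~35 Gbit/s) would slightly oversubscribe an x4 link.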

Just got the latest FW for the card and started flashing it.

Note that in UEFI it says “N/A”.

After the update, it reports a proper UEFI version, and I was able to get past the boot screen; Linux started booting just fine. So that’s progress, right? Not quite…

The boot works fine if only the NIC is connected. As soon as I add the GPU to the mix, it appears to hang in the kernel at the exact line where mlx5_core reports finding the card:

Trying to figure out whether it’s something with the Linux driver that ships with Ubuntu or something else…
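To narrow it down, the first thing I’m doing is pulling just the mlx5 lines out of the kernel log (a trivial helper; dmesg may need sudo, and on a hard hang you’d have to capture this from a rescue boot or a serial/netconsole instead):

```python
import subprocess

# Dump the kernel ring buffer and keep only mlx5 driver messages,
# to see exactly where the probe stops.
log = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout
for line in log.splitlines():
    if "mlx5" in line:
        print(line)
```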

So it seems like the connect-x firmware had no support for WiFi before!

Hmm, which Ubuntu version is that? I think installing using the iGPU and then downloading the latest NVIDIA drivers could do the trick?

I hope you meant UEFI :stuck_out_tongue:

According to the spec sheet, the difference between the MCX4111A-ACAT (mine) and the MCX4111A-ACUT is that the latter ships with UEFI firmware.

After updating the firmware I now have UEFI support.

Regarding the iGPU, I’m already booting from it (forced in the BIOS), as I want to keep the RTX 3080 free to pass through to a VM. That’s actually one of the reasons I upgraded from AM4 to AM5: the former has no iGPU, which prevented me from doing passthrough.

The Linux I was testing with was Ubuntu 22.04, just so I could properly run the Mellanox tools for the firmware upgrade. It’s not intended to be the final OS.

So, I’ve installed ESXi 8.0 Update 2 on it and guess what? Both the Mellanox card and the RTX 3080 were detected and the boot went smoothly! :smiley:

Now the last remaining piece is to get the onboard 10Gb port detected by ESXi; it has only found the 2.5Gb one.

The question now is whether, because I have a PCIe Gen 3 card in the second slot, the motherboard is making the GPU run in Gen 3 mode… I can’t find a way to check that in ESXi, so I’ll have to fall back to Linux to test…
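On the Linux side, the negotiated generation is readable straight from sysfs; a small sketch (pass the device’s PCI address, e.g. 0000:01:00.0 for the GPU, which lspci will show you):

```python
import sys
from pathlib import Path

# Print negotiated vs. maximum PCIe link speed/width for one device.
# Usage: python3 linkspeed.py 0000:01:00.0
dev = Path("/sys/bus/pci/devices") / sys.argv[1]
for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")
```

A current speed of 16.0 GT/s means Gen 4; 8.0 GT/s would mean the GPU fell back to Gen 3.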

It doesn’t. It’s a general property of bifurcation that the two links can train at different PCIe generations. Specifically, I have the same board with an RTX 3060 and a ConnectX-3 card, and here the GPU is on PCIe Gen 4 and the NIC on PCIe Gen 3 :wink:

Glad to hear that :smiley: I was about to bang my head :stuck_out_tongue:

My curiosity :stuck_out_tongue:

Indeed it is detected at Gen 4 x8.

All good, thank you! :slight_smile:


Although the motherboard supports up to 192GB of memory, the 7950X3D, according to the AMD specs, is limited to 128GB. Does your OS recognize all 192GB?

I have the same mainboard and 7950X3D.
I am running 192GB of Corsair RAM @ 5200 with no problems.
(It’s 5600 RAM, but it’s not stable at that speed; @ 5200, zero problems.)

Yeah it works just fine:

However, I was only able to get it stable at 4800MHz. Anything above that and it just doesn’t POST. I’m using 4x Kingston Fury Renegade 6000MHz DDR5 CL32 XMP sticks (KF560C32RSAK2-96).

I wonder if, even though the OS correctly reports the total memory on the motherboard, the CPU is only able to use 128GB of it? The AMD website clearly states a 128GB max for the 7950X3D. A failure would only occur if a program tried to access memory above 128GB, and few programs use that much RAM. Again, just wondering.
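One way to settle it would be to actually touch more than 128GB from a single process, something like this (a crude sketch; adjust TARGET_GB to leave headroom for the OS, and disable swap so you know you’re hitting real RAM):

```python
# Allocate and verify chunks until we've touched well past 128GB.
CHUNK_GB = 8
TARGET_GB = 160  # comfortably past the 128GB limit in question
GB = 1024 ** 3

chunks = []
for i in range(TARGET_GB // CHUNK_GB):
    buf = bytearray(CHUNK_GB * GB)      # zero-fill forces physical backing
    buf[0] = buf[-1] = (i % 255) + 1    # stamp a per-chunk pattern
    chunks.append(buf)
    print(f"touched ~{(i + 1) * CHUNK_GB} GB")

# Sanity-check that every stamp survived.
for i, buf in enumerate(chunks):
    assert buf[0] == buf[-1] == (i % 255) + 1
print("all chunks intact")
```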

I did run it almost full and it works without problems. I created a big Hyper-V VM with 164GB and it worked just fine. Both the motherboard BIOS and the OS detect the memory properly.

Nonetheless, I gave up on this platform, as it was a poor fit for a homelab running server OSes: driver support is basically non-existent. I’ve repurposed the CPU for a tiny “PC as console” for the living room, I’m selling the mobo and other components, and I’ve rebuilt the servers with other parts.