Ryzen 9000 Testing And Oddities

Figured I’d start a thread to document what I end up finding. Some of this will be colored by the fact that the RX 580 currently in the 9700X system is not my “original” XFX RX 580, since that one now lives with the original owner of the Asus variant I now have (an MP600 + heatsink created an interference fit on the ASRock B550 I threw their way with the Asus card, so the XFX one stayed 600mi away), and by the 9700X system currently using a surely suboptimal mixed-device storage setup and an install of Fedora 40 KDE vs my usual Gnome. I will be toying around with other configurations, including thieving the boot drive out of my 5950X system, moving the 6700 to the 9700X, and possibly trying coolers other than the Wraith Stealth currently on it. (Edit for misspeak/confusion on my part: it’s a Spire, not a Stealth; the Stealth is the ultrashort variant, the Spire is the tall one that intermittently got copper slugs. Still, not bad for something that shipped with a 1500X back in 2017.)

So anyway, onto “how it went”. To start, I knew I was getting into some amount of trouble with 9000 compatibility based on the Microcenter reviews of the combo; the B650 Gaming X AX V2 ships with a BIOS that doesn’t support 9000 out of the box, requiring a flashback. So I assembled the system, formatted a USB stick to FAT32, and dropped the extracted BIOS folder onto it. From a powered-off state, hitting the BIOS flashback button would blink everything on momentarily and then shut back down. Tried various RAM seating arrangements etc. to see if it would POST, no dice, so I resorted to Google. Found a Reddit thread discussing the issue, with mention that they had success renaming the BIOS file to GIGABYTE.bin and leaving it alone in the root of the drive. Tried it, got das blinken lite, and then got POST.
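For anyone who finds this thread the same way I did, the whole flashback dance boils down to something like this. Sketch only: /dev/sdX1 and the file names are placeholders for whatever your stick and Gigabyte’s actual download are called; the only part the board seems to care about is a file named GIGABYTE.bin sitting in the root of a FAT32 volume.

```bash
# Check which device the stick really is with lsblk before pointing mkfs at anything!
sudo mkfs.vfat -F 32 /dev/sdX1
sudo mount /dev/sdX1 /mnt
unzip mb_bios_b650-gaming-x-ax-v2.zip        # placeholder archive name
sudo cp B650GXAXV2.F30 /mnt/GIGABYTE.bin     # placeholder image name; rename whatever actually extracts
sudo umount /mnt
```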

Loaded EXPO and booted the Fedora install I had on the drives while on onboard graphics. It worked, it updated, and it ran cooler under multicore workloads than under lighter threading or even desktop bursts (thanks, 65W config). The 2CU iGPU seems to be absolutely nothing to write home about, but it does function mostly acceptably as a display adapter. Made some PBO adjustments, ran the COMSOL benchmark found elsewhere on here, got some results. Okay, well, we have a 580 unhoused, so let’s throw that in.

First result: something was very wrong. Something like a 30-50% performance regression versus what I’m used to in multiple games (35-40fps at settings where I usually see roughly 60). Strange! And not terribly dissimilar to what I was seeing with 2x XFX 580s installed and running games on the one in a “bad” (4x) slot. Steam also didn’t want to render anything until I hit it with steam --reset from the terminal; this was with HDMI connected to the GPU rather than the iGPU output on the motherboard. Swapping them around fixed the performance issue (oddly…) and brought it generally in line with expectations, though desktop animations seemed to suffer more running on the iGPU than on the 580. I should put some more time into this, because I can’t think of any compelling reason that connecting the dGPU directly to the display should hammer gaming performance like that.
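When I do dig back in, step one will be confirming which device each launch actually renders on. A quick sketch, assuming glxinfo is installed (glx-utils on Fedora) and relying on Mesa’s stock DRI_PRIME offload toggle:

```bash
# Default renderer vs. the PRIME offload target
glxinfo -B | grep "OpenGL renderer"
DRI_PRIME=1 glxinfo -B | grep "OpenGL renderer"
```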

So, I will, but I’m also toying with CO/CS/PBO settings a bit while thermally restricted by the cute little Spire (it makes sense for a potential SFF-type build, being not a lot taller than an AIO block). One thing at a time; I have a month to beat on this thing soundly enough to make me keep it.

Edit 1: Force disabling the iGPU entirely appears to have resolved the Steam launch issue, and performance saw an uptick, but it’s still down >20% versus the other 580 paired with my 5950X. Time to bench the 6700 + 5950X, then swap cards around (gonna be fun fitting that footlong into the case the 9700X lives in, lol).

1 Like

Cool to see other folks using AMD Ryzen 9000 series chips for gaming and Linux!

I’m happy with my 9950X so far. PBO did boost some all-core benchmarks, but I turned it off for daily use to save power/heat, as my current games’ FPS exceeds the monitor’s refresh rate anyway.

Flashback is great! My ASRock Taichi is similar: you format the USB drive FAT32 and rename the file to CREATIVE.ROM, and it worked like a charm.

Oh, I just saw your Edit 1 regarding the iGPU. I assume your discrete graphics card, the RX 580, would be faster than the integrated graphics? What is the output of xrandr -q, to confirm which card/port/rate is connected?

I’m on nvidia and know nvidia-smi and nvtop; nvtop actually shows my integrated Radeon graphics stats too. What is the AMD equivalent, e.g. rocm-smi or rocminfo? It might be useful to track GPU utilization and temperature.
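Actually, thinking about it, the amdgpu driver might expose most of that through sysfs anyway. A guess on my part (card1 could just as easily be card0 on your box, and I’m assuming Fedora package names):

```bash
# Instantaneous GPU load and temps straight from the kernel driver
cat /sys/class/drm/card1/device/gpu_busy_percent
sensors | grep -A6 amdgpu
# radeontop gives a live view similar to nvtop for Radeon cards
sudo dnf install -y radeontop && sudo radeontop
```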

Finally, what driver are you running? I used an AMD GPU years ago, but recall there being both a proprietary driver and an open source Mesa driver. There may even be three to choose from now, e.g. amdgpu-pro, AMDVLK, and Mesa RADV?
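And for checking which of those is actually in use, I’d guess something along these lines (vulkaninfo comes from vulkan-tools if it isn’t already there):

```bash
glxinfo -B | grep -E "OpenGL vendor|OpenGL renderer"   # Mesa radeonsi vs. amdgpu-pro
vulkaninfo --summary | grep -i driver                  # RADV vs. AMDVLK for Vulkan titles
```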

Anyway, curious to hear how it goes for you! Cheers and have fun tweaking your new build!

1 Like

standard protocol

a tech did that last night here in literally 3 minutes: skill issue

tl;dr: a 10-year-old GPU in a bleeding-edge Linux distro (Fedora) with a bleeding-edge CPU (9700X) does not behave well

1 Like

Correct.

Also correct; first Gigabyte board I’ve ever had the displeasure of dealing with, and first time upgrading a board that didn’t have support for the only CPU I had on hand. Could’ve skipped spending 2.5 minutes of my 3 minute allowance reseating RAM, oh no.

Operator stupidity rather than an actual hardware or software problem. “Hey, this card has existed for 7 years, and hasn’t been used in about 5, maybe I should open it” wasn’t a thought that crossed my mind until after I noticed how cool it was running.

Yeah. Cool. Bad paste. Running cool. 62C with unlocked power limits. Anyway, repasted it and confirmed that Cyberpunk’s bump to 2.13 is a nothingburger for the benchmark at <1% variance. It also runs 10+C hotter now because it’s actually utilizing the power budget.

PBO is interesting, because it doesn’t necessarily mean you have to run it with power limits that drive temperatures up. Actually, that’s something that bugs the hell out of me about how everyone testing PBO (and I do mean everyone) has been doing it: whack the system with motherboard limits and cry about high temperatures, as if that isn’t intended behavior when you allow a PPT limit of 400W. Not that most chips will ever get there.
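For what it’s worth, watching what PBO is actually doing from the Linux side doesn’t need anything exotic. A rough sketch, assuming Fedora package names and that the kernel-tools turbostat is new enough to know Zen 5’s RAPL registers (PkgWatt there tracks socket power, which is close to, but not exactly, PPT):

```bash
sudo dnf install -y kernel-tools lm_sensors
watch -n1 'sensors | grep -E "Tctl|Tdie"'                                  # k10temp CPU temps
sudo turbostat --quiet --Summary --show Busy%,Bzy_MHz,PkgWatt --interval 5 # load, clocks, package power
```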

The 580, even in its ailing state, was at least 8x faster than onboard (and now nearer 10x, at ~6fps → 55). Now I just need to confirm that the pinouts on the EVGA PSU match the XFX cables I have available before toying with the 6700 that’s plagued me with HDMI audio trouble since day zero; the people who set the system up as 1500X/B350 Prime/RX 580 left the secondary PCIe cable out because the 580 only wants a single 8-pin. Fun that this is even a potential issue, but apparently a few companies have been known to change the PSU-side pinout on like-for-like replacements and not tell anyone, so you only find out when you plug your existing cables back in.

3 Likes

Well, I think this is the end of the line for today… I ran the 6700 at the exact settings I had been running the 580 at before I got the 6700 back for the second time, which were also the settings I /was/ testing the 580 at in the 9700X system. Well, there’s one difference that I’ve now noticed (wider-than-default FOV on the 5950X system), which I’ll correct, but uh…

118.48 avg / 105.08 min / 140.45 max fps for the 5950X: Fedora 40 Gnome, Firefox in the background, PBO+CO, 3800 18-22-22-44 RAM settings, and it generally works quite nicely. It has two M.2 drives, both 1TB: a Gen 3 SanDisk that’s had a failure complaint for a couple of years now and is thus used as an ext4 game-storage drive, and a WD SN850X where the Fedora install lives. Cyberpunk is on the Gen 3; I tested it on both and it made no difference.

Meanwhile the 9700X… It has to be a storage bottleneck, because nothing else makes a lick of sense. It’s running a 2TB HDD + 120GB SSD that got merged into a single Btrfs volume when I set up Fedora on the B350/3900X/32GB 3200 setup it landed in before I bought the AM5 stuff. Anyway, the averages stayed comfortably below the minimums from the 5950X setup.
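If it really is a storage choke, it ought to show up as the HDD sitting pinned while the benchmark streams assets, so the next run gets iostat alongside it. Sketch, assuming sysstat from the Fedora repos, with your actual device names in place of mine:

```bash
sudo dnf install -y sysstat
# Run alongside the benchmark; a device sitting near 100% in %util while
# frame times tank points the finger at storage.
iostat -xm /dev/sda /dev/nvme0n1 2
```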

That said, whatever is negatively affecting the 9700X seems to be affecting it consistently, because it did respond by the expected amount when I overclocked the 6700 to match the spec I ran it at in the 5950X system: at dead default it got ~92fps, a power limit bump found 96, and then tweaking memory speed, clock limit, and undervolting found the last 2 or so. Settings are specifically 4K Low + FSR Ultra Performance, and FOV was at 90 for the 5950X run. Reran the 9700X with FOV 90 and it lost 3 frames, mostly as expected.
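For reference, the power limit bump on the 6700 under Linux is just amdgpu’s hwmon interface; the card/hwmon indices below are examples and will differ per system, and the value is in microwatts:

```bash
HWMON=/sys/class/drm/card1/device/hwmon/hwmon3   # adjust indices for your system
cat "$HWMON"/power1_cap_max                      # driver-enforced ceiling, in microwatts
echo 186000000 | sudo tee "$HWMON"/power1_cap    # example: ~186W board power limit
```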

1 Like

Have you tried benchmarking the CPU itself and comparing the results to Phoronix benchmarks, other people’s Geekbench results, etc.?

You should start with the most basic, fewest-variables scenarios and then move on to gaming-on-Linux tests.
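e.g. something like the below as a baseline, before any gaming enters the picture (phoronix-test-suite is in the Fedora repos; the test profile is just an example, pick whatever overlaps with published results):

```bash
sudo dnf install -y phoronix-test-suite
phoronix-test-suite benchmark pts/compress-7zip
```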

1 Like

Well, yes.

And it’s generally quite favorable. It actually thoroughly trounces my 5950X in Geekbench, for example, at 3615/18362 to 2349/14407 (both run at nearly the same time). The apparent jump straight to gaming scenarios is largely down to that being the intended use case, because everything else points to “it should be better, considerably better”.

So, I swapped the drives over, and got some results.

After moving the drives in, I couldn’t actually get into the BIOS for unknown reasons; it would boot fine and run fine, but ReBAR had shut itself off. If I had to hazard a guess, the hang on entering the BIOS was a combination of too-aggressive CO and “OC headroom” settings, but I’m not entirely certain of that.

Regardless, I ran the 2077 benchmark again just to see. It improved to ~107fps average, but since I couldn’t get into the BIOS to fix the ReBAR issue, I reset the CMOS and went back to “just” EXPO enabled. Steam still hated having multiple GPUs, forcing the reset command any time I wished to launch it, but feeding it that got my first real improvement so far, to ~121fps average. Cutting the iGPU made a small difference, but nothing to write home about (half a frame or thereabouts) beyond fixing the Steam launch problem.
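Side note: it turns out you can sanity-check ReBAR from inside Linux without ever reaching the BIOS, which would have saved some head-scratching. A sketch; the 03:00.0 address is just an example, use whatever lspci reports for the dGPU:

```bash
lspci | grep -i vga                              # find the dGPU's address
sudo lspci -vv -s 03:00.0 | grep -i "Region 0"   # BAR 0 ~ VRAM size means ReBAR is on; 256M means off
sudo dmesg | grep -i "BAR="                      # amdgpu also logs the detected BAR size at boot
```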

So I reran Geekbench just to see what my evidently unstable tweaks had done for it: 3.2% ST and 7.2% MT (3502/17127, which is still considerably higher than many of the review examples), so not a whole lot of change versus “one click”. The 200MHz “auto OC” bump handily explains the jump in ST, but I’m fairly unimpressed by the MT changes given the 88 → 142W PPT change; methinks the 65W TDP is the right place to run this chip.

Edit: Spent a large portion of today toying with CO/CS. Landed at what seems to be a reasonably comfortable suite of settings that Just Work™ in both senses, i.e. they work, but only by a couple of counts of margin. Stayed down at 88W PPT and got back to 3626/18120 in Geekbench despite the reduced power limit. Not sure if I have a golden 9700X or what, though, because the vast majority of posted results are >5% off what I got at “stock”, even those recorded on Linux. Got considerably less of a bump in Geekbench than in CBR20, and it changed approximately nothing in gaming, to no real surprise in any regard. 10/10 would waste my time again, and probably will. The “further gains” department is vanishingly small given there’s /maybe/ another 50MHz of peak boost clock to be had ST, but it might be worth trying for. Maybe. Doubt it. Now it’s just time to check the rest of this board’s functionality, stuff like audio, which is at least reasonably important to me.
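For the record, the “do they actually Just Work” part of that is nothing fancier than stress-ng; the methods and runtimes below are just my habit, not anything authoritative, and marginal CO settings can still slip past it:

```bash
sudo dnf install -y stress-ng
stress-ng --cpu 0 --cpu-method matrixprod --metrics --timeout 30m   # all-core, parked at the PPT wall
stress-ng --cpu 2 --cpu-method fft --metrics --timeout 30m          # low thread count, peak boost clocks
```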

Geekbench 6 “final”

Well, erm, no, but also sure, let’s go with that.

I put the 3900X/B350 setup back in and ran it with the same set of drives I thought were causing a storage choke on the 9700X. It picked up ~10fps on the same benchmark I believed to have been storage-bound, though it’s still down 10 from the 5950X run and down 14 from the best I managed from the 9700X on M.2 storage. So we’re at a best-case scenario of +4fps for the 9700X against the 5950X and +14 against the 3900X. That’s not promising.

1 Like