3950X effective clock speed @5GHz using PBO - anomaly?

Getting relevant hardware specs out of the way first. Please ask for more details if you’d like them.

CPU: 3950X
Mobo: Aorus Master X570
PSU: Seasonic 850W Platinum
Cooler: Noctua D15 (2 fans)

Event: I (kind of) followed a video from Buildzoid on setting PBO values on Gigabyte X570 boards. I was experimenting with an all-core 4.2GHz overclock and wanted to compare it against PBO, since PBO also operates at around 4.2GHz. After I changed the BIOS settings, PBO worked, but the effective clock in HWiNFO showed it boosting to near, and even over, 5GHz. AFAIK the voltages and temps were reporting as safe and usable, but keep in mind I’m completely new to OCing and still have a lot to learn about what’s safe.

I thought this was just a readout error, so I ran Cinebench R20 and my score went from ~9400 to ~11300. I have two videos (with bad audio - sorry) showing two different Cinebench runs; I will upload one to YouTube and can upload both if requested. I will also upload a video of my BIOS settings and can list them here if requested.

I also wanted to test my RAM while it was like this, so I ran a full memory test using the DRAM Calculator for Ryzen. The test finished 100% with no errors. It was during this test that the effective clock was reported as running over 5GHz. I took screenshots of HWiNFO during the test for analysis.

After making the videos, I restarted Windows and PBO worked as it had the last time I tested it: I was getting a ~4.2GHz boost and ~9400 in Cinebench R20. I’m assuming this event was an anomaly, but I’m hoping for an explanation of what could have happened. I’m also curious whether this was actually safe for my hardware, or whether I could have potentially damaged something. If it is safe, I’d also like to know whether it’s possible to recreate; I’ve had no luck so far recreating it by repeating the steps I took. I saved the BIOS profile before and after the changes, so I have my start and end points preserved.

Cinebench R20 run
BIOS values

Edit:
The LLC setting was left on from the all-core OC. It causes some of my resting voltages to be considerably high, but I’m unsure whether that’s unsafe when it’s not under load. Just to be safe, I turn it off when using PBO. I have alarms set in HWiNFO for high voltages, but these were never triggered during the anomaly, so I didn’t even notice I had forgotten to turn it off.


I’d assume that the reported RDTSC/RDTSCP timer frequency was wrong during the “5GHz” run.
The “Effective Clock” counters can be quite accurate, but they rely on that RDTSC frequency being precisely known.

The RDTSC timer normally runs at 38*BCLK (3800 MHz) on the 3950X, but it could somehow be misconfigured by the BIOS (or by Windows, or by some monitoring software).

Did you put the system into the Sleep/Suspend state before the 5GHz run?
It could be that the RDTSC timer frequency was not properly restored on resume (BIOS bug).

Also, it looks to me like the HWiNFO values are updating just a bit slower than every 2 seconds (during that 5GHz run).
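If you want to sanity-check the TSC yourself, something like the rough sketch below could help (a hypothetical helper of my own, not anything from HWiNFO). It counts TSC ticks over an interval you time with an external stopwatch - external on purpose, since QueryPerformanceCounter() and Sleep() may themselves be derived from the same (possibly misprogrammed) TSC and would inherit the error.

```c
/* tsc_check.c - rough TSC sanity check (hypothetical helper; build with: cl tsc_check.c) */
#include <stdio.h>
#include <stdint.h>
#include <intrin.h>   /* __rdtsc() intrinsic (MSVC) */

int main(void)
{
    puts("Start an external stopwatch, then press Enter...");
    getchar();
    uint64_t start = __rdtsc();            /* TSC reading at start */

    puts("Wait about 60 s on the stopwatch, then press Enter again...");
    getchar();
    uint64_t ticks = __rdtsc() - start;    /* TSC ticks elapsed */

    printf("TSC ticks elapsed: %llu\n", (unsigned long long)ticks);
    printf("At the nominal 38 x 100 MHz = 3800 MHz that is %.1f s;\n", ticks / 3.8e9);
    printf("compare that against what the stopwatch actually shows.\n");
    return 0;
}
```

If the computed seconds and the stopwatch disagree by a large margin, the TSC (or the frequency the software assumes for it) is off.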

It wasn’t in sleep before the run. I actually can’t put the system into sleep at the moment, so that’s ruled out for sure.

Thanks for the info. I’ll do some research on RDTSC. It raises a new question for me, though: is it possible for the RDTSC to affect CPU performance and/or the Cinebench score? That score definitely happened, but I don’t know how or why at the moment.

TSC frequency doesn’t affect actual performance, but it may throw off time interval measurements (if the software/OS assumes a TSC frequency that differs from the one actually used).

The Windows system clock is not necessarily running slower - it is periodically refreshed from the on-board RTC (which runs at a very low frequency, but is much more accurate as a “wall clock”).

What seems to point to the TSC being misprogrammed is the huge discrepancy between what the multiplier (42x) and BCLK (100 MHz) imply - 4.2 GHz - and the 5GHz “effective clock” across all cores under 100% load.
Either the multiplier, or BCLK, or the 5GHz figure must be wrong - and given the circumstances here, I would bet that the 5GHz figure is the wrong one.

Windows 10 will not even use the classic RTC for timer interrupts by default - it can use a so-called “tickless” mode and program the Local APIC TSC-deadline timer to wake the core when the TSC reaches a desired value.
So a wrong TSC frequency may affect not only QueryPerformanceCounter() / RDTSC results, but all relative time measurements as a whole (which could explain the seemingly increased performance, if you are not timing runs with an external stopwatch).
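To put rough numbers on that (my own back-of-the-envelope, using only the figures from this thread): if elapsed time were undercounted by the same factor that makes 4.2 GHz read as ~5 GHz, a fixed-work score like Cinebench’s would scale up by that same factor.

```c
/* Toy arithmetic only - how a mis-assumed TSC frequency would inflate a
 * "fixed work / measured time" score. The 4.2 GHz, 5 GHz and ~9400 numbers
 * come from this thread; the scaling itself is an assumption. */
#include <stdio.h>

int main(void)
{
    double real_clock   = 4.2e9;                     /* what the cores were actually doing */
    double shown_clock  = 5.0e9;                     /* what "effective clock" displayed */
    double factor       = shown_clock / real_clock;  /* ~1.19 - how much time was undercounted */

    double normal_score = 9400.0;                    /* R20 score with correct timing */
    printf("expected inflated score: ~%.0f\n", normal_score * factor);  /* ~11190 */
    return 0;
}
```

That lands close to the ~11300 that was observed, which fits a “timer, not silicon” explanation rather than a real performance gain.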

Thanks. Occam’s razor says this was most likely just a reporting error, and I think your explanation passes the duck test. For now, I’ll call your answer my solution.
