Jack's 10920X build log

UPDATE: Now a build log, originally a questions thread


Officially have a 10920X ready for pickup on the 26th, and the X299 Gaming 7 Pro motherboard is en route from Despacito land. I want some burning questions answered for the build, since I haven’t purchased the memory yet (still waiting for that predicted price drop):

  • Micron E-die 3200MHz CL16 or 3600MHz CL16 in 4x16GB? Would the higher memory speed increase IMC stress, or cause incompatibility to the point that it won’t boot? I specifically kept Micron E-die in mind since it’s easy on the IMC, and each stick is dual rank.

  • What’s the safest positive voltage offset to run the mesh at 31x or 32x for 24/7 usage? Would a higher mesh clock make things tricky with high memory speed, making me NOT want to go for a 3600 kit?

  • What’s the best way to stability test the mesh on Linux? In general, mesh OC stability testing isn’t well documented, let alone for Linux. I know stressapptest exists as something better than HCI Memtest for testing memory stability (see the command sketch after this list), but I’m not sure it taxes the mesh enough to find instability…

  • Would 3200 CL15 be possible on a Micron E-die kit rated for 3600 CL16?

  • Could I initialize a Titan Ridge card on older versions of Windows? Gigabyte surprisingly provided Windows 7 support for X299 in their drivers section for my mobo.

  • Were voltage offsets for core fixed with the latest BIOS updates for the 10 series HEDT CPUs? I do recall early adopters unable to get offsets working properly.

  • (This was prompted by Buildzoid.) I’m on an EVGA CLC 280. Would removing the AVX offset on this 12-core be enough to push past the cooling capacity of this AIO? FFmpeg would use higher-end AVX instructions.

  • Is it possible to just increase the 2-core ratio to boost low-core-count workloads and reach a magical 5GHz on 1 core using stock voltage offsets?

  • Does Turbo Boost 3.0 even work on Linux (quick check sketched below this list)? Does it work when pinning cores for VFIO?

  • Finally, how does one go about disabling TSX because of Zombieload? (Sketch right below.)
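
For the TSX one, the closest thing I’ve found so far is the kernel-side switch: kernels new enough to know about TSX Async Abort (5.4 and later, I believe) accept a tsx=off boot parameter, assuming the microcode exposes the TSX_CTRL MSR (updated Cascade Lake microcode should). A rough sketch on a GRUB-based distro:

    # 1. Append tsx=off to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    sudo update-grub        # or grub2-mkconfig -o /boot/grub2/grub.cfg on some distros
    sudo reboot
    # 2. After the reboot, confirm the kernel sees TSX as off:
    cat /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
    grep -o 'tsx=off' /proc/cmdline

That still leaves Windows and macOS as open questions.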
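
And for the memory-testing bullet, the stressapptest invocation I have in mind looks something like this (runtime, size, and thread counts are placeholders I’d tune for a 12-core / 64GB setup, and I genuinely don’t know how hard it hits the mesh versus the DRAM itself):

    # ~1 hour run, testing ~56GB of a 64GB system, with extra CPU-side load
    stressapptest -s 3600 -M 57344 -m 12 -i 12 -C 12 -W
    # -s seconds, -M megabytes under test, -m memory copy threads,
    # -i invert threads, -C CPU-stress threads, -W more CPU-stressful copy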
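
As for Turbo Boost 3.0, the quick sanity check I’m planning is just to see whether the kernel exposes the favored cores at all (paths from memory, so verify on your own distro):

    # Favored cores should report a higher max frequency than the rest
    grep . /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq
    # ITMT scheduling knob; appears once intel_pstate registers core priorities
    cat /proc/sys/kernel/sched_itmt_enabled

How that interacts with isolating and pinning cores for VFIO, I still don’t know.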

If anyone can help me source an SSDT or DSDT for the Gaming 7 Pro, that would also be appreciated, since I’m migrating my Sierra 10.12.6 install to Cascade Lake-X. Or if anyone can help me migrate from Clover to OpenCore, that would be appreciated too.

The other concern is that official Thunderbolt support apparently ONLY WORKS IN ONE SLOT on this particular motherboard, and that slot sits directly under the 1660 Ti in my build config. So should I not bother with Titan Ridge in that case? Could I find a low-profile riser? That would potentially involve modifying the Air 540 case…

Is this also the case with other Gigabyte boards? Is it because the Titan Ridge controller needs to hang off the X299 chipset? If so, it’s even more confusing that the X299 Designare EX drops lanes in the primary slot when fully populated. And if it’s a BIOS soft lock, would modding the lock out expose the Thunderbolt settings regardless of PCIe slot?

Would something like this have enough flex in the cable to fit under my 1660 Ti?

https://www.newegg.com/p/1YU-00XZ-000E9

Acquired my 10920X. Be careful when buying to check whether any of the tamper seals have had their adhesive come undone. Long stretches in hot warehouses can do that.

Damn dude, that’s gonna be a slick build.

Depends on the GPU…

Yeah, the rest of it is just the same parts, but the mobo, RAM and CPU are getting an upgrade.


So I’m leaning towards 3200MHz, because not even 3000 CL15 at 32GB per DIMM is stable; IMC stress is a real thing, so I’m going with less stressful memory. 3600MHz is more for dual channel systems than for quad channel ones.

Still unsure if there’s an SSDT or DSDT for this board, and unsure if Thunderbolt is going to be something I use daily. I could be the first to test a new Ultrastudio 4K in Linux, but I’ve got camera priorities for 2021.

For those unaware, Blackmagic now has a Thunderbolt 4K 60p 12G-SDI and HDMI 2.0 capture card in the Ultrastudio 4K Mini. It claims Linux support:

https://www.blackmagicdesign.com/products/ultrastudio/techspecs/W-DLUS-11

Even their new Ultrastudio 3G capture cards claim Linux support:

https://www.blackmagicdesign.com/products/ultrastudio/techspecs/W-DLUS-12

From what I’ve seen, Blackmagic does quite well with Linux support. While they don’t upstream their drivers, they do the second-best thing and provide convenient, Linux-friendly packaging for them.
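
For reference, “packaging” here means their Desktop Video driver bundle for Linux, which comes from their support page as a tarball of .deb/.rpm files rather than from a repo. Roughly like this, from memory (exact filenames change every release, so treat them as placeholders):

    # Download “Desktop Video” for Linux from Blackmagic’s support page, then:
    tar xf Blackmagic_Desktop_Video_Linux_*.tar.gz
    sudo dpkg -i Blackmagic_Desktop_Video_Linux_*/deb/x86_64/desktopvideo_*.deb
    sudo apt-get -f install    # pull in any missing dependencies; rpm equivalents exist too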

So the CPU and motherboard are in, but unfortunately my RAM purchase is delayed and I only have Samsung B-die single-rank 3200 CL14 to work with. That won’t stress the IMC the same way 4x16GB of Micron E-die will, so I can’t do mesh OCs right now.

Windows 10 AME seems like a good candidate for a “no pfSense required” VM setup, but I can fall back to Windows 8.1 if need be.

Still no word on SSDTs and DSDTs for my board. Rampagedev is dead, so I have no clue how to get it all going if the lack of an SSDT and DSDT means my macOS installs are impossible to migrate. This is extremely important for me because of the Adobe Suite.

Any word on SSDTs and DSDTs for my board? Still got nothing, and I’m scared to death that I’ll screw up my Hackintosh drive migration from X79 to X299.

Still nothing? Would really appreciate help with DSDTs and SSDTs so I could feel more sane about installing High Sierra on my 970 Pro 1TB I just bought.

So, it seems things have changed dramatically. Tonymacx86 and their solutions are now shunned in the community, since they’re closed source and could potentially include anything, including malicious stuff. Clover is also shunned since it’s now under Russian ownership.

Now the only valid guide is the OpenCore guide. While the guide does cover SSDTs, there’s still no clear direction on what exactly to do about DSDTs, and NOBODY I’ve spoken with has any clue how to create DSDT patches so that Thunderbolt and the rest of the ACPI plumbing works. Absolutely nobody. Not a single person on earth, to my knowledge, knows DSDT patching.
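
The only concrete starting point I’ve got is dumping the board’s own tables from a live Linux session, so there’s at least something to read and patch against. This needs the acpica-tools package; the commands are the standard ones, nothing X299-specific:

    # Pull the firmware’s DSDT and decompile it to something human-readable
    sudo cat /sys/firmware/acpi/tables/DSDT > dsdt.aml
    iasl -d dsdt.aml        # writes dsdt.dsl
    # or dump every ACPI table at once into the current directory
    sudo acpidump -b

But raw source isn’t patches, and the patches are the part nobody can explain to me.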

This will continue to be a stalemate, and with social gathering restrictions extended, nobody can walk me through the process in person and step in if they know something I don’t. There’s no way I can transition to X299 right now.

This repo is based on an ASUS board and Clover, but I need it adapted for my Gigabyte board and OpenCore… Nobody is willing to put in the effort to do that, and I am SOL.

Hardware-wise, I now have everything, but no one is here to help.

ehh, could go either way

that, in combination, is particularly damning

I thought they killed that in all OSes (Linux, Windows, macOS) back in like 2014/2015, but maybe I’m wrong. The only way I know to patch it in or out is in Windows.

TSX was useful for PS3 emulation, so that might be a place to start looking.

So, good news: I was able to hit 3600 CL17 at 17-20-18-38-1T at 1.45V with my quad-stick dual-rank E-die (rated for 3200 CL16 at 16-18-18-36).

Max mesh I was able to hit without instability was 30x at 1.2v.

No updates on the Hackintosh front, but I’m abandoning Thunderbolt on X299 and saving it for Z390. Even on Z390, official support didn’t work at all, but the jumper mod did work as long as the Z390 chipset didn’t interact with the Thunderbolt controller in UEFI.

Did a manual voltage OC rather than an offset, because adaptive offset overvolts like crazy. A median 4.7GHz was achieved at 1.227v (non-AVX, but using stock AVX offsets), with 4.8 on the fastest core and 4.6 on the 4 slowest and hottest cores.

Oof. Did not survive a pure cold boot (fully powered off, no power physically going into the PSU) after a few days of being on. Training at 17-20-18-38 was rough coming from a cold boot, so I had to go to 17-21-18-38-1T at 1.455v. Micron E-die doesn’t like low tRCD at high frequencies.

Memory also instantly failed to train once when MCE was turned on (it trained successfully past that point on the next retry, but it’s still concerning), and the heat from MCE was far too high for my uses anyway, so I turned it back off.

The memory did not like CL17. It would fail to train at completely random times, at startup or after a quick reboot. The weird thing is it’s fine in the OS, but it would sometimes fail to train on boot, and then rarely carry that failed training into stressapptest errors.

I only ran 1.455v to the sticks, so I’m hoping it’s not E-die degradation (since I’ve heard that’s actually a thing at higher voltages).

4.7GHz was also too much for safe voltages and temps on the EVGA CLC 280. Now it’s 4.7 on the fastest core, 4.6 on good cores, 4.5 on good cores near high thermal density, and 4.4 on bad cores. Vcore in the BIOS is 1.218v now.

Extreme VRIN LLC was also behaving weirdly with Vcore scaling, so I dialed it back to Turbo and increased VRIN (the voltage feeding the FIVR) to 2V. That fixed the Vcore scaling oddities.

My heart stopped for a moment when AVX mprime (Prime95 for Linux) pulled 720W during the Small FFTs portion of the Blend workload. It overwhelmed my EVGA CLC 280 extremely fast.
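
If anyone wants to watch that happen instead of finding out from the pump noise, the combo I should have been running is mprime in one terminal and turbostat in another to keep an eye on package power and temps (column names as I remember them from recent turbostat builds, so adjust for yours):

    # Torture test (mprime -t starts the default torture test)
    ./mprime -t
    # In another terminal: package watts, package temp, load, and clocks every 5 seconds
    sudo turbostat --quiet --show PkgWatt,PkgTmp,Busy%,Bzy_MHz --interval 5

Package power alone won’t show the full 720W at the wall, but it’s enough to see the cooler falling behind.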

AVX512 would have started a housefire at 1000W.

They weren’t kidding when they said Skylake-X sucks down craploads of power in Prime95.

The writing was on the wall when Intel had a 1000W water chiller hidden under their demo rig.


I’m just surprised the silicon was stable with that much current.

That’s impressive.

Yeah, I increased it by about 0.01v to 1.233v, and AVX, however brief the run was, was stable with total system power at 720W. (I also have a 1080 Ti in the system idling at “prefer maximum performance”.) That’s where I’m leaving it. And 1.232v without AVX was fine for a brief Small FFTs run and a full Large FFTs run.

My PSU is the AX1000 Titanium, but if I were going for 4.7 all-core AVX stable, I’d need a 1600 T2 from EVGA, and by then I’d be entering competitive OC territory. I’ve already lapped the CPU to improve thermals and the delta between cores.

That was LGA 3647 with 28 cores, though; I’m on LGA 2066 with 12 cores, and it still produced 720W total system power during AVX Prime95.

Sad news.

The 10920x blew up.

I left a single core AVX Prime95 workload running overnight and it finished stable (well within temperature thresholds at 61C), but when it went back to all-core, there was evidence that too much amperage had gone through during the single core loads, and it blew up the Mesh and VCCSA.

It’s no longer stable at stock now.

I had no clue single core AVX Prime was this damaging. I absolutely hate myself for blowing up a $1000 processor. I used a lower AVX offset because MPV was downclocking when playing back 8K video, and then I started stress testing Prime95 without raising the offset back up… That was my downfall.

I’m typing this on my 4960X system, because the 10920X had a lapped IHS so I have no warranty, and it being unstable at stock was a huge blow.
