[Solution Found] I NEED HELP! 3960X Seems to never be really stable!

And another BSOD. Interestingly, both happened about an hour after the first boot of the day.

What I tried this time:

  • Ran the game on the primary GPU instead of the second GPU (RX580)

What I’m trying next:

  • Reduce RAM speed to 3200 and FCLK to 1600
  • Leave timings as is
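For context on why those two numbers move together: DDR4's transfer rate is twice the actual memory clock (MCLK), and on Zen 2 the Infinity Fabric clock (FCLK) is generally most stable when run 1:1 with MCLK. A minimal Python sketch of that relationship (the helper name is made up for illustration):

```python
# Sketch of the clock relationship behind "3200 / FCLK 1600":
# DDR4 is double data rate, so MCLK is half the transfer rate,
# and a 1:1 FCLK:MCLK ratio is the usual stability target on Zen 2.

def fclk_for_1to1(ddr_rate_mts: int) -> int:
    """Return the FCLK (MHz) for a 1:1 ratio at a given DDR4 rate (MT/s)."""
    mclk = ddr_rate_mts // 2  # MCLK in MHz
    return mclk               # 1:1 means FCLK == MCLK

for rate in (3600, 3200):
    print(f"DDR4-{rate}: MCLK {rate // 2} MHz -> FCLK {fclk_for_1to1(rate)} MHz")
```

So dropping from DDR4-3600 to 3200 with FCLK 1600 keeps the fabric in sync rather than running it out of ratio.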

What was running:

  • Browser (Basilisk)
  • Chromium (Doing nothing)
  • Another Chromium instance
  • Notepad++
  • VMWare Workstation 14 (2 VMs about 5-6GB RAM load and almost no CPU load)
  • Wallpaper Engine
  • A game running on my primary GPU (3080 Ti)

Another BSOD, just about 45 minutes after the last one.

What I changed:

  • Reduce RAM speed to 3200
  • Reduce FCLK to 1600

I have another second GPU on the way, this time an Nvidia card.

Should I get Micron or Samsung 3200 ECC RAM? @gnif @wendell
This is all highly frustrating and irritating.

post some pics of the inside of your case, clear shots of your mobo, etc.

With or without the PCIe cards? I’m asking because with all the cards in there, there is almost nothing of the mobo to see.

well we can start with everything installed and go from there

Ok I hope this helps.

I know that running the 8-pin from one card to the other seems not well thought out, but it’s the only thing I can do here; there aren’t enough accessible plugs on the PSU.
This was not the case when I had the 1080Ti instead of the 3080Ti: the 1080Ti and the RX580 each had their own dedicated plug, and it still got BSODs.

PCIe cards from top to bottom:

  • 3080Ti
  • 4port NIC Broadcom BCM5719
  • RX580 4GB
  • Creative Sound Blaster ZxR

Have you already tried with just one GPU, the 3080Ti, by taking the RX580 out of the system?

No, not yet. I’m waiting on a GPU, which is on its way to me, to replace the RX580, as I need the monitors that are hooked up to it.

My thinking is that would check whether the RX580 is screwing me over, but not whether two GPUs together are, right?

Molex 4-pin at the bottom and PCIe 6-pin at the front edge – please connect those to power.

The Molex is connected; I can’t connect the 6-pin as I have no more plugs on the PSU side that I can access. The last one is blocked by the board’s SATA ports.

Might be worth running an extension from a forked cable. The symptoms are mildly consistent with voltage droop.

Could you try one GPU with that plugged in, to see if that stabilizes things?

That is something I wouldn’t have thought of in a million years, as I’ve never experienced that before. I will try that after I get my new RAM and new second GPU, if that’s ok? I will try to get an extension in the meantime, so I can plug in that 6-pin.

If it’s still no better after that, then I know I have to check the board, which I’m dreading. I hear ASUS support isn’t very good with those things (RMA and so on…).

Oh btw. I have some Samsung 16GB 3200 ECC RAM modules on the way, as that was more or less the only choice… If I have to spend money again, then it should at least be an upgrade. Are they ok?

Ok, I found another daisy-chain cable (the only thing I have atm) and connected it to the 6-pin on the board and one end of the 3080Ti adapter; sadly that’s the only way I can do it atm.

GPU and RAM may come next week (Crossing fingers).

Btw. does that 6-pin supply supplementary power only to the CPU, or also to other components on the board?

Today the new components arrived. It seems the memory has Samsung B-die (5WD BCWE); I hope that is not going to be an issue again, as it was with my first B-die kit and this board.
New GPU and RAM are in. I so hope that’s it and it’s going to be stable from now on. Time will tell, I guess.

My thinking here is: if it’s still unstable, then it would be either the CPU, motherboard, or PSU?

Guess my new (used) second GPU is a dud:
Not even 24 hours in, I’m getting a BSOD with the message “VIDEO_SCHEDULER_INTERNAL_ERROR”.

And this beauty of weirdness:

That was on all monitors that are connected to the second GPU.
Btw. the GPU is a Zotac 1070 Mini.

Does anyone even read this anymore?

I’m reading, but can’t help fix it, as I don’t have the setup and thus don’t have the issues you mentioned.

:green_heart:

Haha thank you.
Sorry, it’s just really frustrating atm. I’m battling mental and now also physical illness, and the one thing that should be stable in that situation is my PC, so it can distract me, and it’s not…
I just feel very alone in my situation atm, as I have nobody around me who can help…

When troubleshooting I would look at the weird janky things I am doing first and set as much as possible to standard settings/setup.

Try unscrewing the PSU mounting screws and see if you can move the PSU around a bit to get to the last 6-pin plug, and remove the daisy chain.

That new GPU clearly looks like it has a problem; it may be solvable, but better to put the 580 back in for now.

The next thing I would check is removing the 580 – do you really need the other screens from the second GPU?

Then I would look at the PSU. I had problems with random BSODs for over a year, until it got so bad my PC could barely boot for a period, then it would be ok for a week, etc. My BSODs were different, but pointed to RAM errors, which memtest confirmed. After replacing the RAM everything seemed hunky-dory until more BSODs. Finally, replacing my PSU fixed everything. A dodgy PSU is hard to diagnose and can damage other components in the meantime. Serves me right for buying a second-hand PSU from Amazon (which also had a dent on the corner).

It could be that since moving the PSU to the new build, it can’t provide enough stable juice.

Phew… to remove the PSU I would have to take everything out of the case, as it’s a small case. But I agree with you here; I will have to check whether there are even enough cables that came with the PSU.

I’ll try the 1070 for a bit now and not run a game on it in the background for the time being. Then I’ll switch again if it happens again.

Yes, I do. I got so used to the workflow with 6 monitors plus one for system monitoring, so 7 monitors, that it’s hard to go back even for a bit. But it may be time to remove the second GPU just to test, though I actually don’t know what I’m testing for here. My mind is currently in a bit of a shambles.

My PSU is new-ish in that I bought it about a year ago, maybe a little more? I can check the voltages with my multimeter, and when I do on occasion, all voltages seem fine for what my PSU is powering. It can deliver 1200W (Silverstone, so not a no-name brand), and I’m drawing about 400-500W at idle/low workload and about 800-900W under heavy load (measured through a watt meter).

But can the PSU be fine with one system but not with the other, when the only things that changed are the CPU, motherboard, and RAM, and the output is basically just 100-150W more?

Thank you for sharing your story :slight_smile:

I am not a PSU expert, but I do know that if you draw too much power from a PSU during a peak load, it can cause a brownout where the voltage dips, which can then cause problems. Your heavy load sits around the PSU’s sweet spot, so the rating alone doesn’t at first indicate an issue.

There may be short power spikes that aren’t showing up on the power meter, although from your description the crashes are happening even at idle, so it’s probably not a power rating thing?
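To make the spike concern concrete: a wall meter averages over seconds, while high-end GPUs can spike for milliseconds well above their board power. A rough Python sketch of the headroom math, where the board power and spike multiplier are illustrative assumptions rather than measurements from this system:

```python
# Rough headroom check: a wall meter shows average draw, but GPUs can
# produce millisecond-scale transient spikes well above board power.
# The GPU board power and spike factor below are assumed, not measured.

PSU_RATING_W = 1200
MEASURED_HEAVY_LOAD_W = 900        # watt-meter reading under heavy load
GPU_BOARD_POWER_W = 350            # 3080 Ti-class board power (assumed)
TRANSIENT_FACTOR = 2.0             # assumed worst-case spike multiplier

# Extra draw during a spike, on top of what the meter already averages in:
spike_extra = GPU_BOARD_POWER_W * (TRANSIENT_FACTOR - 1.0)
peak_estimate = MEASURED_HEAVY_LOAD_W + spike_extra

print(f"Estimated peak: {peak_estimate:.0f} W of {PSU_RATING_W} W rated")
print("headroom OK" if peak_estimate < PSU_RATING_W else "possible brownout risk")
```

With these made-up numbers the momentary peak lands above the rating, which is the kind of event a meter never shows; it only illustrates why average draw alone doesn't clear the PSU.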

With my system, my faulty PSU was rated massively higher than I needed: 650W 80+ Gold when all I had was a 2200G and a single SATA drive (this was back during the previous wave of GPU shortages in 2018; I upgraded later), which probably drew no more than 200W. So it was more about the PSU giving bad voltages than not delivering enough power.

The only way I figured out it was the PSU (I also had no equipment to test voltages) was to swap it out completely with a 500W PSU from one of my dad’s servers. Eventually the server started having problems as well, and that finally confirmed it, so I bought a new PSU to replace it. I’m actually holding onto the faulty one to maybe put into a custom 3D printer build; I’m hoping the 32-bit printer boards are a bit more resilient to variable voltages, since they are often designed to run on a range of input voltages.

I digress; it may be worth trying a different PSU to rule it out. That doesn’t necessarily rule out two different dodgy PSUs (not unheard of), but it’s pretty unlikely. I just looked at the price of 1200W PSUs and they are pretty high; the price gap between 1000W and 1200W is like 2x.

From your OP, you said you had tried a different PSU already? Maybe two smaller PSUs combined, or a smaller PSU dedicated to the GPUs, could work temporarily? These “Add2Psu” boards allow you to use multiple PSUs in a system, although this is getting janky again.
