I recently purchased an R9 270X for my older system, but it appears to be running unstable. The symptoms are as follows: ~1 hour of gaming > blank screen, sound plays for about 0.5 seconds, the PC 'shuts off' but the fans keep spinning > manual reset > 2-5 minutes of gaming > blank screen, and so on. I am able to replicate this in both Linux and Windows environments.
I had a near-identical issue with my older GTX 295, but I decided to upgrade it anyway. After many years of use I was only able to run it stably at 300 MHz on the core. When I first assembled the PC it ran at normal clocks just fine, but it gradually became worse and I became sad.
Anyway, I think it's due to my PSU. It's a Seasonic SS-650HT. 650 W should be plenty to drive my system, but I noticed it has four +12V rails, each capable of providing 18 A. The card is rated at 24 A peak.
Could it be that the two 6-pin connectors are wired to the same rail and end up overloading the PSU, or maybe the GTX 295 killed it? I could probably RMA the video card, but I don't think it's the card's fault.
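Running the rough numbers (just a sketch; the connector-to-rail wiring here is my guess, since I can't find a published rail map for the SS-650HT):

# Back-of-the-envelope rail check. The connector-to-rail wiring assumed below
# is a guess, not a published spec for the SS-650HT.
RAIL_LIMIT_A = 18            # each of the four +12V rails is rated for 18 A
CARD_PEAK_A  = 24            # quoted peak draw of the 270X (~288 W at 12 V)
SLOT_A       = 75 / 12       # up to ~6.25 A can come through the PCIe slot itself

connector_draw = CARD_PEAK_A - SLOT_A   # what the two 6-pin leads must carry

# If both 6-pin connectors hang off one rail, that rail is nearly maxed out
# before anything else wired to it is counted.
print(f"Both 6-pins on one rail: ~{connector_draw:.1f} A of {RAIL_LIMIT_A} A")

# Split across two rails, each one sits well below the 18 A trip point.
print(f"Split across two rails:  ~{connector_draw / 2:.1f} A of {RAIL_LIMIT_A} A each")

So if the two leads really do share a rail, it wouldn't take much extra load on that rail to trip the over-current protection.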
Well, this is the best PSU I've got. I might find a lower-wattage one somewhere and try to have it power just the graphics card through some Molex adaptors. I think I could borrow one from my dad while he's at work.
Wouldn't it be possible to distribute the power evenly across the rails somehow? I can't find any documentation on how this PSU is wired, so I guess it would involve a bit of guesswork.
Do you have another computer you could test your 270X in, to see if it is faulty?
And to answer your question:
4 x 12V rails (EPS12V style)
Originally implemented in the EPS12V specification.
Because the typical application was a dual-processor machine, two +12V rails went to the CPU cores via the 8-pin CPU power connector.
"Everything else" is typically split between the two other +12V rails. Sometimes the 24-pin's two +12V lines would share a rail with SATA, and Molex would go on the fourth rail.
Not really good for high-end SLI, because a graphics card always has to share a rail with something.
Currently Nvidia will NOT SLI-certify PSUs using this layout, because they now require the PCIe connectors to get their own rail.
In the non-server, enthusiast/gaming market we don't see this anymore. The "mistake" of implementing this layout was only made initially by two or three PSU companies, in PSUs between 600 W and 850 W, and only for about a year.
4 x 12V rails (Most common arrangement for "enthusiast" PC)
A "modified" ATX12V, very much like 3 x 12V rails except the two, four or even six PCIe power connectors are split up across the additional two +12V rails.
If the PSU supports 8-pin PCIe or has three PCIe power connectors on each of the +12V rails, it's not uncommon for their +12V rail to support a good deal more than just 20A.
This is most common in 700W to 1000W power supplies, although for 800W and up power supplies it's not unusual to see +12V ratings greater than 20A per rail.
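To make the "split across rails" idea concrete, here is a toy per-rail budget for a hypothetical layout like the one above. The groupings and load figures are made up for illustration, not taken from any particular unit:

# Toy per-rail budget for a hypothetical 4-rail "enthusiast" layout.
# Groupings and load numbers are illustrative only, not a real unit's wiring.
RAIL_LIMIT_A = 18

rails = {
    "rail1 (CPU 8-pin)":          [("CPU", 10.0)],
    "rail2 (24-pin, SATA/Molex)": [("board", 4.0), ("drives", 2.0)],
    "rail3 (PCIe 6-pin #1)":      [("GPU lead 1", 9.0)],
    "rail4 (PCIe 6-pin #2)":      [("GPU lead 2", 9.0)],
}

for name, loads in rails.items():
    total = sum(amps for _, amps in loads)
    status = "OK" if total <= RAIL_LIMIT_A else "OVER LIMIT"
    print(f"{name}: {total:.1f} A / {RAIL_LIMIT_A} A -> {status}")

The point is simply that once each graphics card lead gets its own rail, no single rail comes anywhere near its trip point.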
I ran Prime95 and FurMark at the same time to get worst-case power usage from my CPU and GPU. The GPU peaked at 68 C while the CPU went all the way up to 99 C and started throttling; it did, however, run stable. After 30 minutes I stopped both tests because I got bored.
I proceeded with some Team Fortress 2. The GPU peaked at 55 C, the CPU at 70 C, and I was able to play about 80 minutes until the PC gave up. Then I quickly launched Skyrim and was only able to play for about 2 minutes. I repeated that, but this time I pulled out a spare fan, had it blow air into the PSU, and opened the side panel. Still almost exactly 2 minutes in Skyrim. I directed the fan at the northbridge and CPU, still the same result, so I disconnected my CD drive and two hard drives and pointed the fan at the chipset. Played 15 minutes of Skyrim and then got bored.
I'll see if this setup is any better and report back with the results.
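Since I can reproduce this on Linux too, I might also log the sensor readings right up to the moment the screen goes blank, with something along these lines (just a sketch reading the hwmon sysfs nodes; which sensors show up, and their names, depend on the board and drivers):

#!/usr/bin/env python3
# Minimal temperature logger sketch for Linux hwmon sysfs nodes.
# Sensor names and paths vary per motherboard/GPU driver; adjust as needed.
import datetime
import glob
import time

def read_temps():
    readings = []
    for path in sorted(glob.glob("/sys/class/hwmon/hwmon*/temp*_input")):
        hwmon_dir = path.rsplit("/", 1)[0]
        try:
            chip = open(hwmon_dir + "/name").read().strip()
            millideg = int(open(path).read().strip())
        except OSError:
            continue  # sensor may be unreadable; skip it
        readings.append(f"{chip}:{millideg / 1000:.1f}C")
    return readings

with open("temps.log", "a") as log:
    while True:
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        log.write(stamp + " " + " ".join(read_temps()) + "\n")
        log.flush()  # keep the last lines even if the machine locks up
        time.sleep(5)

That way the last few lines in temps.log should show whether anything was actually running hot when it died.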
Nope. It still crashes, but this time it took 40 minutes between crashes, so I guess that's an improvement. The fan on the chipset wasn't necessary, it seems; just removing a few hard drives made the difference.
Also, I did notice that the GPU turns off its fans the moment the screen goes blank, though I think that's a feature of the Gigabyte Windforce series, which turns them off when they're not needed.
At this point I really think it's the PSU. Should I pop it open and see what can be done about maintenance? The fan is making a weird scratchy noise, probably the ball bearing, so I might have to replace that.
It's an i5 750. I'm using it with the stock cooler, so that's why. Anyway, it doesn't go anywhere near that high in everyday situations, so I'm not too concerned. It idles at about 40 C and goes up to 70 C in games, which is good enough for me.
I'll reapply some thermal paste later when I pull out the system anyway.