AMD's Future above 1080p?

http://www.pcworld.com/article/2600307/amd-reminds-the-world-that-it-has-8-core-cpus-too.html

AMD is focusing on two primary applications for its new CPUs: content creation and gaming beyond 1080p resolution. But it’s also targeting budget-priced systems, describing a theoretical “enthusiast gaming PC” powered by an FX-9590 CPU, a video card with a Radeon R9 290X graphics processor, and 16GB of DDR3/2133 memory that would sell for $1499. AMD says a system builder could sell a “performance gaming PC” with an FX-8370 CPU, a video card with a Radeon R9 285 GPU, and 16GB of DDR3/1866 memory for just $1099.


AMD has had more experience with 8-core chips in the consumer market, considering their octa-cores have sold for around $200 since 2011, when their first line of consumer octas launched. My question: should my 8320, the one I've had for about two years, still be fine for 4K say two years from now? Would you really see any performance degradation between a current AMD chip and one from two years out? Even against Intel? I know every new Intel generation seems to bring a leap; with AMD that doesn't happen anymore. The new octas are practically the same as when I bought my 8320 nearly two Christmases ago.

The only competing CPUs Intel offers at these prices are quad-cores and duals (the i5 and i3; the i7 sits in a different bracket, so the FX series isn't really competing with it). Taking that into account, you don't really need to upgrade your CPU for something like ten years, say 2011 through 2020. All that's needed is maybe a future RAM upgrade, but mostly GPU upgrades; the GPU is really the only bottleneck anymore in anything. All I'd have to do is get a new mobo that supports AM3+ and PCIe 3.0.

So two years from now, which would be better: an i5 purchased today or an FX-83xx purchased today? I would say the FX, for the amount of L2 and L3 cache. What are your thoughts? AMD basically future-proofed their CPUs for this decade (from what I can tell); they aren't the fastest, but they can get more done in heavily multithreaded apps. Once games support more than four cores, it will be interesting to see. I remember Logan doing a video about how people were trashing AMD, and how Windows was supposedly scheduling threads in a way that made Intel run better than AMD; benchmarks reflected it, gaining Intel more fans. Logan got a hotfix emailed to him and said the difference was noticeable, putting the chip on par with whatever i5/i7 he compared it to at the time. I think it was an i7, but I don't remember for sure. That video is what sold me on the 8320 in the first place.

Feel free to disagree, but the problem, as I see it at the simplest level, is that only a single core is ever used to feed the main render thread that tells the graphics card what to do. You can have games using multiple cores, but that is really to keep other tasks (AI, loading new textures from disk, networking, etc.) away from the main render thread. This is why most games don't scale much beyond four cores.
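
To make that concrete, here's a minimal sketch of the threading model I'm describing (assumed structure, not code from any real engine): worker threads handle AI and asset streaming, but only one thread ever issues draw calls, so its single-core speed caps the frame rate no matter how many cores you add.

```cpp
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

std::atomic<bool> running{true};

void aiWorker() {
    while (running) {
        // Path-finding, NPC decisions, etc. run here, off the render thread.
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
}

void streamWorker() {
    while (running) {
        // Loading new textures from disk happens here.
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
}

int main() {
    std::vector<std::thread> workers;
    workers.emplace_back(aiWorker);      // extra cores keep these tasks...
    workers.emplace_back(streamWorker);  // ...out of the render thread's way

    for (int frame = 0; frame < 600; ++frame) {
        // The main render loop: every draw call is issued from this one
        // thread, so more cores do not make this loop run any faster.
        // (A hypothetical submitDrawCalls() would stand in for the
        // D3D/OpenGL work here.)
    }

    running = false;
    for (auto& w : workers) w.join();
}
```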

I used to own an FX-8120 and an FX-8320 overclocked to around 4.3 and 4.4 GHz respectively, but replaced them with an i5-3570 and an i7-2600, because the two games/simulations I mess around with a lot are FSX and X-Plane 10. I could play with CPU affinity settings and tune FSX to use 8 cores (AI aircraft, loading new textures, carrying out the maths-intensive tasks that the terrible core engine didn't send to the GPU, etc.), but the thing is, even an i5-4570 that can't be overclocked and only has 4 logical CPUs, but much better IPC, yields vastly superior FPS to an overclocked FX-8xxx.
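
For anyone who hasn't messed with affinity before, here's a rough sketch of what that tuning boils down to under the hood, using the Win32 SetProcessAffinityMask call (in practice I just used Task Manager). The 0xFE mask is my assumption for an 8-core FX: bit n enables logical CPU n, so clearing bit 0 keeps core 0 free for something else.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    DWORD_PTR mask = 0xFE;  // CPUs 1-7; purely illustrative, tune per machine
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Process restricted to logical CPUs 1-7\n");
    return 0;
}
```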

Whilst game engines are much better now and can offload so much more to the GPU, it still remains that only a single core will be used to feed the main rendering task; ergo, as graphics cards get faster, the IPC of the CPU feeding them will need to improve too. Of course I expect things will change in future, such as some work being offloaded to cloud servers, and things like Mantle will get better; but I doubt the FX-8xxx has much life left once PC games move much beyond single-screen 1080p and the entry-level graphics cards of tomorrow have as much power as a GTX 970.


I think you should look into this technology. AMD has found it very interesting, and I hope I see it implemented so that we can get more performance from CPUs.

http://wccftech.com/amd-invest-cpu-ipc-visc-soft-machines/

That sounds really cool; there must be some real challenges for them to solve to get that working for most use cases.

When working with database workloads, we often break a single query into multiple threads to run it across multiple CPU cores. This doesn't always benefit query performance, though, as breaking it apart and re-combining it has a cost. It can also introduce waits, since some parts of the workload will complete before others and then have to effectively pause until the rest catch up. At a deeper level, the CPU cores also have to be synchronized so the schedulers know where to send the next workload; on any OS, the more CPU cores and NUMA nodes you have, the more expensive this process becomes. It will be interesting to see how many cores and sockets this technology can scale to.
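
To show the split-then-recombine cost I mean, here's a toy sketch (assumed workload, nothing like a real database engine): a "query" is partitioned across cores with std::async, and the final result has to block until the slowest partition finishes.

```cpp
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// One partition of the "query": sum a slice of the table.
long long sumPartition(const std::vector<int>& data, size_t begin, size_t end) {
    return std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
}

int main() {
    std::vector<int> table(1000000, 1);  // stand-in for a table scan
    const size_t parts = 4;              // one worker per core
    const size_t chunk = table.size() / parts;

    // Split: launch one async task per partition.
    std::vector<std::future<long long>> futures;
    for (size_t p = 0; p < parts; ++p) {
        size_t end = (p + 1 == parts) ? table.size() : (p + 1) * chunk;
        futures.push_back(std::async(std::launch::async, sumPartition,
                                     std::cref(table), p * chunk, end));
    }

    // Recombine: each get() blocks until that partition is done, so the
    // total waits on the slowest worker - this is the cost described above.
    long long total = 0;
    for (auto& f : futures) total += f.get();
    std::printf("total = %lld\n", total);
}
```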

Thanks for the link; I'm keen to see how that technology develops :-)


Isn't AMD's near future HBM? They're going to use it in the R9 300 series and in their next APUs. Their next GPUs are definitely being designed for gaming above 1080p. And they get a whole year of that tech all to themselves, because Nvidia bet on the wrong horse and is now late to the HBM party.

Ironically, their next APU still relies on plain DDR3 for its memory.

Not that I'm an AMD fanboy, but I want them to start pumping out better CPUs. The current ones are way too expensive power-wise, and while it's great that they have those "8 cores," they don't use them as efficiently as possible, and it's painful to watch. They do have a future in post-1080p gaming, but it would be very helpful if they came out with a new CPU architecture within the next year or so, because the current generation is getting up there in age with no viable options in sight.

I hope their R9 300s kick the crap out of Nvidia and show there's some life left in that sleeping monster. It has a solid payday for the next 5-10 years with the current-gen consoles; it's time to start seeing some next-gen performance out of their lineup.

APUs are the future; the problem is that consumer-grade boards need two or three more sockets. Imagine a motherboard with expansion slots for APUs, like in the Pentium II days. Each slot would take a chip combining an R9 300-class GPU and an octa-core CPU.

That's not irony, dude.

AMD's future is this. APUs should contain three primary things: a CPU, a GPU, and, I think, RAM. Imagine adding two or three APUs to your system; with one real upgrade you'd have many CPU and GPU cores running in sync. If RAM were added to the APU in the future (5 to 10 years out), you could have 1 to 2 GB of RAM for dedicated graphics support, or more depending on technical advances. On top of that, keep the traditional RAM for CPU allocation and you're set. I imagine only needing 4 GB of CPU-utilized RAM on most setups if this future is plausible.

The problem I see is the current order of things: hardware is created to optimize software. A good example is how we have devices that can only render part of the H.264 spec and expect the CPU to handle the hard parts, when it should be the GPU doing the heavy lifting.

The same stupid expectation was applied to the development of Google's VP8/VP9 and H.265 (aka HEVC). Those codecs may give better quality at a smaller file size, but they are more complex to decode and encode, which is bad in an era where an ever-increasing number of devices run on batteries. That said, the LEDs in a device's screen draw more power than the chipset in most cases.

In games, most of the complexity is in the algorithms for physics and AI, on the CPU side anyway. The GPU side is really just a matter of having the right amount of shaders and memory for rendering an image, and it should more or less solve itself with smaller fab processes and more layers via 3D die stacking, at least until ray tracing is mainstream.

The key to victory here is creating the right instruction sets and math co-processors that can handle them. Ideally, if we get it right, things will at some point be done in real time, so you won't need memory except to store data.

Technology is already pretty close to the human limit as far as sound and picture quality go for real-time gaming: it's more or less 8 megapixels (3768x2120 on a modern 16:9 display) at 250 fps and 2000 kHz of sound. All we need now is to make 3D glasses smaller and more affordable and we'll be more or less at Star Wars-level tech and halfway to a Star Trek holodeck.