GPU Wars: nVidia's sneaky play with the GTX 780 GHz Edition?!

Some prior blog entries to bring things into context, in case you want some background:

http://teksyndicate.com/users/rsilverblood/blog/2013/10/12/amds-strategy-mantle-trueaudio-steamos-and-more

(Something someone else wrote) http://teksyndicate.com/users/commissar/blog/2013/10/18/so-nvidia-780ti

http://teksyndicate.com/users/rsilverblood/blog/2013/10/21/gpu-wars-episode-v-nvidia-strikes-back

And some articles about recent GPU news from nVidia:

http://videocardz.com/47388/nvidia-drops-prices-geforce-gtx-780-gtx-770-graphics-cards

____

http://videocardz.com/47420/nvidia-updates-geforce-gtx-780-ghz-edition

Is someone trolling us? A GHz Edition is a play straight out of the AMD playbook, pulled with the HD 7970 right after the GTX 680 launched. Will nVidia try the same tactic on AMD now, for some corporate-style "poetic justice"?

The idea is intriguing, although there are some problems... what about the GTX 780 Ti? And where does the GTX 780 GHz fit into this?

Well, here's a quick price chart from the rumors so far:

AMD:

R9 290X -- $550 USD

R9 290 Non-X -- $450 USD (good news for this card, bad news for release date: www.techpowerup.com/193497/radeon-r9-290-non-x-launch-pushed-back-a-week.html )

R9 280X -- $300 USD (it's getting a new revision: www.techpowerup.com/193366/amd-to-release-radeon-r9-280x-revision-this-late-november.html for lower heat output, which means higher overclocks and/or lower noise - let's hope partners like GIGABYTE, MSI and ASUS make a v2 or Rev2.0 to make it explicit that it includes the new GPU, a definite marketing point!)

R9 270X -- $200 USD

R7 260X -- $140 USD

nVidia:

GTX TITAN -- $1000 USD

GTX 780 GHz / GTX 780 Ti (who knows the name?!) -- $700 USD (the amount of VRAM for this card is currently unknown, though I expect 6GB if the price is going to be $700; at $600 it might only have 3GB)

GTX 780 Non-Ti / GTX 780 Non-GHz (who bloody cares anymore?) -- $500 USD

GTX 770 2GB -- $329 USD (4GB price unknown?)

GTX 760 2GB -- $250 USD // 4GB -- $300 USD (price cut expected, due to price proximity with GTX 770)

GTX 660 Non-Ti 2GB -- $180 USD

GTX 650 Ti Boost 2GB -- $150 USD

GTX 650 Ti Boost 1GB -- $130 USD

_________

These include the price cuts most recently announced.

The GTX 760 right now doesn't have much competition. The R9 280X is priced too high to compete with it directly, and the R9 270X can't compete very well against it in most scenarios.

This means right now the GTX 770 is going head-to-head with the R9 280X cards, including the updated revision. It seems a no-brainer to go with the R9 280X with the revised GPU and an aftermarket cooler (supposedly coming in late November, or mid-January at the very latest). This is where the battle will be fought.

The GTX 780 at $500 is going to compete with the R9 290X and R9 290 Non-X. This will be a heated battle, be sure of that.

The GTX 660 Non-Ti is going head-to-head against the R9 270X, and it seems the 270X has everything it needs to win, no contest.

The GTX TITAN seems to be positioned as a sort of hybrid card for compute, rendering, video editing and so forth. It's more of a professional-grade card with gaming features, which allows it to sit somewhere in price between a Quadro and a GTX 780, even if its gaming performance doesn't justify the premium over a GTX 780. AMD has nothing to compete against the GTX TITAN in the hybrid space, though it can more than match it in the gaming performance arena.

Thus, here's what we're left with:

GTX 650 Ti Boost 2GB versus R7 260X = (I'm sorry, as a hardware enthusiast those cards don't interest me enough to check their benchmarks.)

GTX 660 Non-Ti versus R9 270X = 1 point for Gryffindor... *ahem* I mean, AMD!

GTX 760 versus... seems the opponent hasn't shown up. AMD forfeits! = 1 point for Slyth... I mean nVidia! (*ahem*, seems I have a slight case of distraction today)

GTX 770 versus R9 280X (revised) = 1 point for AMD! *Mortal Kombat announcer tells AMD: finish it!*

GTX 780 versus R9 290 Non-X = Unknown (for now)... *Include dramatic music to give you the feeling of suspense...*

GTX 780 versus R9 290X = Unknown (for now)... *Even more dramatic music... include weird eyebrow movements you'd find bad soap opera actors doing whilst looking at the camera.*

GTX 780 GHz/Ti versus R9 290X = More unknown... *Insert the most dramatic picture of a kitten hanging on a tree branch because of its negligent owner, for added suspense...*

This leaves us with... 2 points for AMD, 1 point for nVidia. With gaming bundles, this gets even sweeter. nVidia will soon offer gaming bundles as well. ( http://www.anandtech.com/show/7430/nvidia-announces-holiday-geforce-game-bundles )

AMD has the Never Settle Forever bundle thing going on, which is pretty darn cool... but the R9 and R7 series of cards need some bundles as well, and AMD hasn't yet added them. I guess they're waiting until the R9 290 Non-X is released and custom coolers are put on their GPUs. I guess they want to catch the Winter Holiday (Christmas, Hanukkah, Kwanzaa, winter solstice, etc.) gift-giving fever everyone gets caught up in when we reach the dreaded *insert dramatic drum beat* Black Friday... (not to mention the Winter Steam Sale, which will happen...)

(If you've just felt something tremble in your pocket, that was your wallet. If you just heard screaming in the distance, it's because your bank account just found out about the next Steam Sale. If you heard somebody banging their head against the wall or tables being flipped, it's because you're probably the only person on the forum who hasn't heard of the Steam Sales yet.)

____

Have fun, guys! Cheers! (I included links this time. Yay for me and whatnot. Anyone like this blog/thingie/whatever?)

"AMD has nothing to compete against the GTX TITAN in the hybrid space, though it can more than match it in the gaming performance arena."

Not true. Even a cheap AMD card outperforms the Titan several times over in OpenCL compute performance, and nVidia has no heterogeneous or hybrid computing solution at all; there are two independent projects that have reached a dead end, with no further development whatsoever. AMD APP / AMD Fusion, a.k.a. AMD's Heterogeneous Computing Architecture, has been a thing since 2007, and AMD leads an open-source, non-profit industry special interest group with Qualcomm, Texas Instruments, Samsung and a couple of other major players in the ARM and ASIC realm to make CPU+GP-GPU architectures the new standard for the next generation of PCs. And that's exactly what's happening right now: EA DICE is already using a small part of APP, Mantle, for Frostbite; Crytek is working on integrating AMD APP completely into its extremely demanding next-gen CryEngine, which will run on Linux and thus benefit from AMD APP; and a lot of open-source game engines are being rewritten to use AMD APP.

At the same time, nVidia hasn't been able to make Linux drivers that even work on kernels 3.10/3.11/3.12, and they haven't even shown off their GK118 platform yet, which suggests they can't get it to work properly; otherwise they would have at least shown it off to counter AMD's success.

Even the Intel Xeon Phi doesn't work as it should yet, and that card is only capable of about the same compute performance as a mainstream AMD card for more than 5 times the price.

So AMD is far ahead in terms of GP-GPU and definitely has the edge for next-gen games.

As always, the question remains: will it run Crysis, if the PC version of Crysis is optimized for AMD APP and running on Linux?

Hey Zoltan,

I would really like to read more about what you posted. Would you mind providing some links about all that?

Thank you very much!

Very good Zoltan. +1 for you.

I do think the GTX TITAN is pretty darn powerful. What I meant is that the GTX TITAN has double-precision power that beats what AMD has to offer (except FirePro GPUs) for now: the GTX TITAN runs double precision at one third of its single-precision rate, which puts it in the same class as a K20X. This hybrid segment might be interesting for nVidia, and I don't think AMD plans to trickle that kind of double-precision compute power down to their top-tier gaming GPUs, because it might otherwise harm their enterprise and/or supercomputer GPU sales.
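For a rough sense of scale, here's a quick back-of-the-envelope calculation. The core counts and base clocks are the public spec-sheet numbers as I remember them, so treat the exact figures as approximate:

    # Rough peak-FLOPS estimate for GK110 cards: 2 FLOPs per core per cycle (FMA),
    # with double precision at 1/3 of the single-precision rate when full-speed DP is enabled.
    def gk110_tflops(cores, mhz):
        sp = 2 * cores * mhz * 1e6 / 1e12   # single precision, TFLOPS
        dp = sp / 3.0                       # double precision, TFLOPS
        return round(sp, 2), round(dp, 2)

    # Assumed specs (from memory): both cards are GK110 with 2688 CUDA cores.
    print("GTX TITAN :", gk110_tflops(2688, 837))  # roughly (4.5, 1.5)
    print("Tesla K20X:", gk110_tflops(2688, 732))  # roughly (3.94, 1.31)

In other words, the Titan's double precision sits in the same league as the Tesla cards, which is what makes the hybrid positioning work.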

I agree with everything else you've said. It seems awesome, this new future for AMD. I just wish they were more profitable, and that they could change the PC architecture by making Heterogeneous Architecture a new industry standard that's almost everywhere (like 64-bit CPUs these days).

Meh, the GTX TITAN is a marketing card. It was released to make the GTX 780 look cheap (the 780 was far from cheap when it was released, and more expensive than any gaming card before it). No one with any sense went out and bought a Titan over a 780 for gaming.

I posted them here and there over the course of time on the forum, but here are a couple of them:

1. The freely available AMD APP SDK, licensed under AL2.0 (open source, but with the possibility to build closed-source software on top of the open-sourced code, which is necessary for game engine devs, for instance), can be found here: http://developer.amd.com/tools-and-sdks/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/

2. Some demos can be found here: http://developer.amd.com/tools-and-sdks/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/samples-demos/. There are a lot more applications that already work, but AMD focuses on H.264 because nVidia markets its cards with that technology built in (the streaming function, streaming to nVidia Shield), whereas AMD APP acceleration of H.264 destroys the fixed-function hardware H.264 encoding in nVidia cards, can stream to whatever device, and is not locked in to certain monitors, TVs, ARM devices or web services; it can be used in any application that requires fast H.264 encoding. There is a video on YouTube by a Russian gamer playing CrimeCraft while encoding H.264 for streaming at a constant 60 fps with an AMD graphics card using APP acceleration, which shows the huge potential and performance of this system: https://www.youtube.com/watch?v=-G3Y45XR5xY. He also plays his games in a Windows 8 virtual machine on Linux for maximum performance, using VMware ESX. So basically, the Windows container with direct hardware access performs faster than bare-metal Windows, giving the game a very high frame rate (never under 60 fps), while the Linux base system uses AMD APP to accelerate H.264 encoding and records the stream at 60 fps. AMD APP offers only limited support for Windows, but in this case it doesn't matter, because the Linux base system and the Windows virtual machine both have direct access to all of the system resources thanks to hardware virtualization.

3. Some open-source projects AMD participates in to provide OpenCL/GP-GPU acceleration can be found here: http://developer.amd.com/tools-and-sdks/open-source/. This is a bit of an older list; I've seen multiple projects jump on board that aren't on it, but notable entries include, among others, the Java and Python acceleration technologies and, for instance, the Firefox browser acceleration functions using AMD APUs and GPUs.

4. The HSA Foundation can be found here: http://hsafoundation.com/. All the leading ARM manufacturers (Qualcomm, Texas Instruments, Samsung), and ARM itself, are members of this foundation. AMD's focus on tools to convert Java to OpenCL is easily explained by the scope of this foundation: ARM devices mainly run Java on Linux. AMD is probably not going to stop at linking the CPU to the GPU, but will integrate devices further into some kind of "computing cluster", whereby ARM devices can offload heavy workloads to the CPU+GP-GPU PC, and PCs can offload certain tasks to ARM devices, graphics functions for instance: the PC runs a game at very high performance using CPU+GP-GPU, but offloads the actual task of rendering the pixels to an ARM device that is connected to a TV or is handheld. Sadly, even though this is all open-source technology, the main contributors are industry giants, and there isn't a lot of info available on how far this has developed already.

5. This is the research that the implementation of AMD's HSA started with, now more than a year ago, when the performance increase was still limited to about 20% (which is huge already), without overclocking and running on AMD APUs, so with fewer compute cores, lower GPU clock speeds and no dedicated GPU workload buffer memory, all of which inhibits performance in comparison to discrete CPU+GP-GPU systems: http://news.ncsu.edu/releases/wmszhougpucpu/. In the meantime, average performance increases have reached about the 40% level for mainstream computing, and in some games and compute applications way more than that. There are new reports daily from open-source projects, devs and researchers on this; just look for the kind of application you'd like to see the current state of research for.

6. As to compute performance, I posted some benchmarks a few days ago (I think it was in a 3D rendering thread) that were really recent, comparing compute performance between nVidia and AMD cards, where the most expensive (1800 USD) nVidia card was barely performing better than the cheapest (400 USD) AMD card, and where the cheapest AMD card was performing 5-6 times faster than the nVidia Titan. Add to that that nVidia cards are limited in GP-GPU applications to CUDA, which has less and less support and is not a great performer, and that nVidia GPUs typically have a narrower bus width than AMD GPUs, which enormously bottlenecks the cards' ability to do parallel processing, and it's clear that AMD GP-GPU cards are just a much better performing platform for the moment, and they're cheaper.
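If you want to try OpenCL yourself, here's a minimal sketch in Python using the pyopencl bindings. To be clear, pyopencl is just my example choice, not one of the AMD projects linked above, and the snippet assumes you have an OpenCL driver plus pyopencl and numpy installed. It enumerates whatever OpenCL devices your drivers expose and runs a trivial vector-add kernel; the same code path runs on AMD, nVidia or Intel hardware, which is what makes cross-vendor compute comparisons possible in the first place.

    # Minimal, vendor-neutral OpenCL example (a sketch, not a benchmark).
    import numpy as np
    import pyopencl as cl

    # List every OpenCL platform/device the installed drivers expose.
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print(platform.name, "|", device.name,
                  "| compute units:", device.max_compute_units,
                  "| clock:", device.max_clock_frequency, "MHz")

    # Trivial kernel: add two float vectors on whichever device gets picked.
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1000000).astype(np.float32)
    b = np.random.rand(1000000).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    program = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out)
    {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    print("max error:", np.max(np.abs(result - (a + b))))

Nothing in there is vendor-specific, so it's also a handy way to check what your drivers actually expose before trusting anyone's benchmark numbers.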

I don't get why the Titan just keeps floating around at that price.

Most discussions I have about that always end up with 'yeah, but the Titan is a compute card'.

Do they even know the 7970 has better OpenCL performance than the Titan?

I think AMD decided, when they cleaned house on their top floor, to adhere to other definitions of "profitability" than the amount of dividend paid on shares. I think they got a lot wiser over the years. They are often underestimated, but they were the first to crack several clock speed limits, the first to bring a 64-bit CPU to market, and the first to bring multi-core CPUs to market, and they've always had pretty modest marketing compared to Intel and nVidia, a bit like IBM always has pretty modest marketing.

In the meantime, they are in cahoots with Texas Instruments, Qualcomm, ARM and Samsung, whereas Intel is in cahoots with Samsung for the development of Tizen, which however is owned and controlled by the Linux Foundation. AMD actually sells Intel chips in several of its SeaMicro products, and the distance between AMD and Intel might just be smaller than they make the market believe. I think they're actually dividing the market between them: Intel focuses on Windows-only consumer hardware that is very low power, with the new Atom range (basically a very low-power, very cheap-to-produce APU, soldered onto a mobo with a bootloader lock that prevents it from running anything but Windows 8.1, a lock that even flashing the BIOS didn't crack, sold at a very premium price with a lot of profit margin), while also focusing on larger enterprise systems and direct competition with IBM. AMD focuses on everything in between, taking on the more difficult markets, which it can do because it can compete on price: it doesn't own its fabs, it has far fewer SKUs since its products are very similar, and it is moving towards APU systems that are easily expandable with GP-GPUs for modular performance scaling across a wide range of budgets.

I also think that Intel, which has been lacking proper iGPU integration for a long time now (the iGPUs in their CPUs don't support OpenCL, and the Xeon Phi isn't showing much progress in terms of actual implementation), is letting AMD break open the next-gen PC market, is going to wait until that has destroyed nVidia's share price, and is then going to try to buy nVidia for cheap. It's not a secret that a year ago nVidia wanted to sell out to Intel, but Intel wasn't interested. My guess is that Intel and AMD have set up alliances with Asian industry giants like Samsung to prevent nVidia from selling out at a high share price to the East, so that they can get their hands on it for cheap later on.

nVidia still wants to sell out, or they wouldn't have stopped developing working Linux drivers, which was one of their main prides until they tried to sell out to Intel in mid-2012, and they would have had a working GK118-based product ready for demo to counter AMD's huge technological advance in their main market, where, according to analysts, AMD will have conquered over 40% of the market by the end of this year. This can only be explained by a strategy at nVidia of doing brand marketing to keep the share price up, at the expense of R&D; they're cannibalizing their material assets to maximize their existing IP in the hope of getting a premium value for their shares, but they're wrong in their assumption that the industry can't wait until their bubble bursts.

All this is my personal analysis of course, and it's pretty tentative.

The 7970 has better compute performance than the Tesla compute cards, lol, but they've known that for a very long time; they started censoring and locking threads that discuss it on their own devtalk forum a year ago:

e.g. https://devtalk.nvidia.com/default/topic/519011/cuda-performance-vs-opencl-performance/

They're just not doing anything about it.

Don't forget nVidia has crippled compute performance on the 6xx (and maybe the 7xx) series just so that the Titan and Tesla cards look good.

AMD also fails at segmenting their CPUs by clock speed; it doesn't work because, unlike Intel, they don't lock their lower-tier chips.

Also, the reference 290 has the same cooler as the reference 290X... Expect no good overclocks!

+1 for you, Zoltan. You know your stuff. =) I'm impressed.

If you could use more paragraphs to separate that reply so it's more comfortable to read, and include links to your references, that would be epic! =)

I'm also surprised you have such a good memory of AMD's history and of computing history in general. I've found it hard to find others who remember PC history that well. =)

+1 for you, Zoltan. (That brings the number of +1s for Zoltan to a grand total of... OVER NINE thirds... =P )

Sorry mate, I type while I'm on the phone most of the time, mostly in another language, so it just kinda flows out of my hands without much structure or thinking about it. I'll try to format everything some, but I can't really allocate more brain resources to it without losing focus on my telco lol.

Yeah, but that's just traditional AMD strategy. nVidia locks down the OEM manufacturers in terms of specs (they can only overclock so much or use so much power, etc.), but AMD leaves everything to the OEMs and doesn't limit any specs. Their reference designs are traditionally pretty boring, but cards like the Asus Matrix series more than make up for that. It's logical for AMD to do this: they release products with a focus on value for money, and if OEMs want to make premium products with their hardware, that's their responsibility, and they get to keep the premium profit from those too. That's just the way AMD rolls.

Fucking Zoltan for president! Love reading your posts, buddy.

No problem. If you can get on a desktop to format some of your replies or edit them, and maybe include links, that would be epic. I'd love to see you write some opinion articles on the Tek as well. I'm sure others would like that too.

You're very knowledgeable, and I'd love to read your opinions about the GPU Wars, Intel vs AMD, Samsung vs Apple, PC vs Console, and more. =)

Thank you, that is really helpful!

+1 for you ;-)

(how do you manage to write that much text on your smartphone? I already get annoyed after one sentence...)

Couldn't agree more. We seldom see someone sharing this much wisdom with us (and being able to back it up with links as well).

+1 for rsilverblood as well. You guys are just awesome

I'm typing it on my PC, but while I'm on the phone talking to people for work, I have to concentrate on my conversation; that's what I meant.

I don't type that often on my smartphone. What I meant was that I'm typing while I'm having a work conversation on the phone, mostly in languages other than English, and I have to concentrate mainly on that, so I can only allocate very little focus to what I type and how I type it. That's why I don't format it properly and make mile-long sentences with a lot of antecedents and stuff: basically raw data from partial memory dumps.

I'll try to write in a more structured way and in better English. I know it's a problem; I'm just a waterfall of words. I'll try to remember next time I type, but I know myself, so don't get your hopes up too high. I'm still a man: nice burst performance but limited multitasking abilities and all that crap, lol...