GPU Wars: Episode V, nVidia Strikes Back

Alright, first let me apologize for the cheesy title. I wanted something with the appropriate level of "nerdosity" to it, and I figured a Star Wars reference would suffice. Without further ado, let me begin by explaining what this article is about.

Some of you readers may have caught my previous blog entry about AMD in the GPU Wars here on the Tek. If not, you can check it out here: https://teksyndicate.com/users/rsilverblood/blog/2013/10/12/amds-strategy-mantle-trueaudio-steamos-and-more

So, given nVidia's recent conference, why is the company doing what it's doing now? And what are the financial, technical and strategic reasons behind its choices? Well, that's what I hope to cover.

nVidia demonstrated a couple of interesting technologies. FLEX is a combination of PhysX, APEX and global illumination, with objects interacting with each other dynamically under somewhat realistic in-game physics.

They also demonstrated G-Sync, and announced the GTX 780 Ti. So... what does this all mean? (And if you thought of the Double Rainbow video on YouTube because of my comment, then you should spend an extra hour outdoors every day.)

Well, first let's cover some basics here. nVidia knows its higher-end cards sell well, and that its core consumers are professionals and gamers. nVidia hasn't done well in the ARM environment, and Tegra 4 has been a gigantic flop. So much so that nVidia is licensing CUDA core technology to other companies! That has to be desperation right there; you don't license your core product to your competitors if you aren't at least a little bit desperate.

nVidia knows it probably won't stay in the ARM race (oh my Hawking, was that a pun?!) much longer. It'll likely pull out once Tegra 5, or whatever comes after it, doesn't sell well, simply due to the engineering costs.

nVidia sees some future in Streaming and recording. That's why it invested in ShadowPlay and other streaming technologies.

So nVidia is investing in its gamer audience here. It's doing so by making sure better technologies keep gamers satisfied, and it only has to pour in enough money to stay ahead of AMD; it doesn't have to make AMD irrelevant, because otherwise anti-monopoly laws in the US would split nVidia up like a fresh-baked Supreme pizza. (Hey, it's late and I've worked for six hours straight doing physical labor, and I've biked for 7 miles there and back!)

nVidia figured out something AMD should have figured out a long time ago: more AVERAGE fps doesn't mean a better gaming experience. First, they used Kepler to help design a hardware and software combo that would cause less tearing, stuttering and whatnot, even in multi-GPU configurations - from day one!

nVidia then had engineers smart enough to look at the timing of each frame - when it was rendered and when it was actually displayed - and figured out that the monitor and GPU were not in sync, which caused a lot of smoothness issues for gamers. So nVidia set out to fix that: they created G-Sync.

- Why G-Sync is awesome, and why you shouldn't care

G-Sync is nVidia's way of getting the monitor to stop having a *fixed* refresh rate. From what I can tell, when a game runs in full-screen mode the GPU renders a frame and the monitor displays it immediately. This means the refresh rate is *dynamic*. Because of that, you won't have stuttering from one frame being held on screen for two refresh cycles while the next one only gets a single cycle. That unevenness is noticeable for gamers, especially hardcore gamers.

So G-Sync allows a much more seamless, fluid experience. A good way of explaining this is to think of throwing cards to the beat of a song. If you can't get a card out on a certain beat, you skip it and throw one when the next beat hits. A friend watching you, expecting one card per beat, would be a bit surprised. For a gamer, it's the same thing between him and his monitor.

G-Sync would be like throwing the cards as fast as you can - a poker dealer handing out cards at his own pace rather than to the beat of a song. Much more fluid and natural, right? That's the whole idea.
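To make the card-throwing analogy a bit more concrete, here's a minimal toy sketch in Python. The frame times and the simple "wait for the next tick" model are my own made-up illustration, not anything nVidia has published:

```python
import math

REFRESH_MS = 1000 / 60  # one 60 Hz refresh cycle, roughly 16.67 ms

# Made-up GPU render times for five consecutive frames, in milliseconds.
frame_times = [14.0, 18.0, 21.0, 15.0, 16.0]

def fixed_refresh(times, refresh=REFRESH_MS):
    """Fixed refresh (classic V-Sync): a finished frame waits for the next
    refresh tick, so display times snap to multiples of ~16.67 ms."""
    shown, clock = [], 0.0
    for t in times:
        clock += t  # the frame finishes rendering here...
        shown.append(round(math.ceil(clock / refresh) * refresh, 1))  # ...but waits for a tick
    return shown

def dynamic_refresh(times):
    """Dynamic refresh (the G-Sync idea): the monitor draws each frame
    the moment the GPU hands it over."""
    shown, clock = [], 0.0
    for t in times:
        clock += t
        shown.append(round(clock, 1))
    return shown

print("Fixed refresh  :", fixed_refresh(frame_times))    # [16.7, 33.3, 66.7, 83.3, 100.0]
print("Dynamic refresh:", dynamic_refresh(frame_times))  # [14.0, 32.0, 53.0, 68.0, 84.0]
```

In the fixed-refresh output, one frame ends up sitting on screen for two full cycles (the jump from 33.3 ms to 66.7 ms), which is exactly the kind of hiccup a dynamic refresh avoids.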

It will allow games to be much more immersive. Unfortunately, it only works with TN panel displays for now, but IPS support might come eventually.

But why shouldn't gamers care? Because the chip that does this costs over 100$, even if it's a DIY upgrade kit for your ASUS VG248QE monitor! It's wayyy out of the reach of your ordinary gamer. Who would drop an extra 100$ on a monitor for a single feature? It sounds pricey, it is pricey, and it'll stay out of the reach of most gamers until the price drops considerably. Right now, it's a gimmick and a marketing/hype ploy to lure those who hear too much and know too little. It won't be mainstream, few will be able to afford it, and really it'll be obsolete soon anyways.

Why would G-Sync become obsolete? Because nVidia has entered the world of monitors before with technologies such as 3D Vision and 3D Vision 2, and it didn't accomplish much. That's because 3D became part of the HDMI and DisplayPort standards, and also because Windows 8 includes 3D support natively without the need for any special drivers. So in essence, that made 3D Vision obsolete (except for the monitors and glasses; but if that's the case, then nVidia GPUs aren't needed, and why does nVidia need to be there at all anymore?)...

So don't worry about G-Sync. It'll be coming to VESA, HDMI, DisplayPort, Windows and other products as an included/embedded standard. And don't be surprised if Linux is one of the first places to have it working without bugs, due to the sheer number of bug-hunters in the open-source community and the interest/enthusiasm Linux fans have for their software (you know who you are).

** EDIT ** : LinusTechTips just put out a video about John Carmack talking about G-Sync. And it's awesome!

http://youtu.be/gbW9IwVGpX8

As John Carmack himself said, "At 60fps each frame is rendered every 16ms. But now with G-Sync, it can take 17ms and nobody will be able to tell the difference." This means fixed-length frame times are no longer needed. You don't have to wait two monitor refresh cycles to show a new frame, and there's no need to display an incomplete frame. The monitor simply waits for the GPU to finish a new frame, and then puts it out.

It's basically the idea of "put the frame on the monitor when the *frame* is ready, not when the monitor is ready". This means no tearing, stuttering, or anything else. It also gives AMD a huge opportunity, if they can develop something similar or simply wait until VESA, HDMI, DisplayPort and/or Windows (and maybe Linux) come out with an equivalent.
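Using the same toy "wait for the next tick" model from the sketch above, Carmack's 16 ms vs. 17 ms example works out like this (my arithmetic, not his):

```python
import math

REFRESH_MS = 1000 / 60  # ~16.67 ms per tick at 60 Hz

def display_time(render_ms):
    """When a single frame actually appears, under a fixed vs. a dynamic refresh."""
    fixed = round(math.ceil(render_ms / REFRESH_MS) * REFRESH_MS, 2)  # wait for the next tick
    return fixed, render_ms  # (fixed refresh, dynamic refresh)

print(display_time(16.0))  # (16.67, 16.0) -> both feel like 60 fps
print(display_time(17.0))  # (33.33, 17.0) -> fixed refresh holds the frame until the *second* tick
```

That one extra millisecond costs you a whole refresh cycle on a fixed-refresh monitor, but only one millisecond on a dynamic one.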

** EDIT (2) : Return of the Edit ** : If you check TechPowerUp's article on the adoption of G-Sync, you'll see ASUS has already done something towards that end.

http://www.techpowerup.com/192855/asus-announces-adoption-of-nvidia-g-sync-technology.html

But it'll cost 119$ more than the standard VG248QE. And the module for upgrading your existing VG248QE yourself is rumored to be around 129$. Although these technologies are typically expensive at first, I sure hope nVidia doesn't pull a blunder by keeping everybody else from using it.

For nVidia, licensing this technology makes MUCH more sense. They can charge AMD for using this in all of AMD's GPUs whilst not charging themselves for it (competitive advantage much?), and they also get a cut out of smartphones, displays, TVs, monitors, etc. Even if they charge only 2$ per device and 2$ per display (increasing the cost of the equipment by 4$ total to the OEM before sale), they'll be getting a huge chunk of the market. It's like the royalties for HDMI, except this will probably not be as easily replaced. The idea of driving monitors with *dynamic refresh rates* using a single, simple mathematical formula is really smart. If code can be patented or copyrighted, then it's logical to assume that using a simple formula to calculate G-Sync refresh rates on monitors would be a very good way to keep others from offering such technology without a license, ensuring nVidia can rake in the cash for years on end.
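As a rough back-of-envelope of what that 2$ + 2$ royalty could add up to, here's a sketch. The per-unit fee comes from the paragraph above; the annual shipment figures are purely my own placeholder guesses, not real market data:

```python
# Hypothetical royalty: 2$ on the GPU/device side + 2$ on the display side = 4$ per unit sold.
ROYALTY_PER_UNIT = 2.0 + 2.0

# Placeholder annual shipment guesses, for illustration only -- not actual figures.
assumed_units_per_year = {
    "PC monitors":         130_000_000,
    "TVs":                 200_000_000,
    "smartphones/tablets": 1_000_000_000,
}

for market, units in assumed_units_per_year.items():
    revenue = units * ROYALTY_PER_UNIT
    print(f"{market:<20} ~${revenue / 1e9:.1f}B per year at full adoption")
```

Even with adoption nowhere near 100%, a few dollars per unit across markets that big is serious recurring money, which is exactly why locking this down as a licensed standard would be so attractive to nVidia.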

If licensing costs do trickle down, then it'll be HUGE. If it stays at 50$ or more per monitor, it'll be irrelevant.

Also worth noting: nVidia should be upfront that this only works with TN panels. Although fluidity is good for FPS gamers, the immersion-focused gamers benefit the most from this in PC gaming, whilst smartphone/tablet users would benefit in all areas, and cinema fans would benefit the most from this technology in TVs. Unfortunately, it's exactly on the high-fidelity displays such as IPS, PLS, AH-IPS and IGZO (Sharp's 4K panels) that nVidia hasn't got this working. (nVidia would have to figure out how to get IGZO panels to work with this before it can launch a G-Sync enabled 4K device... that may not have been said explicitly in Linus's video, but it's definitely implied.)

Since most monitors that are sold cost between 100$ and 250$, increasing a monitor's cost by 50$ would only appeal to high-budget gamers. In that case, we're looking at 4K panels, 1440p panels, and so forth. For this to really be a big selling point for the mass market, nVidia needs to drop the price by a lot. I'd say an extra 30~45$ per PC monitor is reasonable at launch. I'd pay that much more for that feature, even if I did have to use an nVidia GPU (and even though I am planning to buy an AMD GPU soon - oh dear R9 290 Non-X, where hast thou been all my life?). But I definitely would not pay an extra 130$ or more for that feature, even if it was on ASUS's own PB278Q 1440p monitor! I need a more compelling price, because I earn minimum wage right now, and I can't afford that much more for such a feature just yet.

I think nVidia really needs to rethink its pricing. 399$ for an ASUS monitor with a chip and a small PCB with some connectors on it sounds awfully expensive at launch.

- GTX 780 Ti: A new segment?

This one comes from some of my own comments on the GTX 780 Ti thread in the blogs.

I said the GTX 780 Ti would probably be a GTX TITAN with more unlocked CUDA cores, a higher base clock (but lower boost clock), the same double-precision floating-point performance as the GTX 780 (maybe a bit more, but not much), and that would be about it. I also mentioned that, to keep GTX TITAN customers happy, they might make it more powerful for games but limit it to 3-Way SLI (since 4-Way SLI is only useful in benchmarks and whatnot anyways, because in gaming it creates way too much lag).

I mentioned I doubted it would be running Maxwell, since 20nm isn't out yet for mass production. And it would be very surprising if nVidia adopted Intel's Tick-Tock strategy of launching a new architecture and die shrink in separate generations, rather than in the same one.

nVidia seems to be entering a "four segment" sales strategy. GTX 700-series for gamers. GTX TITAN series for moderate professionals who are also hardcore gamers (think of gaming and rendering on the same GPU). Quadro for professional video editors and whatnot. And Tesla for supercomputer or computing applications.

This means nVidia might launch the GTX 780 Ti as part of its "pure gamer" segment, whilst positioning the GTX TITAN as a hybrid SKU between the GTX 780 and the Kepler-based Quadro cards. That would make some sense, but the GTX branding might have to go in the long run.

I think nVidia has to make something faster than the GTX TITAN because it needs to compete with the R9 290X. So I'm guessing they'll make something faster, but with fewer compute and SLI capabilities. This way, it doesn't seem like that much of a stab in the back of GTX TITAN buyers (who, let's face it, already felt betrayed when the GTX 780 came out and was only 3%~4% slower than the GTX TITAN for only 65% of the cost!), since the TITAN would keep many of its advantages. Basically, it's a GTX 780 that is faster than the GTX TITAN, but only in gaming. It makes much more sense.

nVidia could design a new chip, but that would be reckless. nVidia doesn't have to pour in massive amounts of money to design a whole new chip, considering 20nm should be here in 9 months or so (Q3 2014 is when 20nm mass production should be ready). Right now, it seems they just want to keep AMD on its toes. It makes much more sense, economically, to recycle the GK110. It has 2880 potential cores to be used, after all, and it still has a lot of unused potential waiting to be tapped into.

Also, consider this: they'd have to design a new chip to be used only for a short while, they'd have low yields for something like the rumored GK180 at first, and they'd have to test it, optimize the drivers, validate it, etc. It makes much more sense not to engineer a new solution, but instead to recycle your old stuff and get more performance out of it. (It worked for Google Fiber, didn't it?)

** EDIT (3) : Another Sequel to Yet Another Edit (*sigh*) ** : Seems the GTX 780 Ti specifications are in, according to Hexus.net at least. Check below:

http://hexus.net/tech/news/graphics/61445-nvidia-geforce-gtx-780-ti-specifications-revealed/

Seems the GTX 780 Ti gets a massive clock boost, one more SMX cluster, more texture units, and ends up being a bit more powerful in GFLOPS than even the GTX TITAN was. This means it's probably *slightly* faster than the GTX TITAN in games, but only in games. And it'll probably only do 3-Way SLI, just to not upset nVidia's high-dollar spenders and uber-benchmarking bragging-rights crew. (Because, given the lag we'd see on 4-Way SLI in gaming anyways, 4-Way SLI is just for bragging rights and benchmark scores, nothing more.)
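For anyone wanting to sanity-check that GFLOPS claim, Kepler's single-precision throughput is commonly estimated as 2 FLOPs per CUDA core per clock. Here's that back-of-envelope using the core counts and base clocks reported around this time (treat the 780 Ti entry as rumored, per the Hexus piece):

```python
# Rough single-precision throughput for Kepler: 2 FLOPs (one FMA) per CUDA core per clock.
def sp_gflops(cuda_cores: int, base_clock_mhz: int) -> float:
    return 2 * cuda_cores * base_clock_mhz / 1000.0

# Core counts / base clocks as reported around this time; the 780 Ti line is still a rumor.
cards = {
    "GTX TITAN":  (2688, 837),
    "GTX 780":    (2304, 863),
    "GTX 780 Ti": (2880, 875),
}

for name, (cores, mhz) in cards.items():
    print(f"{name:<11} ~{sp_gflops(cores, mhz):,.0f} GFLOPS")
```

By that estimate the fully enabled GK110 does edge out the TITAN on paper (roughly 5,000 vs. 4,500 GFLOPS), which lines up with the "slightly faster, but only in games" reading above.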

So this might confirm my hypothesis that it'll sit in the "pure gaming" segment, whilst the GTX TITAN becomes a slightly different, hybridized segment. Given that nVidia does allow partners to ship custom coolers on their GTX 700 series cards, there's no reason why the GTX 780 Ti couldn't come with better coolers by default. That could put it even further above the R9 290X, with the competition staying close only if the R9 290X runs in Uber mode against a stock (non-overclocked) GTX 780 Ti. But I sure hope nVidia prices this competitively, because an 899$ GPU that performs slightly better than the GTX TITAN still makes my wallet itch for the R9 290 Non-X.

- FLEX: nVidia releases (wait for it...) more of the same in a shiny new marketing package!

I know this sounds very fanboy-ish of me. And I admit, I do like AMD a lot, but I prefer the guys with the better product and the better solution.

Developers HATE CUDA. It's why Adobe and Sony Vegas now use OpenCL instead. GameDevs hate PhysX and APEX; the list of games that use them is VERY short compared to the Steam library, or even the GOG.com library of games just from 2012 or 2013 (even counting only well-known games from large companies).

Making proprietary solutions just sucks. Your customers get bad experiences on other hardware, and that creates a negative memory association with your game franchise, publisher name and product branding. Why would I want to buy a game whose in-game physics are optimized to run only on nVidia cards if I have an AMD card? I'll just not buy the game, because I'll have a sucky experience compared to the other gamers out there.

nVidia should have learned from their CUDA-sized mistake, or their PhysX disaster. They didn't - not even in the slightest. They just bundled all of it into one gigantic package, and are now offering it to GameDevs just so GameDevs can't sell their game to as many customers or deliver as good a gaming experience to all of them? Sounds dirty, and it is. GameDevs are wise to stay away from this if it's proprietary; and given nVidia's history, you can count on it being proprietary!

* The same goes for that "Logan" face rendering demo. Sure, it looks awesome, but if it's proprietary it'll suck for the same reasons FLEX will suck and PhysX already does.

_______

It seems nVidia has created some good solutions (the GTX 780 Ti and G-Sync), but for G-Sync they seem to want too much money simply because they came up with the first working solution. The GTX 780 Ti will be nice, but if it costs 899$ I don't think it'll sell very well with the R9 290X at 699$. nVidia will have to drop prices on their higher-end cards sometime soon, and their game bundles are looking better... but they still suck compared to AMD's.

And when it comes to FLEX... well, it seems that in some cases you just can't teach some old companies new tricks.

If losing Apple and many core productivity applications as CUDA-acceleration users didn't teach nVidia that "Proprietary = Uninteresting" for consumers and developers, then you can bet FLEX won't teach 'em anything. nVidia may have good engineers, but it definitely needs some better decision-makers for these kinds of issues.

As long as this kind of anti-competitive attitude keeps creeping in - locking software companies and game publishers/developers into remaining loyal - nVidia is only pushing away the companies, publishers and developers who haven't made up their minds, and making those who are loyal wonder whether a good partner would "lock them in" at all, or simply offer a better product for a lower price.

AMD got that part right, and it's profitable now for the first time in a very long while. If nVidia wants to be a better company, it needs to learn from AMD, because if nVidia wanted AMD to slowly fade away and lose money quarter after quarter, it's been going about it the wrong way - and the problem lies in corporate policy, *NOT* its engineers or software developers. nVidia needs to spend a good, long time rethinking its ways, or AMD may turn the tables very soon. Either way, a one-sided GPU War only makes companies and consumers lose. Only a closely-matched GPU War allows everyone to win.

So let's hope, for all of our sakes, that nVidia gets a good kicking from AMD until it learns its lesson, and hopefully we can all benefit from this. Otherwise, if one side massacres the other, we'll just see an "Intel-like stagnation" in product development, where the focus isn't on a better-performing product but on a more convincing marketing scheme to fool those who don't know any better. A GPU War in which one side has won is a loss for all gamers. C'mon nVidia, stiff upper lip!

Good read.

Thanks. =) I try to make my article/blog posts interesting.

I think John Carmack likes the G-Sync technology because he then doesn't have to work as hard on a game to make it a smoother experience. I know G-Sync looks good, but if it's going to add $50-150 onto a monitor's price AND you have to have an nVidia GPU, I'm out. TBH I think the monitor manufacturers will see this technology, want to license it, charge around $20-40 more, and be happy with that. That would make more sense to me, but I don't buy monitors often enough to be too concerned with this. I think this is just another gimmick to keep nVidia GPU buyers away from a possible AMD purchase. I could be wrong, but this G-Sync idea sounds like one of the best ideas I've seen in a while. If it is implemented correctly and at the right price, this could become a de facto standard with nVidia selling the licensing for it, but this would only work if it was doable with all GPU manufacturers, including integrated graphics from Intel.

It will all be worth it eventually when Volta comes out.

Volta? Do tell more.

I personally feel kinda suspicious of those three's opinions because they were at the NVidia event (when they talked about AMD's Mantle, they all gave negative feedback; I think that was out of pressure from NVidia). John Carmack did seem slightly more genuine than the other two.

I agree, it should be a free perk to add value to the NVidia eco-system. Paying for it doesn't add any value when comparing against AMD.

Agreed. +1 for you, sir.

Stacked DRAM on the chip with theoretical bandwidth of more than one terabyte per second, plus separate ARM CPUs on the die, manufactured at 14nm.