I'm not so sure about this.
Before we get into the whole "fanboyism" stuff, let's remember: the GTX Titan is a 4-Way SLI card, and the GTX 780 is a 3-Way SLI card. From what was rumored, the GTX 780 Ti is supposed to be 3-Way SLI as well. Note that gaming suffers in 4-Way SLI anyway, and only certain synthetic or compute applications would benefit from 4 Titans in SLI.
Next, let's also establish that the GK110 GPU is not fully-enabled in any GTX card from nVidia, only in the Tesla and Quadro card series.
Also worth noting is that cooling has improved, especially since nVidia has OEM partners who can design custom coolers that keep their cards running cooler.
Lastly, let's also remember that the GTX Titan has its Double Precision Floating Point power fully enabled, running at a third of its single-precision rate. This means it works for rendering and compute, as well as gaming. Think of the GTX Titan as something more akin to the Z87-WS by ASUS, which blurs the line between Enthusiast and Workstation motherboards. The GTX Titan is somewhere in between a "gaming" GPU and a productivity/workstation GPU. It can be thought of as a "diet Quadro" for people who work and game on the same machine.
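For a rough sense of scale, here's a quick back-of-the-envelope sketch in Python. The 2688 cores and ~837MHz base clock are the Titan's published specs; the results are approximate:

```python
# Back-of-the-envelope FLOPS estimate for the GTX Titan, assuming its
# published 2688 CUDA cores, a ~837MHz base clock, and 2 FLOPs per core
# per clock (fused multiply-add).
cores = 2688
clock_ghz = 0.837

sp_tflops = 2 * cores * clock_ghz / 1000   # single precision throughput
dp_tflops = sp_tflops / 3                  # DP runs at 1/3 the SP rate

print(f"SP: ~{sp_tflops:.1f} TFLOPS, DP: ~{dp_tflops:.1f} TFLOPS")
# -> SP: ~4.5 TFLOPS, DP: ~1.5 TFLOPS
```

Around 1.5 TFLOPS of Double Precision is workstation territory, which is exactly why the Titan sits where it does.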
So, here's where the GTX 780 Ti comes in. It probably won't feature much Double Precision Floating Point power - or at least not much more than the GTX 780 has. It probably will have more cores, though (as many as the GTX Titan, or even a fully-enabled GK110 with 2880 CUDA cores).
I doubt it'll have 6GB of GDDR5. nVidia wants a large profit margin, and putting 6GB of GDDR5 on it would be a bad idea right now. With DRAM prices going up since the Hynix fab issues, putting more GDDR5 on a card would be bad for profits. Also worth noting is that Hynix produced some of the best GDDR5 chips out there, which may also be bad news for the GTX 770's 7GHz (effective) GDDR5 memory modules, so we might see a shortage or a price hike.
Next, let's consider this: nVidia didn't want partners changing the cooler on the GTX Titan (only EVGA could, and that was to put liquid coolers on it). That seems to be because nVidia wanted its product to be recognized (remember that GTX Titan logo in green LEDs everyone was talking about?), but also because it wanted its product to look "professional-grade". Again, part of the "workstation + gamer blurred line" strategy.
But with the GTX 780 Ti, it seems that blurry line is going away. I think if nVidia is smart, it'll give gamers a full GK110 to play with, but only allow 3-Way SLI, whilst letting OEM partners offer non-reference PCBs, non-reference coolers, and factory overclocks that push it way beyond the GTX Titan. The GTX Titan would be more of a "compute, productivity, rendering, benchmarking + gaming hybrid" GPU, whilst the GTX 780 Ti would be for gaming and benchmarking, that's it.
Keeping it at 3-Way SLI means gamers get less lag, especially from those PLX chips. With alternate frame rendering, each GPU renders every Nth frame. So if your framerate is 60fps and you're running 3-Way SLI, each GPU is only producing 20fps, which means anything you do now can take around 50ms to get through and be processed. That's because dividing your framerate by the number of GPUs gives each GPU's framerate, and 1000 divided by that is the number of milliseconds between that GPU's frames - the delay before any input (mouse or keyboard) can show up in its next frame. So that's 1000 / (60fps / 3) = 50ms. If you thought 8ms (GTG) was bad for a monitor, imagine what SLI or CrossFire can do to your lag! (And that's not including internet, monitor, mouse, keyboard, etc.)
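Here's a quick sketch of that math, under the same simplifying assumption as above (that each GPU's frame time is the added input delay):

```python
# Rough estimate of AFR (alternate frame rendering) input delay, assuming
# each GPU's frame time is the delay before new input can show up in one
# of its frames.
def afr_frame_time_ms(total_fps: float, num_gpus: int) -> float:
    per_gpu_fps = total_fps / num_gpus   # each GPU renders every Nth frame
    return 1000.0 / per_gpu_fps          # ms between that GPU's frames

for gpus in (1, 2, 3, 4):
    print(f"{gpus}-way @ 60fps: ~{afr_frame_time_ms(60, gpus):.0f}ms")
# -> 1-way: ~17ms, 2-way: ~33ms, 3-way: ~50ms, 4-way: ~67ms
```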
That's lag no gamer could stand in a competitive setting. So why add even more by adding a 4th card in SLI?
2-Way SLI or CrossFire is the practical limit for most competitive gamers, but generally a single more-powerful GPU is better than two less-powerful GPUs (due to multi-GPU issues, driver issues, game compatibility, lag and other reasons).
The GTX 780 Ti could be really good. What nVidia needs to do to make it competitive:
- Full GK110 (2880 CUDA cores, or at the very least 2688) enabled by default - that's 15 or 14 SMX clusters.
- Allow OEMs to ship non-reference PCBs and non-reference coolers, and to ship factory overclocks waaayyyy beyond reference clocks.
- Ship with a reference core clock of 893MHz (Boost of 935MHz), with 3GB of GDDR5 clocked at 6GHz (effective) over the 384-bit memory bus.
- Price it at $699.99 (USD).
- Include the nVidia game bundle with the card.
- 3-Way SLI maximum.
- Allow only 384 CUDA cores to be used for Double Precision Floating Point calculations, so it can't be used as much for compute or rendering. Gaming should be its main purpose, instead of trying to be all things at once. By crippling Double Precision, it can't be used as a workstation product, so it won't compete with their Quadro or Tesla cards, and it won't be used for productivity (at least not much more than the GTX 780 is). Another option is to let only 32 CUDA cores in every SMX cluster run Double Precision calculations, meaning either 480 DP-capable CUDA cores (about half that of the GTX TITAN) with 15 SMX clusters (2880 CUDA cores), or 448 with 14 SMX clusters (2688 CUDA cores) - see the sketch after this list.
- Keep the TMUs and ROPs of the GTX TITAN. Don't castrate the GTX 780 Ti like the GTX 780 was.
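Here's a quick sketch of how the core counts and memory numbers above work out (the 32-DP-cores-per-SMX cap is just my hypothetical, not anything announced):

```python
# Quick sanity check on the numbers above, assuming GK110's 192 CUDA cores
# per SMX and the hypothetical 32-DP-capable-cores-per-SMX cap suggested in
# the list.
CORES_PER_SMX = 192
DP_CORES_PER_SMX = 32  # hypothetical cap, not an announced spec

for smx in (15, 14):
    print(f"{smx} SMX: {smx * CORES_PER_SMX} CUDA cores, "
          f"{smx * DP_CORES_PER_SMX} DP-capable cores")
# -> 15 SMX: 2880 CUDA cores, 480 DP-capable cores
# -> 14 SMX: 2688 CUDA cores, 448 DP-capable cores

# Memory bandwidth for 6GHz (effective) GDDR5 on a 384-bit bus:
bandwidth_gb_s = (384 / 8) * 6   # bus width in bytes * effective GT/s
print(f"Bandwidth: {bandwidth_gb_s:.0f} GB/s")   # -> 288 GB/s
```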
As for pricing of their other cards, here's what I'd do:
GTX TITAN (lower performance than the GTX 780 Ti, except in Compute and 4-Way SLI) drops from $999 to $879 (becomes the workstation/productivity card with a side of gaming, and for benchmarkers)
GTX 780 Ti enters at $699 (USD) (competes with the R9 290X)
GTX 780 drops from $649 to $499 (USD) (competes with the R9 290 Non-X)
GTX 770 (4GB) drops from $449 to $359 (USD) (competes with the R9 280X, and meets Battlefield 4's recommended 3GB of VRAM)
GTX 770 (2GB) drops from $399 to $319 (USD) (competes with the R9 280X's price of $299)
GTX 760 (4GB) drops from $299 to $249 (USD) (competes with the R9 270X, meets Battlefield 4's recommended 3GB of VRAM, and also competes with the R9 280X on price!)
GTX 760 (2GB) drops from $249 to $219 (USD) (competes with the R9 270X, and beats the price/performance ratio of the R9 280X)
That would be a much better option. It beats AMD, and offers more value to nVidia's customers.
Of course, nVidia might not choose to do this right away. It's more likely they'll wait to see what AMD does with pricing first, and wait for the launch of the R9 290X and R9 290 Non-X. Once that's done, they'll settle on new price points. But given all this, it's unlikely nVidia is going to stand still on pricing.
I think nVidia will have to pull out some big guns. I don't think they'll engineer a whole new GPU, at least not yet. The GK180 sounds like too much work for something that won't even be here for another 9 months (Q3 2014 and 22nm/20nm from TSMC, I'm looking at you).
Right now, I think this is near the end of new GPU architectures and new card lineups from AMD and nVidia for a while. They can't keep pouring money into engineering, testing and whatnot like this indefinitely. They'll probably let these cards settle in, then battle it out with price cuts and game bundles. Afterwards, once 22nm/20nm is out, they'll launch a new generation with a new architecture, and it'll be the same thing all over again.