Ray tracing and the heterogeneous SoC are amazingly interesting engineering-wise. The use of NVLink as well…
But all the metrics shown here are BS. There is literally no frame of reference, so they basically mean nothing. I also wonder why they added Tensor Cores, and how those could be used in a consumer setting. It could be that these cards are meant for small workstations and enthusiasts who don’t respect their money, and that the more consumer-friendly cards like the 2060 will be different.
Pricing is absurd. 12nm is basically 14nm++, minor tweaks and a larger reticle limit; at best it’s a half-way node… It’s basically mature 14nm… At this stage last node we had the 980 Ti release for $650 at 600 mm²; the 10xx series saw prices rise substantially, and even more so this time…
If we extrapolate the 980 Ti up to the size of the RTX 2080 Ti, pricing should be $800 as previously rumored, which frankly is not that bad given it would replace the 1080 Ti… But no, it MSRPs for $1,000, and we all know the Founders price will be the new MSRP, so realistically they are just upping the price 50% for 50% more die area…
This is unbelievable: a new series on the same node, all the dies made bigger, the price scaled up to match, and effectively no perf/$$$ gains. Worse still, because some of the die is now ray-tracing HW and Tensor Cores, overall performance per dollar might actually be worse!
And we all know that with 7nm basically ready (even if yields are rumored to be dog poop ATM), in a year’s time they will come out with a 7nm series… So even though they know they will put out a new gen next year, they want to charge these absurd prices!
Yeah, the flagship is 25% larger than the GTX 980 Ti and almost 2x the price, if the real MSRP is the $1,200 ‘Founders Edition’ and not the $1,000 ‘pretend MSRP’… That is a ~47% hike in price per mm² over two generations of GPUs…
Or about 21.5% per year. At that rate, next gen’s flagship, a 750 mm² part, could be $1,500… Or about $1,200 for a smaller die roughly the size of the aforementioned 980 Ti (600 mm²), when previously that would have been a $600 part…
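Just to sanity-check the arithmetic, here is the same extrapolation in a few lines. All figures are the ones quoted in this thread (die sizes approximate, prices as listed), not official numbers:

```python
# Back-of-envelope check of the price-per-mm^2 claims, using the thread's
# own figures (980 Ti: $650 / ~600 mm^2; 2080 Ti FE: $1200 / ~750 mm^2).
old_rate = 650 / 600            # ~1.08 $/mm^2 (980 Ti)
new_rate = 1200 / 750           # ~1.60 $/mm^2 (2080 Ti Founders)

hike = new_rate / old_rate - 1             # ~48% over two generations
yearly = (new_rate / old_rate) ** 0.5 - 1  # ~21.5%/yr on a 2-year cadence

# One more year at that rate (the rumored 7nm gen):
next_rate = new_rate * (1 + yearly)
print(f"hike {hike:.1%}, yearly {yearly:.1%}")
print(f"750 mm^2 part: ${next_rate * 750:.0f}")   # lands near the ~$1500 guess
print(f"600 mm^2 part: ${next_rate * 600:.0f}")
```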
What’s next, charging money for advanced support to ‘fund the development of future features’, or some marketing BS that is code for ‘give us $3-5 a month to access the drivers and whatnot needed to get the most out of your card’…
Think about it: the average person pays ~$200 for their card, and Nvidia is only making maybe 50% out of that, if that. Over a 24-month release cycle, at $5 a month they could extract another $120 in near-pure profit and at least double their take per card…
Pay $700 for the current flagship (75% the size of the flagship before it, and $100 more)
Pay $1,200 for this coming flagship
Pay $1,500 for next year’s flagship, plus $5-10 a month for S/W updates? $1,620-1,740 total… Really???
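For what it’s worth, the subscription math above does add up. Every number here is the thread’s own speculation, not anything Nvidia has announced:

```python
# Speculative driver-subscription math from the posts above; all of these
# numbers are the thread's guesswork, nothing official.
avg_card = 200                 # what "the average person" pays for a card
hw_margin = 0.5                # "only making maybe 50% out of that"
fee_low, fee_high, months = 5, 10, 24

hw_profit = avg_card * hw_margin          # ~$100 of profit per card sold
sub_low = fee_low * months                # $120 extra -> roughly doubles the take
sub_high = fee_high * months              # $240 extra at the high end

flagship = 1500                           # next year's rumored flagship price
totals = (flagship + sub_low, flagship + sub_high)
print(hw_profit, sub_low, totals)         # 100.0 120 (1620, 1740)
```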
They’re ‘mixed mode’ hybrid renderers: rasterization for the heavy lifting and ray tracing for the ‘extra shiny’ effects.
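As a toy sketch of what ‘hybrid’ means here (pure illustration: real engines do this on the GPU with a G-buffer plus a DXR/Vulkan ray pass, and every name below is made up):

```python
# Toy sketch of a hybrid (raster + ray traced) frame. Illustrative only:
# real renderers do this per-pixel on the GPU, not with Python dicts.

def raster_pass(pixel):
    """Cheap rasterized shading: base color plus direct light (stubbed)."""
    return {"color": pixel["albedo"], "needs_rays": pixel["glossy"]}

def ray_pass(shaded):
    """Expensive per-pixel rays, spent only where the material is glossy."""
    shaded["color"] = shaded["color"] + "+reflection"  # stand-in for a traced term
    return shaded

def render_frame(pixels):
    frame = [raster_pass(p) for p in pixels]   # raster does the heavy lifting
    for s in frame:
        if s["needs_rays"]:                    # ray trace the 'extra shiny' bits
            ray_pass(s)
    return [s["color"] for s in frame]

print(render_frame([{"albedo": "wall", "glossy": False},
                    {"albedo": "car", "glossy": True}]))
# → ['wall', 'car+reflection']
```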
I also suspect that several of the demos are not fully realtime ‘game’ renders; in particular, the reflection-heavy Battlefield V Amsterdam scene seems like a small demo set piece, made specifically to showcase RTX.
Until this becomes a standard across brands, this is essentially hardware-exclusive GameWorks. If AMD were to release cards capable of ray tracing, how much you wanna bet this wouldn’t work on them?
Sidenote: this shit explains why Metro got delayed; it got shoehorned in at the last second.
I’ve said before that the Nvidia RTX middleware is GameWorks 2.0.
That said though if you have a look in this thread:
You will see that I have posted various sources for the standardized DirectX and Vulkan ray-tracing APIs and demos.
It should also be said that the RT and Tensor Cores are not particularly special or novel on their own in terms of how they function; their inclusion together within a graphics SM, however, is novel.
You can be sure to see similar features from AMD, Intel and PowerVR as the Machine Learning and Ray tracer wars slowly begin.
Of note: raytracing does not require these RT or Tensor cores. You can do raytracing (even realtime) on current gen hardware with Vulkan or DirectX just fine. It’s just not specially accelerated with dedicated hardware.
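To underline that: the core of a ray tracer is just intersection math that runs on any hardware. A minimal ray/sphere test in plain Python (illustrative only; a compute shader would run the same math per pixel, and RT cores merely accelerate the traversal/intersection step):

```python
import math

# Minimal ray/sphere intersection: the same math a compute shader runs on
# any current-gen GPU. No dedicated RT hardware is needed for correctness,
# only for speed.

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)     # nearer of the two roots
    return t if t > 0 else None

# Ray from the origin straight down -z at a unit sphere 5 units away:
print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # → 4.0
```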
The way they’re treating these RTX cards like their own ecosystem leads me to believe these ray-tracing render features will be made exclusive to these cards, while you’re saying they will not. I was wondering if you have more information, or if we’re both speaking purely from speculation.
The hardware that enables Raytracing and the Tensor cores is not particularly special, anyone willing to make a big die GPU can implement them.
They’re also exposed via the driver in a way that Vulkan, OpenGL and DirectX will be able to use them to accelerate specific compute operations. Think of it like SSE4 or AVX in a CPU.
RTX (Or OptiX Raytracing Middleware) is just a collection of code and prebuilt effects put together by Nvidia into a library that talks to the API & the display driver. It’s meant to make it easy for developers to just drop raytracing capabilities into their existing projects.
But of course you could just write a raytracer in Vulkan API code without RTX and run it on, say, a Vega 64 at a slower pace, or alternatively have aspects of it accelerated via the extra hardware features exposed by the driver on Nvidia GeForce RTX hardware.
In the end it all comes down to the software implementation developers use: do they write their own raytracer code and handle all GPUs nicely, or do they just drop in Nvidia RTX middleware, which will undoubtedly favour newer Nvidia hardware?
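That choice can be sketched roughly like this. All function names are illustrative stand-ins, not a real engine API; `VK_NV_ray_tracing` is Nvidia’s vendor Vulkan extension, standing in here for whatever acceleration the driver advertises:

```python
# Sketch of the two implementation paths described above: an engine can
# ship its own portable ray-trace path and only take a vendor fast path
# where the driver exposes one. Names are illustrative.

def trace_with_middleware(scene):
    # Vendor middleware path (e.g. dropping in RTX/OptiX effects).
    return f"rtx({scene})"

def trace_portable(scene):
    # The engine's own ray tracer written against plain Vulkan compute.
    return f"compute({scene})"

def trace(scene, driver_extensions):
    # Take the hardware-accelerated path only where the driver advertises
    # it; degrade gracefully to the portable path on every other GPU.
    if "VK_NV_ray_tracing" in driver_extensions:
        return trace_with_middleware(scene)
    return trace_portable(scene)

print(trace("frame", {"VK_NV_ray_tracing"}))  # → rtx(frame)
print(trace("frame", set()))                  # → compute(frame)
```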