Why doesn’t ray tracing work on older GPUs?
It does. What Nvidia is actually trying to sell is a native real-time pipeline that renders with ray tracing.
Nvidia’s real-time ray tracing doesn’t work on old GPUs because it relies on dedicated ray-tracing and tensor cores. Ray tracing in software is still possible; it’s just much slower.
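To make the software side concrete, here is a minimal CPU ray-tracing sketch, plain Python, no libraries. The scene (a single sphere, a 4x4 "image") and all the names are illustrative, not from any real renderer; this is what RT cores accelerate in hardware, one ray-primitive intersection at a time.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit of a ray with a sphere, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One primary ray per pixel of a tiny 4x4 "image": camera at the origin,
# a unit sphere 5 units down the -z axis.
hits = 0
for y in range(4):
    for x in range(4):
        # map the pixel to a direction through a small view plane at z = -1
        d = ((x - 1.5) / 4.0, (y - 1.5) / 4.0, -1.0)
        if intersect_sphere((0, 0, 0), d, (0, 0, -5), 1.0) is not None:
            hits += 1
print(hits)  # → 4 pixels whose primary ray hits the sphere
```

A real frame multiplies this by millions of pixels, many bounces per ray, and millions of triangles instead of one sphere, which is why doing it per-frame on a CPU is hopeless.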
It seems to be ray tracing up to a limited sample count, with AI de-noising cleaning up the image.
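That trade-off can be sketched numerically. Nvidia’s actual denoiser is a trained neural network; as a stand-in, a plain box filter below shows the same idea of trading ray samples for filtering. Everything here (the 1D "image", the true brightness value, the filter width) is a made-up illustration, not RTX’s method.

```python
import random

random.seed(0)

TRUE_VALUE = 0.5  # "ground truth" brightness of a flat gray wall

def noisy_sample():
    # one random ray's contribution: unbiased but high-variance
    return random.uniform(0.0, 1.0)

# 1 sample per pixel: cheap to trace, but very noisy.
pixels = [noisy_sample() for _ in range(64)]

# Stand-in denoiser: a 5-wide moving average (the real one is a neural net).
denoised = [
    sum(pixels[max(0, i - 2): i + 3]) / len(pixels[max(0, i - 2): i + 3])
    for i in range(len(pixels))
]

err_raw = sum(abs(p - TRUE_VALUE) for p in pixels) / len(pixels)
err_den = sum(abs(p - TRUE_VALUE) for p in denoised) / len(denoised)
print(err_den < err_raw)  # filtering cuts the per-pixel error
```

The catch, and the reason a learned denoiser is used instead, is that a naive blur also smears real edges; the network learns to smooth noise while keeping detail.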
My main interest is that Microsoft has made a real-time ray tracing API for DirectX. I do ‘real but not real-time’ ray tracing of 3D models in CAD and with Keyshot. Those programs render everything in software on the CPU, much like Cinebench. Sometimes the GPU preview in CAD is almost as good as a ray trace. It seems like Nvidia is focusing on that aspect of rendering in real time.
When ray tracing software like Keyshot can leverage a GeForce RTX through DirectX, I hope to see a massive decrease in rendering times. Right now even simple scenes can take about 20 minutes to render on my good old i5 CPU. If I can upgrade to an RTX 2080 GPU instead of a whole new PC with a Ryzen CPU, I am fine with that, because my i5 suits my needs for most other things.
The new Turing architecture has dedicated cores for ray tracing, the RT cores. This is in addition to the tensor cores, so three types of cores in total. And the older CUDA cores get an update with rapid packed math for both FP and INT. Anandtech has a brief write-up:
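For anyone unfamiliar with "rapid packed math": the idea is that two 16-bit values fit in one 32-bit register, and the ALU operates on both halves with a single instruction, roughly doubling FP16 throughput. NumPy can’t express the hardware instruction, but this sketch (my own illustration, not Nvidia’s terminology) at least shows the packing:

```python
import numpy as np

# Two FP16 lanes occupy exactly one 32-bit word -- the storage trick
# behind packed math. The hardware then adds both lanes in one instruction.
pair = np.array([1.5, -2.0], dtype=np.float16)
word = pair.view(np.uint32)   # the same 4 bytes, reinterpreted as one uint32
print(word.nbytes)            # 4 bytes: two FP16 values in one 32-bit word

# The "vector" operation over both lanes at once (NumPy does this in
# software; Turing does it in a single packed instruction):
other = np.array([0.5, 0.5], dtype=np.float16)
print(pair + other)           # [ 2.  -1.5]
```

The speedup only materializes on hardware that actually issues packed FP16 instructions; on older GPUs the same FP16 code runs at FP32 rate or worse.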
Short answer: because it is hardware ray tracing. They have built the new GPUs to do it. The old ones weren’t built with that in mind, so they can’t.
Quadro RTX announcement
Here is the full Nvidia GTC 2018 presentation on Vulkan RTX integration:
AMD Radeon Rays
Discussion on Vulkan GitHub Project:
News Vulkan Ray Tracing
Radeon Rays 2.0 SDK
Lesser-known Technologies
PowerVR Wizard GPUs that accelerate real-time ray tracing
Nvidia RTX Documents and resources
Unreal Engine 4 supports RTX
Sorry for the info dump
Hope it’s useful to everyone. Let me know if I missed something important, and post it here.
YCombinator discussion with lots of resources buried somewhere in there:
Realistically speaking, for the average consumer, the AMD Vega 64 is not far off from the Nvidia RTX cards in terms of compute.
We just have a software problem: everyone is primarily using CUDA, and few are using ROCm yet.
The only thing Vega doesn’t have is the tensor ALUs that are being used for much of the denoising and for helping with ray testing. For a while to come we should see quite good real-time ray tracing even with AMD hardware.
Also keep in mind that, at this rate, nobody is going to use real-time ray tracing for an AAA game in any meaningful way. It’s early days for the technology, and it’s really aimed more at content production, such as movie rendering and asset creation.
Real-time ray tracing for gaming is still at the marketing mumbo-jumbo stage.
I’m just recalling… the VRAM thing and a couple of others.
We not only have our first confirmation that the GeForce RTX 2080 Ti exists, but we also have a first look at its specifications, which seem to follow previous rumors. The card features the latest Turing GPU architecture with 11 GB of GDDR6 memory, while the RTX 2080 Gaming X comes with 8 GB of GDDR6. The memory sizes are the same as the previous generation, but the memory itself is much faster, delivering much higher bandwidth for high-resolution and VR gaming.
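The bandwidth claim is easy to sanity-check with arithmetic. The per-pin rates and bus widths below are assumptions based on the rumored specs (14 Gb/s GDDR6 on a 352-bit bus, versus the 1080 Ti’s 11 Gb/s GDDR5X on the same width), not confirmed figures:

```python
def bandwidth_gb_s(data_rate_gbit_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, bits -> bytes."""
    return data_rate_gbit_per_pin * bus_width_bits / 8

# Assumed, rumor-based figures:
print(bandwidth_gb_s(14, 352))  # 616.0 GB/s -- rumored 2080 Ti (GDDR6)
print(bandwidth_gb_s(11, 352))  # 484.0 GB/s -- 1080 Ti (GDDR5X)
```

So same capacity, same bus width, but roughly a 27% bandwidth bump just from the faster signaling, if the rumored rates hold.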
I predict the following with the release of the RTX cards…
Gamers who have been dying to upgrade for the past two years will buy them thinking they’ll get next-gen performance, but will be disappointed once they see they only got about a 4-8% bump in gaming performance over last-gen GTX cards.
I’m pretty sure Nvidia would totally do that, but from the rumored specs it is highly unlikely. Until the cards are out and we have benchmarks, though, it’s all guesswork, so tinfoil hat required.
I wonder if it’ll change the look of the games that we play at all?
Likely only if the game is already programmed/set up for ray tracing.