RTX Ray tracing

Hey, nice profile pic :slight_smile:


I think I said like a year ago that the next uarch was gonna be “Fermi but for machine learning”

prolly gonna be closer to a 5-10% bump tho

also the mining buyers have gone quiet the last 3 or 4 months, so expect these to go quick once a CUDA guy figures out what those new tensor cores can do

Stream is live and it looks like the 2080/2080Ti are going to be pretty sweet.

Jensen showed a 4K demo that allegedly runs at about 40 FPS on a 1080 Ti and around 80 FPS on a 2080 Ti…

I’ll wait for 3rd party reviews.


I’m going to preorder a 2080 just for the sake of it

but yeah, some sweet sweet benchmarks and use cases and such are going to be very helpful


my guess is that the demo heavily uses the ray-tracing feature in Nvidia’s middleware to exaggerate that gap as much as possible.

A die shrink alone can’t change things that drastically if you devote half of the die to workstation-, ML- and dataviz-specific ASICs; hence the sudden push for ray tracing in real-time renderers.

“I love Gigarays.” - Nvidia Leather Jacket Guy

But I don’t think the die has shrunk; from what I’ve seen the die is actually bigger because of all the gigarays and tera-ops that now live on it.

To clarify: “die shrink” is shorthand for moving to a smaller process. You can still end up with a larger die on a smaller process if you pack in more litho features; the point is that the exact same design would yield a smaller die.
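To put numbers on that (every figure below is made up purely for illustration, not a real die measurement):

```python
# Sketch of the point above: a "die shrink" is about the process, not the die.
# All densities and transistor counts here are illustrative assumptions.

def die_area(transistors_millions, density_mtr_per_mm2):
    """Die area in mm^2 for a given transistor budget and process density."""
    return transistors_millions / density_mtr_per_mm2

old_density = 25.0   # MTr/mm^2 on the older process (made up)
new_density = 33.0   # MTr/mm^2 on the newer process (made up)

# The exact same 10B-transistor design shrinks on the newer process:
same_design_old = die_area(10_000, old_density)    # 400.0 mm^2
same_design_new = die_area(10_000, new_density)    # ~303 mm^2

# But pack in 50% more features (RT cores, tensor blocks) and the new die
# ends up physically larger despite the denser process:
bigger_design_new = die_area(15_000, new_density)  # ~455 mm^2

print(same_design_new < same_design_old < bigger_design_new)  # True
```

So “bigger die on a smaller process” is perfectly consistent: the shrink happened, it was just spent on more hardware rather than a smaller chip.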

Pascal = TSMC 16nm, Turing = TSMC 12nm (12FFN, a refined 16nm), so it’s a die shrink, if a modest one

so bigger die, process shrink. fine with me. but i guess that’s one of the reasons even the FE cards have actual fans on them… i mean the FE coolers on the 1080s were kinda bad tbh.

I may upgrade if the CUDA toolkit keeps up in a timely fashion and I can properly use those special-purpose ASIC features for rendering and ML, which I do quite often.

That is one really nice element of CUDA that gets overlooked: you get an abstraction layer for all sorts of hardware features that OpenCL can’t expose because it’s a vendor-neutral standard.

waitforit


waiting for benchmarks intensifies


Won’t help my Kaveri set-up run faster, and my DSL will take a week to update my games anyway


Hey, doesn’t that sound familiar?

My bet is that the RTX FX will just be used for Ambient Occlusion / Global Illumination effects.
That’s what a lot of demo raytracing focuses on, and what RTX is already good at.


CUDA/cuDNN offer abstraction layers that give you a pretty good amount of control over these new hardware features, but yeah, I’d bet it’s just going to be an extreme version of previous vendor middleware

Think Nvidia GameWorks, except with an even more detrimental effect on older Nvidia and non-Nvidia hardware.

my guess is that it’ll probably just accelerate the pre-existing volumetric and lighting effects; if it weren’t easy to port, games wouldn’t be getting prelaunch support for it.

those middlewares are essentially just fat CUDA plugins, and CUDA abstracts that to run adequately based on what hardware features the card supports.


I think I saw the 2080 Ti got 80 FPS whereas the 1080 Ti got 40 FPS… $1,200, damn
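Taking those demo numbers at face value, plus launch Founders Edition prices from memory ($699 for the 1080 Ti, $1,199 for the 2080 Ti; treat both as assumptions), the frames-per-dollar math is closer than the sticker shock suggests:

```python
# Quick frames-per-dollar check on the demo numbers quoted above.
# Prices are launch Founders Edition figures from memory; treat as assumptions.

cards = {
    "1080 Ti": {"fps": 40, "price": 699},
    "2080 Ti": {"fps": 80, "price": 1199},
}

for name, c in cards.items():
    frametime_ms = 1000 / c["fps"]          # milliseconds per frame
    fps_per_dollar = c["fps"] / c["price"]  # demo FPS per launch dollar
    print(f"{name}: {frametime_ms:.1f} ms/frame, "
          f"{fps_per_dollar:.4f} FPS per dollar")
```

Double the frame rate for roughly 1.7x the price: in this one cherry-picked demo the 2080 Ti actually comes out slightly ahead on FPS per dollar, which is exactly why the demo was chosen.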


But wait, according to Nvidia fanboys, rapid packed math is a waste of time and of no use?