Nvidia RTX 20XX Thread

Not sure I understand the part where Radeon Rays is not comparable to RTX; those two are completely different and accomplish different things. The Radeon Rays SDK is for rendering a whole scene, sequentially applying the draw buffer as light paths are computed, while RTX is closed tech, so it's hard to tell what it's doing. But it looks like raytracing is not applied to all objects, shadows, reflections and refractions (judging from the demos Nvidia has shown).

I'm not even sure how they came up with their Gigarays/s shtick, as that depends highly on the scene, unless, as I mentioned, it's limited to certain lights, objects and shadows only and not the whole scene.

I also have a feeling they use their 'AI' software for picture reconstruction to merge the rasterized render with the raytraced rays so it looks seamless (and most likely it's using frustum tracing).


All Radeon Rays does is calculate ray/geometry intersections. The programmer sets each ray's starting position, direction, length, etc.; Radeon Rays then calculates the position where the ray intersects a mesh. It does not, however, do any rendering. The programmer can use it to simulate light, sound, physics, occlusion or whatever else they want. RTX is similar. From what I could glean from the planned Vulkan extensions (http://on-demand.gputechconf.com/gtc/2018/presentation/s8521-advanced-graphics-extensions-for-vulkan.pdf), all RTX does is compute intersections. It is then up to the shader (read: "programmer") to do with the intersections whatever they want. This makes RTX, Radeon Rays and Embree quite similar. (OptiX provides some rendering functionality as well.)
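Conceptually, the contract looks something like this toy GLSL sketch. To be clear, this is not the actual Radeon Rays or RTX API; the analytic sphere just stands in for the BVH-over-meshes traversal the real libraries do:

```glsl
#version 330 core
// Toy sketch of the "intersection only" contract these APIs share:
// you supply a ray (origin, direction, max length) and get back the
// nearest hit. The sphere stands in for real mesh geometry.

struct Ray { vec3 origin; vec3 dir; float maxT; };
struct Hit { bool found; float t; vec3 position; vec3 normal; };

Hit intersectSphere(Ray r, vec3 center, float radius) {
    Hit h; h.found = false;
    vec3 oc = r.origin - center;
    float b = dot(oc, r.dir);
    float disc = b * b - (dot(oc, oc) - radius * radius);
    if (disc < 0.0) return h;             // ray misses entirely
    float t = -b - sqrt(disc);
    if (t < 0.0 || t > r.maxT) return h;  // hit outside the ray's extent
    h.found = true;
    h.t = t;
    h.position = r.origin + t * r.dir;
    h.normal = normalize(h.position - center);
    return h;
}

out vec4 fragColor;

void main() {
    // What you do with the hit (lighting, sound, physics, occlusion)
    // is entirely up to you; the library only finds it.
    Ray r = Ray(vec3(0.0), vec3(0.0, 0.0, -1.0), 100.0);
    Hit h = intersectSphere(r, vec3(0.0, 0.0, -5.0), 1.0);
    fragColor = h.found ? vec4(h.normal * 0.5 + 0.5, 1.0) : vec4(0.0);
}
```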

Indeed. There is a paper on OptiX though if you’re interested. They are likely very similar in their implementation. My guess is that they’ll even be merged at some point.

http://raytracing-docs.nvidia.com/optix/whitepaper/nvidia_optix_TOG_v29_n4.pdf

Somebody at nvidia said “The bigger the better”.

Yes, they use AI to denoise the raytraced image. "Merging" the rasterized and raytraced parts is likely just done by summing the pixel values.
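Something like this hypothetical compositing pass; the texture names are made up, since the demos' actual pipeline isn't public:

```glsl
#version 330 core
// Hypothetical compositing pass: light is additive, so merging the
// rasterized and (denoised) raytraced contributions can be a plain
// per-pixel sum. uRasterized/uRaytraced are illustrative names.
uniform sampler2D uRasterized;  // rasterized lighting
uniform sampler2D uRaytraced;   // denoised raytraced term (e.g. reflections)

in vec2 vUV;
out vec4 fragColor;

void main() {
    vec3 raster = texture(uRasterized, vUV).rgb;
    vec3 traced = texture(uRaytraced, vUV).rgb;
    fragColor = vec4(raster + traced, 1.0);
}
```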

Here’s nvidia’s video on the denoiser: http://on-demand.gputechconf.com/siggraph/2017/video/sig1754-martin-karl-lefrancois-train-your-own-denoiser.html


Why does the entire 2000-series line-up have USB Type-C? What is the point of a USB port on a GPU?

https://www.anandtech.com/show/13268/custom-geforce-rtx-2080-quick-look

I could only guess VR. /shrug

Yeah, so we pretty much agree. The Radeon Rays code calls event->wait() after it finishes calculating a ray pass, implying all rays in the first pass are calculated before the next pass starts. By "render" I meant the precompiled renderer AMD supplies with the SDK, which renders the whole scene using raytracing rather than rasterization or a mix.

If someone says raytracing, as an old 3ds Max user I expect the whole scene to be rendered using raytracing rather than only certain objects/lights/shadows. (In the BF demo it's clear they only use raytracing on particle effects, and save performance on all reflective surfaces by combining vertex shading with a lightmap.)

I kinda laughed hard when they showed the player's model being mirrored in the window and stated it was raytraced, because the light reflected on the gun model wasn't shown; only the external "server" model was shown instead. The render was likely just a vertex-shaded lightmap, since with a real raytraced render the rays would show the player's model in the scene. (As for the reflective water on the ground, Black Desert on ultra graphics does something similar, and it's still just vertex shading.)

This is a good example.


Looks like screen-space reflections to me. The game walks along the rendered frame pixel by pixel until it finds an intersection, then just copies that pixel's color. While this technically constitutes ray marching, it's very different from what RTX does. Screen-space reflections also obviously cannot reflect anything that's not on the screen, while true raytracing can reflect everything.
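A minimal sketch of that pixel-by-pixel walk, assuming a deferred G-buffer (names, step size and step count all illustrative):

```glsl
#version 330 core
// Minimal screen-space reflection march: step along the reflected ray
// through the already-rendered frame and copy the color of the first
// pixel the ray lands behind.
uniform sampler2D gPosition;  // view-space position per pixel
uniform sampler2D gNormal;    // view-space normal per pixel
uniform sampler2D gColor;     // the rendered frame
uniform mat4 uProjection;

in vec2 vUV;
out vec4 fragColor;

vec2 toScreenUV(vec3 viewPos) {
    vec4 clip = uProjection * vec4(viewPos, 1.0);
    return (clip.xy / clip.w) * 0.5 + 0.5;
}

void main() {
    vec3 pos = texture(gPosition, vUV).xyz;
    vec3 normal = normalize(texture(gNormal, vUV).xyz);
    vec3 rayDir = reflect(normalize(pos), normal);  // view dir is pos in view space

    vec3 p = pos;
    for (int i = 0; i < 64; ++i) {
        p += rayDir * 0.05;                         // fixed-size step
        vec2 uv = toScreenUV(p);
        if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0))))
            break;                                  // left the screen: nothing to reflect
        if (p.z < texture(gPosition, uv).z) {       // marched behind the surface
            fragColor = texture(gColor, uv);        // copy that pixel's color
            return;
        }
    }
    fragColor = vec4(0.0);                          // miss: fall back (e.g. env map)
}
```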

I’ve written a bit about this in my The case for raytracing [Not Tom’s Hardware Edition] thread.


I agree, and I don't argue with that. (SSR is accomplished using a vertex shader: projection * model-view matrix * vertex position. You can add a light there, diffuse or a lightmap, and obviously filters and other nice effects that came with unified shaders for ambient mapping.)

It's just my impression of the demo: "look at all the light being reflected and calculated...", and at times they stopped to tell everyone that this was raytraced when it obviously isn't (in another frame, with RTX off, they walk and the effect is still present).



I'm rewatching the demo right now, as I'm not sure what exactly you are referring to. It seems what confuses you is that Nvidia only calculates a single bounce of light. Keep in mind that the photorealistic offline renderers use path tracing specifically, not just ray tracing. Every path tracer is a ray tracer, but not every ray tracer is a path tracer.
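To make the distinction concrete, here's a self-contained toy sketch (one mirror sphere under a constant sky, all constants made up). The demos do something like oneBounce(); offline photoreal renderers iterate like pathTrace(), with many samples per pixel and real BRDF sampling instead of a fixed mirror:

```glsl
#version 330 core
// Both functions below are ray tracers (they cast rays via hitSphere);
// only the second is a path tracer.

struct Ray { vec3 o; vec3 d; };
struct Hit { bool found; vec3 p; vec3 n; };

const vec3 SPHERE = vec3(0.0, 0.0, -3.0);  // unit sphere center
const vec3 SKY    = vec3(0.6, 0.7, 0.9);   // constant environment light
const vec3 ALBEDO = vec3(0.8);

Hit hitSphere(Ray r) {
    Hit h; h.found = false;
    vec3 oc = r.o - SPHERE;
    float b = dot(oc, r.d);
    float disc = b * b - (dot(oc, oc) - 1.0);
    if (disc < 0.0) return h;
    float t = -b - sqrt(disc);
    if (t < 0.001) return h;               // behind origin or self-hit
    h.found = true;
    h.p = r.o + t * r.d;
    h.n = normalize(h.p - SPHERE);
    return h;
}

// Ray tracing as in the demos: primary hit plus one reflection, stop.
vec3 oneBounce(Ray r) {
    Hit h = hitSphere(r);
    if (!h.found) return SKY;
    Hit h2 = hitSphere(Ray(h.p, reflect(r.d, h.n)));
    return ALBEDO * (h2.found ? vec3(0.0) : SKY);
}

// Path tracing: keep following the path, accumulating throughput,
// until it escapes the scene or the bounce budget runs out.
vec3 pathTrace(Ray r) {
    vec3 throughput = vec3(1.0);
    for (int bounce = 0; bounce < 8; ++bounce) {
        Hit h = hitSphere(r);
        if (!h.found) return throughput * SKY;  // escaped: collect light
        throughput *= ALBEDO;
        r = Ray(h.p, reflect(r.d, h.n));        // mirror stand-in for BRDF sampling
    }
    return vec3(0.0);
}

out vec4 fragColor;
void main() {
    Ray cam = Ray(vec3(0.0), vec3(0.0, 0.0, -1.0));
    fragColor = vec4(pathTrace(cam), 1.0);      // swap in oneBounce(cam) to compare
}
```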


Unrelated to the raytracing, I believe you are conflating a couple of things. SSR is purely a post-processing effect, so it has access to neither the vertex position nor the model-view matrix. Lightmaps don't work for reflections either. The reflections are likely all SSR, with environment mapping as a fallback.


In terms of SSR, you do compile a vertex shader to do it (look up the code).
Here's a small example of an SSR shader: https://pastebin.com/EkHhfQA1

There are plenty of hacks you use to optimize reflection shaders, since you want to save on performance. So you implement a lightmap with specularity for static objects, the skybox, and distant or more complicated objects, and you really only compute particle effects and changing models while everything else stays in a "ready" state.

A lightmap in itself doesn't need to be a static image like in its first implementation in Quake (it could be a 3D scene or object, with the viewpoint calculated by a vertex shader and used for SSR). If you wanted to play around with technicalities, you could say SSR, ambient occlusion, global illumination and many more are just lightmaps or vertex shaders hehe :wink:

Yes, of course, because there's no other way to render in OpenGL. But the vertex is just a flat rectangle, and the vertex position is thus only the 2D coordinate on the screen. It's really just calling the shader on every pixel in the framebuffer.

Notice how the code you posted gets the position from a texture. The vertex position is meaningless:

uniform sampler2D gPosition;
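For comparison, the vertex stage of such a post-processing pass is typically no more than this sketch:

```glsl
#version 330 core
// Sketch of a fullscreen-pass vertex shader: the only geometry is a
// screen-covering rectangle, so the vertex stage does nothing
// interesting. No model-view, no projection; the fragment shader then
// runs once per pixel and reads positions from gPosition instead.
layout(location = 0) in vec2 aPos;  // rectangle corners in [-1, 1]
out vec2 vUV;

void main() {
    vUV = aPos * 0.5 + 0.5;             // map to [0, 1] texture space
    gl_Position = vec4(aPos, 0.0, 1.0);
}
```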

Well, kind of. Reflections are in part done with cube maps, environment mapping, reflection probes or whatever you want to call them. Lightmaps don't work, however, because they look the same no matter from which direction you look. That makes them unsuited for reflections, which obviously depend on the camera position.
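Here's a sketch of why environment mapping works for reflections where a lightmap can't; uEnvMap and uCameraPos are illustrative names:

```glsl
#version 330 core
// Environment-mapped reflection: the lookup direction depends on the
// view vector, so the reflection changes as the camera moves.
uniform samplerCube uEnvMap;
uniform vec3 uCameraPos;

in vec3 vWorldPos;
in vec3 vNormal;
out vec4 fragColor;

void main() {
    vec3 viewDir = normalize(vWorldPos - uCameraPos);
    vec3 r = reflect(viewDir, normalize(vNormal));  // view dependent
    fragColor = texture(uEnvMap, r);
    // A lightmap lookup would be texture(uLightmap, vLightmapUV):
    // no view vector anywhere, hence identical from every angle.
}
```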

You could calculate the lightmap in real time, but that defeats the whole purpose of using one in the first place. You're better off using something like Spherical Harmonics / Spherical Gaussians and updating those.

Object complexity doesn’t matter either, but we’re getting away from the original topic :smile:

ikr

Just wanted to let you know you can still do lightmaps in real time, just without taking all objects into account (that's how it's done today, in most cases combined with SSR or reflective objects). I'm at lunch at the moment, so maybe after work I can use some game to show it as an example; the first Far Cry, with its env-bump-mapped water, would probably be the best example of that.


Looking forward to it. Let’s start a new thread though :rofl:

Taking guesses here:
DisplayPort and HDMI can both be carried over USB-C. Having a single connector to plug in your VR headset would be pretty neat.
So probably that?


What do you guys think this means for the 1000-series cards? Any clue as to when, and if, those prices will come down further?

I agree with many above, these prices are too high for me to be interested. Don’t really care about bleeding edge stuff

Nvidia & AIB’s still have tons of 1000 series chips to sell. And they’ll trickle their way through the market.

Expect some price cuts and people flocking to get the 1080s they previously couldn't afford.


Historically, previous-gen Nvidia prices do not drop much; people thought they would with both the 700 and 900 series, but it never happened.


Bleh. Time for a new hobby. This shit is getting stale


USB-C is the new DisplayPort connector. It's just USB; Type-C carries USB, Thunderbolt AND DisplayPort.

I may have liked PhysX, but sorry, I am not going to pay more for something that gives a more cinematic experience. In all honesty, until I bought a GTX 1080 Ti I never bought an expensive card, and thus I never paid more for proprietary stuff. Particle effects matter to me. A game looking more cinematic does not matter to me, and definitely not at the prices they are stating. Also, they announced prices and then changed them a day later. Just sickening.

Exactly.

If they’re already cheaper than the new cards (they are) then why would they drop in price?

Also, previous-generation cards won't be on sale for very long once the new cards are on the market.