Nvidia RTX 20XX Thread

Also, let’s not forget M$'s planned obsolescence at work.

They told Nvidia to not release drivers for any version of Windows other than Windows 10 1803, same as they told Adobe not to make Creative Cloud for anything other than 1803.

If you’re on Windows 7, 8 or 8.1, you’re screwed, until you switch to Linux.

Not to derail the thread, but if you’re getting an RTX card and running any of these versions of Windows, please see a professional therapist.

I sense an epic reply forthcoming… from Furry :smiley:

8.1 still has a modern and decent API stack compared to 10. A 1080 Ti on 8.1 can still hold its own.

If I were ever forced to run Windows 10, it would have to be contained in a GPU passthrough VM behind a pfSense firewall. I’m waiting for all the Threadripper bugs to be ironed out first.

Failing that, I’m skipping this RTX generation and waiting for the next one, which will most likely arrive around the time DXVK and VKD3D have matured enough to replace Windows for gaming.

Let down :frowning:

I don’t see a reason to get RTX either, but the mindset of “I want old Windows and the newest tech” is hypothetical at best; it’s not a real use case. If you’re paying the whopping dollar amount for RTX, there’s a 100% chance you’ve already jumped (or are planning to jump) to Windows 10 for DX12 and other gaming optimizations.

Some people would hope more for Vulkan optimizations, and if anyone did make that mistake, Linux with DXVK/VKD3D and Vulkan has that covered. DXVK 0.72 has come a long way; it can even run Monster Hunter World.

I think Vulkan is cool too, but DX, Windows, and gaming are just tied together, mostly out of convention more than anything else. That’s still the reality, though.

Which is why GPU Passthrough exists. If GPU Passthrough didn’t exist, there would be no fallback solution if you made the big move to Linux and had to run something that doesn’t play well with Wine.

If @wendell gets RTX, I’d really like to know whether Nvidia added any more “code 43” blocking for KVM passthrough.

Performance limitations in a lot of games come down to the language the game is written in. Since a lot of companies have moved to C# over C++, the latency between render calls has increased. With frame timings of, say, 30ms, the difference in render-call overhead between two languages is small but significant. It obviously also depends on the quality of the code, but generally the gap between compiled languages like Rust/C++ and managed, JIT-compiled languages like C# and Java is in the region of 5-15ms per frame, and it grows with the amount of content being rendered and the quality of the code on the managed side.
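
To put a rough number on that yourself, the overhead is easy to measure. Here’s a minimal C++ sketch that times the per-frame work with std::chrono; `submit_draw_calls()` is a hypothetical stand-in for whatever the engine actually does each frame, not a real API call.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-in for the engine's per-frame work (draw call submission,
// script callbacks, etc.). In a real engine this is where language overhead shows up.
static void submit_draw_calls() { /* ... */ }

int main() {
    using clock = std::chrono::steady_clock;

    for (int frame = 0; frame < 600; ++frame) {
        auto start = clock::now();
        submit_draw_calls();
        auto end = clock::now();

        double ms = std::chrono::duration<double, std::milli>(end - start).count();
        // At a 30 ms frame budget (~33 fps), even a few extra milliseconds of
        // per-frame overhead from the scripting layer is a noticeable chunk.
        std::printf("frame %d: %.3f ms\n", frame, ms);
    }
    return 0;
}
```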

Vulkan does help in some respects, but it’s a nightmare to actually develop for. Only the big boys can afford the cost of devs developing for Vulkan.
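
To give a taste of why: before you’ve touched a physical device, swapchain, or pipeline, just bringing up a Vulkan instance already looks something like this. This is a minimal sketch against the standard Vulkan C API; the application/engine names are made up for illustration.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    // Every struct has to be filled in explicitly, starting with the sType tag.
    VkApplicationInfo app_info{};
    app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app_info.pApplicationName = "vulkan-hello";          // hypothetical app name
    app_info.applicationVersion = VK_MAKE_VERSION(0, 1, 0);
    app_info.pEngineName = "no-engine";                  // hypothetical engine name
    app_info.engineVersion = VK_MAKE_VERSION(0, 1, 0);
    app_info.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo create_info{};
    create_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    create_info.pApplicationInfo = &app_info;
    // Real applications also enumerate and enable extensions and validation layers here.

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&create_info, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    // ...and this is before picking a physical device, creating a logical device,
    // a swapchain, render passes, pipelines, command buffers, and sync objects.

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```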

Does anyone know if Nvidia is forcing devs to develop ray tracing against their own API, or do they also support the standard DX12 ray tracing API (DXR)?

This is why you buy an engine. “Only the big boys” have been able to write a competent 3D engine for years now; this is not new.

This is why you license Unreal, Unity, id Tech X, or whatever; the big boys port it to Vulkan once, and then every game using it benefits.

Few people write directly for the hardware (or even the low level API) these days I thought?

Unity’s Vulkan support just exited beta in their latest Unity 2018 update.

Unreal Engine 4’s Vulkan support has input latency issues that generally make it unusable for a large proportion of games. That leads most devs who are worried about input latency to look for a different engine. Unity is the only other publicly available engine, accessible to everyone from indie to triple-A (though there’s good reason it’s not used in triple-A games), that has Vulkan support. The issue is that Unity relies on C# and other managed scripting languages for the game code, which negates much of the performance improvement you might see from Vulkan.

This leaves devs with three options: don’t use Vulkan, adopt an engine whose source code can be modified and build Vulkan support into it, or build a new engine that supports Vulkan.

Generally, large studios use in-house engines, whether built by themselves or by another studio under the same publisher.
Medium studios depend on the type of game: some build their own engines, others use existing ones. Small studios generally use existing engines. Tiny indie teams (1-3 people) are again a mixture: some use existing engines, others build their own.

Building an engine for a specific genre isn’t actually that hard; it’s within reach of any dev team with an experienced developer. The hard part is building the tools and production pipelines that make building the game accessible to designers.

Just heard on the GN livestream that @wendell potentially has deep learning benchmarks for RTX. :open_mouth:

Also:

AFAIK this does also support DX12’s baked-in ray tracing (DXR).

Though nVidia hasn’t said much else, so we don’t know of any other potential limitations of going with that rather than their RTX API.
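
For what it’s worth, DX12 lets an application query the ray tracing tier at runtime, independent of any vendor extensions. A minimal sketch, assuming a Windows 10 SDK recent enough to expose D3D12_FEATURE_D3D12_OPTIONS5 (link against d3d12.lib):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

// Queries whether the default adapter reports support for DXR (the standard
// DX12 ray tracing API), independent of any vendor-specific extensions.
int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::fprintf(stderr, "failed to create D3D12 device\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5)))) {
        if (options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
            std::printf("DXR supported (tier %d)\n",
                        static_cast<int>(options5.RaytracingTier));
        } else {
            std::printf("DXR not supported on this adapter/driver\n");
        }
    }
    return 0;
}
```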

Yeah, I kinda assumed it should support it, though I’m not familiar with how the DX implementation works. I do know, however, that the RTX API is a hybrid of ray tracing and rasterization. If the DX12 version is strictly just ray tracing, that makes the RTX API a viable option to go down; however, if the DX version supports a similar hybrid approach, it kind of makes the Nvidia implementation irrelevant.

I do know professional rendering tools already make use of the RT cores on these new cards, excluding Blender, because of course Blender breaks with an RTX card.

I’m a bit unclear on that myself. I’m reading the post discussing it on the Microsoft blog. It seems to hint at hybrid rendering with a push towards full ray tracing later as technology catches up and we have the hardware to do it.

So I’m wondering how nVidia’s solution fits in here.

…but but but muh open sauce

In theory, if the DX pipeline allows for a custom ray tracing implementation, which it should, then as long as there’s no serious performance delta between the two, the RTX API is null and void. Knowing Nvidia, though, their DX12 implementation will be subpar for the first year or so, so that they can try to dominate the ray tracing segment while AMD designs a competitor.

Yeah that’s why I am a bit confused as well. We shall see what nVidia plans. I’m guessing it won’t be particularly developer/consumer friendly…

This is actually the one time when I think AMD might be able to come up with a competitive solution people actually use…