Also, let’s not forget M$'s planned obsolescence at work.
They told Nvidia not to release drivers for any version of Windows other than Windows 10 1803, just as they told Adobe not to make Creative Cloud for anything other than 1803.
If you’re on Windows 7, 8, or 8.1, you’re screwed until you switch to Linux.
8.1 still has a modern and decent API stack compared to 10. A 1080 Ti on 8.1 can still hold its own.
If I were ever forced to run Windows 10, it would have to be containerized in a GPU-passthrough VM behind a pfSense firewall. I’m waiting for all the Threadripper bugs to be ironed out.
Failing that, I’m skipping this RTX generation and waiting for the next one, which will most likely land once DXVK and VKD3D have matured enough to replace Windows.
I don’t see a reason to get RTX either, but if anyone has the mindset of “I want old Windows but also the newest tech,” that point is hypothetical at best; it’s not a real use case. If you’re paying the whopping dollar amount for RTX, there’s a 100% chance you’ve already jumped (or are planning to jump) to Windows 10 for DX12 and other gaming optimizations.
Some people would hope more for Vulkan optimizations, and if anyone did make that mistake, Linux with DXVK/VKD3D and Vulkan has that covered. DXVK 0.72 has come a long way. In fact, it can play Monster Hunter World.
I think Vulkan is cool too, but DX, Windows, and gaming are just tied together. Mostly out of convention more than anything else, but that’s still the reality.
Which is why GPU Passthrough exists. If GPU Passthrough didn’t exist, there would be no fallback solution if you made the big move to Linux and had to run something that doesn’t play well with Wine.
If @wendell gets RTX, I really would like to know whether Nvidia added any more “Code 43” blocking code for KVM.
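For anyone who hits it: the usual workaround for the older Code 43 checks is to hide the hypervisor from the guest in the libvirt domain XML. A config sketch, assuming a libvirt/QEMU setup (the `vendor_id` value is arbitrary; Nvidia’s driver historically bailed out when it detected KVM):

```xml
<features>
  <hyperv>
    <!-- Spoof the Hyper-V vendor ID so the guest driver doesn't see "KVMKVMKVM" -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM hypervisor signature from the guest entirely -->
    <hidden state='on'/>
  </kvm>
</features>
```

Whether Turing still needs this (or adds a new check that this doesn’t cover) is exactly the open question.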
Performance limitations in a lot of games come down to the language the game is written in. Since a lot of companies have moved from C++ to C#, the latency between render calls has increased. With frame timings of, say, 30 ms, the difference between two languages is small but significant. It obviously also depends on the quality of the code, but generally the gap between compiled languages like Rust/C++ and managed languages like C# and Java is on the order of 5-15 ms, and it grows with the amount of content being rendered and the quality of the managed code.
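To make the compiled-vs-managed point concrete, here’s a crude illustration, using Python as a stand-in for an interpreted runtime (not a claim about C# specifically, which is JIT-compiled and much closer to native): the same arithmetic done in an interpreted loop versus a single call into compiled C code differs by a large constant factor per operation, and that overhead is what eats into a per-frame budget.

```python
import timeit

N = 100_000

# Summing in an interpreted loop: every iteration pays bytecode-dispatch cost.
interpreted = timeit.timeit(
    "s = 0\nfor i in range(N):\n    s += i",
    globals={"N": N},
    number=50,
)

# The same sum done in one call into compiled C code (CPython's built-in sum).
compiled = timeit.timeit("sum(range(N))", globals={"N": N}, number=50)

print(f"interpreted loop: {interpreted * 1000:.1f} ms total")
print(f"built-in sum:     {compiled * 1000:.1f} ms total")
print(f"ratio:            {interpreted / compiled:.1f}x")
```

The absolute numbers vary by machine, but the ratio is consistently several-fold, which is the shape of the argument above: per-call overhead that is invisible in isolation adds up across thousands of calls per frame.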
Vulkan does help in some respects, but it’s a nightmare to actually develop for. Only the big boys can afford the cost of devs developing for Vulkan.
Unreal Engine 4’s Vulkan support has input-latency issues that generally make it unusable for a large proportion of games, which leads most devs who care about input latency to pick a different engine. Unity is the only other publicly available engine with Vulkan support that is accessible to everyone from indie to triple-A (though there’s good reason it isn’t used in triple-A games). The issue is that Unity relies on C# and other managed scripting languages for game code, which negates any performance improvement you might see from Vulkan.
This leaves devs with three options: don’t use Vulkan; adopt an engine whose source code can be modified and build Vulkan support into it; or build a new engine that supports Vulkan.
Generally, large studios use in-house engines, whether built by themselves or by another studio under the same publisher.
Medium studios depend on the type of game: some build their own engines, others use existing ones. Small studios generally use existing engines. Tiny indie teams (1-3 people) are again a mixture: some use existing engines, others build their own.
Building an engine isn’t actually that hard when it targets a specific genre; it’s accessible to any dev team with an experienced developer. The hard part comes when you need to build tools and production pipelines to make building the game accessible to designers.
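As a sketch of what the core of a genre-specific engine actually is, here’s a minimal fixed-timestep game loop (hypothetical names, standard library only; a real engine layers input, assets, rendering, and the tooling mentioned above on top of exactly this skeleton):

```python
import time

def run_loop(update, render, *, fps=60, max_frames=None):
    """Minimal fixed-timestep game loop.

    update(dt) advances the simulation by a fixed dt so game logic stays
    deterministic; render() draws whatever state exists right now.
    """
    dt = 1.0 / fps
    accumulator = 0.0
    previous = time.perf_counter()
    frames = 0
    while max_frames is None or frames < max_frames:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Consume real elapsed time in fixed simulation steps.
        while accumulator >= dt:
            update(dt)
            accumulator -= dt
        render()
        frames += 1
    return frames

# Toy usage: a "game" that just integrates a position over time.
state = {"x": 0.0, "steps": 0}

def update(dt):
    state["x"] += 2.0 * dt  # move at 2 units/second
    state["steps"] += 1

def render():
    time.sleep(0.005)  # stand-in for draw time (~5 ms per frame)

run_loop(update, render, fps=60, max_frames=10)
```

The loop itself is a dozen lines; the years of work go into everything a designer needs around it, which is the point above.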
Yeah, I kind of assumed it should support it, though I’m not familiar with how the DX implementation works. I do know, however, that the RTX API is a hybrid of ray tracing and rasterization. If the DX12 version is strictly ray tracing, that makes the RTX API a viable option to go down; however, if the DX version supports a similar hybrid approach, it kind of makes the Nvidia implementation irrelevant.
I do know professional rendering tools already make use of the RT cores on these new cards, excluding Blender, because of course Blender breaks with an RTX card.
I’m a bit unclear on that myself. I’m reading the post discussing it on the Microsoft blog. It seems to hint at hybrid rendering with a push towards full ray tracing later as technology catches up and we have the hardware to do it.
So I’m wondering how Nvidia’s solution will fit in here.
In theory, if the DX pipeline allows for custom ray-tracing implementations, which it should, then as long as there’s no serious performance delta between the two, the RTX API is null and void. Knowing Nvidia, though, their DX12 implementation will be subpar for the first year or so, so that they can try to dominate the ray-tracing segment while AMD designs a competitor.