NVIDIA To Officially Support VESA Adaptive Sync (FreeSync) Under “G-Sync Compatible” Branding



Nvidia will still sell the modules for a while, but over the next year or so hardly anyone will be buying them. And at next year's CES, nobody will announce any new hardware with the G-Sync module.


It looks like none of the 12 monitors that Nvidia approved are FreeSync 2 certified, and therefore none of them have HDR support. I wonder if this is specifically to avoid cannibalizing their “G-Sync Ultimate” line?

Or do monitor companies just refuse to pay for Nvidia’s certification if they already have FreeSync 2 certification?


I would disagree with that; anything AMD does in terms of open source technology gets quickly adopted by developers and manufacturers, who improve the technology and make it their own. A perfect example is TressFX, which is a staple in Square Enix games. The same can be said for TrueAudio, which evolved into the 3D directional audio that some engines now incorporate. This is the advantage of open source: developers quickly adopt and improve it on their own, without the limitations of hardware-specific optimizations.

To further my point: do you think DX12 or Vulkan on PC would even be a thing today if AMD had made Mantle proprietary? I don’t think so.


What in the world are you talking about? Mantle was proprietary, only available on AMD drivers and recent hardware.

Once it was clearly dead, AMD donated the API to Khronos, where it became the basis for Vulkan. But that was after the fact, when DX12 and the OpenGL extensions were already clearly the winners.

It’s fair to say that AMD’s innovation in bringing CPU offloading, previously only seen on consoles, to their Windows drivers triggered Microsoft and Khronos to build their competing and ultimately successful APIs. But that’s not because Mantle was open; it’s because it was a good idea.


Ahh, well, same outcome. It became open source because they knew it would benefit gamers in the long run. Same for their Open Works.


The only reason most companies choose between open and closed is market share:

  • if you have the market share, go proprietary and lock it down
  • if you don’t, make it open / free

I am very glad that this is one instance where free / open seems to be kicking proprietary’s arse… even though it took a long while to build up momentum.


Gamers are starting to learn that proprietary carries a premium cost; that’s why the momentum is building. Shit’s getting expensive. Do you really think raytracing will hide behind Nvidia’s walled garden for long? That one’s going away too, just like the G-Sync premium is deflating.


At least 3 years, yes… by which time all current raytracing hardware will be obsolete.

…Navi could surprise absolutely everyone and have it, but it’s unlikely… in which case we will be waiting for the gen AFTER ‘next gen’ on AMD’s side of things.

BUT by then Nvidia (and the game devs implementing raytracing) will have a rather large advantage through maturity and hardware tweaks along the way for their next generations of products.


I don’t think so. AMD has more clout in gaming than Nvidia: consoles run on AMD hardware, so they have every vested interest in pushing their own raytracing too rather than waiting 3 years. They may not even need dedicated silicon on their chip for it; their async architecture might handle their own flavor of raytracing, one that isn’t hidden behind a black box like GameWorks is.


A lot of maybes there.

Consoles aside, Nvidia has a stranglehold on PC at the moment, which has far higher margins than console.

God knows how many PS4 sales it would take to match the profit on a single 2080…

Please don’t get me wrong, I want you to be right. I have often wondered why GCN’s extra compute couldn’t be used to similar effect.

Game development costs and time would drop massively (on console games, at least).

No more baking of scene lighting or setting up reflection maps, etc.; the console would just brute-force all the stuff that normally costs game studios big bucks.

… as an aside, in most industries when this happens people often lose jobs as a result :smiley:

I think we are getting way off track though.


Well, hardware aside, raytracing is still in its infancy, but as new, more efficient forms of raytracing become available, hardware dependency won’t factor in that much.

This guy talks all about the current implementation of it and where it’s going.


Dude :smiley:

I use Radeon ProRender in Cinema 4D on my Vega 64; I’m well aware of its usage and the current limitations around speed / quality, and I’ve also seen that video before.

Not ready for prime time

By prime time… I mean gaming, sorry for not being clear.


You’re working on pre-rendered stuff; this video talks about real-time rendering. Two completely different beasts.




REAL TIME rendering within my viewport in Cinema 4D, done through raytracing.


Learning a lot in this discussion, despite my lurking, know that it is appreciated @Giulianno_D @flazza @Ruffalo


Don’t let this go off topic for a third time.

On topic: The confusion Nvidia is introducing with this is annoying; having to explain the difference between a module-based and a VESA Adaptive Sync G-Sync monitor ain’t gonna be great.

Regardless I’m a happy camper, I got a Radeon 570 placeholder card along with a [email protected] freesync monitor hoping to upgrade to Navi in early 2019.
At least now I have the option to pick Nvidia as well, should they release some sweet 7nm GPUs.


Knowing AMD, some add-in card that then does Raytracing would be more likely.


AMD cards do very well in compute, and Raytracing is just raw compute. So perhaps?


That statement doesn’t stand up to a bit of thought. If that were the case, why would Nvidia build dedicated RT cores rather than additional CUDA cores useful for general compute, scientific visualization, machine learning, etc.? The answer is: they wouldn’t.


Nvidia made purpose-built cards for the current API of choice.
For example, the 9 and 10 series are pure DX11 pixel pushers.
And while CUDA can do certain ops, GPUs remain limited in their instruction set, although they can solve simple problems quickly.

Nvidia’s approach to raytracing is purpose-built hardware (again); it is a more or less fixed pipeline that does a lot of power, multiplication, and addition tasks (optical raytracing does not need much more).
The result is a rather specialized piece of hardware that traces a more or less fixed number of rays in a given time (20 ms, plus 16 ms for traditional graphics, possibly running synchronously and increasing power draw).

In other words, Nvidia took their existing GPUs, put an external instruction bus on the instruction decoder, and fed that into their raytracing hardware. Which is a very sensible approach to avoid redundant hardware or offloading tasks to the CPU (via the driver).
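To illustrate the “multiplication and addition” point: the inner loop of a classic ray tracer is dominated by dot products (multiply-adds) plus the occasional square root, which is exactly what fixed-function hardware is good at. A minimal sketch of a ray-sphere intersection test (a generic textbook example, not Nvidia’s actual RT-core pipeline):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest ray-sphere intersection,
    or None if the ray misses. Note the arithmetic: only multiplies,
    adds, and one square root per test."""
    # Vector from sphere center to ray origin
    oc = [o - c for o, c in zip(origin, center)]
    # Quadratic coefficients for |origin + t*direction - center|^2 = radius^2
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None  # intersections behind the origin don't count

# Ray from the origin along +z toward a unit sphere centered at z = 5
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

A GPU runs millions of these tests per frame (plus BVH traversal to cull most of them), which is why dedicating multiply-add pipelines to it pays off.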

Small overview of optical raytracing (not real time) Link

Nvidia OptiX Whitepaper

Why would anyone use a GPU when a CPU can do graphics in software? You just need a “PCIe to HDMI” adapter and you’re good to go.


I think it’s not out of the realm of possibility at all. PC gamers seem to underestimate low-overhead API coding, which async architectures are really goooooood at. Look at the Xbox One X, for example, with the native 4K games it’s able to output on a weak CPU. Look at how taxing HairWorks was for Nvidia’s own hardware compared to AMD’s TressFX on its own hardware. We could very well see a similar approach to raytracing. I think the better approach is going to be a combined CPU/GPU async load, not a dedicated GPU load, which AMD has already been mastering since the first GCN architecture.