Intel XeSS: XMX and DX12 exclusivity at launch?

According to Tim from Hardware Unboxed (who wrote this article), sources at Intel have told him that XeSS titles will only support Intel’s XMX instructions at launch, and not the DP4a instructions. Since that limits XeSS exclusively to Intel Arc GPUs, he believes developers won’t be incentivized to include the technology, making it DOA.
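For context on what’s being cut here: DP4a is a packed-integer dot-product instruction that most recent GPUs (not just Arc) can execute, while XMX is Arc-only matrix hardware. A minimal scalar sketch of what a signed DP4a computes, just to show the semantics (this is not an intrinsic from any vendor’s SDK, only a model of the hardware op):

```cpp
#include <cstdint>

// Scalar model of a signed DP4a: treat two 32-bit registers as four packed
// 8-bit integers each, multiply pairwise, and add into a 32-bit accumulator.
int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = static_cast<int8_t>(a >> (8 * i));
        int8_t bi = static_cast<int8_t>(b >> (8 * i));
        acc += static_cast<int32_t>(ai) * static_cast<int32_t>(bi);
    }
    return acc;  // a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w + acc_in
}
```

That breadth of hardware support is exactly why dropping the DP4a path at launch makes XeSS Arc-exclusive.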

With FSR 2.0 launching in major game releases BEFORE XeSS, it adds a level of embarrassment that Intel still has nothing to show in a released game right now. Tech demos upon tech demos, but nothing actually in a game yet, while FSR 2.0 has already made it into public-facing code.

Another slide released from GDC says DX12 will be the initial focus of XeSS DP4a, with no mention of the Vulkan API anywhere.

Honestly, this is not a promising start: we have both platform exclusivity and DX12 exclusivity (I bet VKD3D devs will NOT like that it’s DX12-only for non-Intel GPUs), and both are biting Intel exactly where they need this to succeed.

One, you’re going to miss an initial wave of developers who don’t want to implement a platform-exclusive feature for hardware with almost no market share out of the gate. Like FSR 2.0 and DLSS, and unlike FSR 1.0, XeSS requires integration into the game’s rendering pipeline, as the sketch below illustrates.
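To make concrete what “integration into the rendering pipeline” means: temporal upscalers need per-frame data from inside the renderer, not just the final image. A rough sketch of the kind of inputs involved follows; all type and field names here are hypothetical, for illustration only, and are not the actual XeSS SDK API:

```cpp
#include <cstdint>

// Hypothetical opaque GPU resource handle; a real engine has its own.
struct TextureHandle { uint64_t id = 0; };

// Per-frame inputs a temporal upscaler (DLSS, FSR 2.0, XeSS) consumes.
struct UpscalerInputs {
    TextureHandle jitteredColor;     // scene rendered at lower internal resolution
    TextureHandle depth;             // matching depth buffer
    TextureHandle motionVectors;     // per-pixel motion, supplied by the engine
    float jitterX = 0, jitterY = 0;  // sub-pixel camera jitter for this frame
    bool resetHistory = false;       // true on camera cuts / teleports
};
```

FSR 1.0, by contrast, is a pure spatial filter that only needs the final colour image, which is why it was so much cheaper for developers to adopt.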

Two, limiting cross-vendor DP4a to DX12 at launch is shooting yourself in the foot when you’re also touting cross-platform compatibility and an open-source nature. (The only exception is Xbox, if DX12 running on a different platform counts.) Imagine Unity or UE5 on Linux wanting to implement XeSS but being stuck with nothing other than the XMX path, because Intel provides no Vulkan support for DP4a. Vulkan itself supports DP4a, but it’s up to Intel to make the XeSS API use it, and in the worst case even detect whether VKD3D is present. There’s a sketch of the Vulkan capability check below.
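For what it’s worth, the capability query really does exist on the Vulkan side: the VK_KHR_shader_integer_dot_product extension (promoted to core in Vulkan 1.3) exposes DP4a-style packed dot products. A minimal sketch of how an engine could check for it, assuming the driver advertises the extension or Vulkan 1.3; whether XeSS ever uses this path is entirely up to Intel:

```cpp
#include <vulkan/vulkan.h>

// Returns true if the GPU exposes (and accelerates) the packed 4x8-bit
// signed integer dot product -- the DP4a shape -- on Vulkan.
// Assumes VK_KHR_shader_integer_dot_product (or Vulkan 1.3) is available.
bool supportsDp4a(VkPhysicalDevice gpu) {
    VkPhysicalDeviceShaderIntegerDotProductFeaturesKHR dotFeatures{};
    dotFeatures.sType =
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_INTEGER_DOT_PRODUCT_FEATURES_KHR;

    VkPhysicalDeviceFeatures2 features2{};
    features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
    features2.pNext = &dotFeatures;
    vkGetPhysicalDeviceFeatures2(gpu, &features2);

    // The properties struct reports whether the packed 4x8-bit signed dot
    // product is actually hardware accelerated, not just emulated.
    VkPhysicalDeviceShaderIntegerDotProductPropertiesKHR dotProps{};
    dotProps.sType =
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_INTEGER_DOT_PRODUCT_PROPERTIES_KHR;

    VkPhysicalDeviceProperties2 props2{};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &dotProps;
    vkGetPhysicalDeviceProperties2(gpu, &props2);

    return dotFeatures.shaderIntegerDotProduct == VK_TRUE &&
           dotProps.integerDotProduct4x8BitPackedSignedAccelerated == VK_TRUE;
}
```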

I honestly think the first version of XeSS is going to flop, but XeSS 2.0 will likely learn from these mistakes and finally implement DP4a.


Is XeSS an Intel Arc GPU exclusive like DLSS, or does it work on all hardware?

At launch XeSS will be Intel-exclusive. The version that works on other GPUs has been delayed indefinitely.

Given that FSR 2.0 is on par with DLSS and is open, why would anyone bother with an Intel-only solution when they’re at basically sub-10% of the 3D gaming market?

It will be interesting to see how much “financial horsepower” Intel throws at game devs for this.

Exactly Tim’s point. They shot themselves in the foot for initial launch support.


I think anyone who believes Intel cares about market share and the uptake of XeSS among gamers is fooling themselves. Arc exists because Intel has been losing share in the datacentre to Nvidia for years: tasks that used to be done by big, expensive Xeon CPUs have been moving to GPUs since Fermi, and XeSS is just meant to showcase the machine learning capabilities of the card (as was DLSS). I remember when Volta first came out, the narrative among the punditry was that tensor cores wouldn’t make it to consumer GPUs at all; they would stay enabled only on Quadro and Tesla cards (and possibly Titan).

At the end of the day, the only company that relies on gamers as far as GPUs go is AMD, and that’s because they have utterly failed when it comes to getting their compute utilised by industry. Every big professional workload demands CUDA. This is the hurdle Intel will be focusing on, not better upscaling for gamers.

With that said, given that every new Intel iGPU is going to support XeSS, I don’t think the market share is going to be as low as everyone here seems to think.

FSR 2.0 actually breaks apart pretty badly when it comes to distant models, transparency, and animation. Digital Foundry took a closer look at it a while back and yeah, it wasn’t pretty.

Still better than FSR 1.0, but it’s no DLSS 2.


There were a couple of sections that I initially thought looked better, but on further inspection FSR was actually over-sharpening them, producing artefacting that wasn’t present in the DLSS or native versions.

Having said that, the “4K native” presentation of that game looked atrocious. The TAA implementation was like a thick layer of vaseline; it lost all the detail definition that is the entire point of playing at 4K. I really wish the comparison had been made against 4K native without any AA at all, because realistically, when I play at 4K (and I do), I usually don’t bother with post-processing antialiasing if I have a choice. I’ll enable MSAA if I can, because it still looks the best (outside of actual supersampling), and because if a game is old enough to have MSAA, it’s going to be a light enough workload that MSAA isn’t overly demanding.

Generally TAA looks OK once it has CAS at level 3 or above applied to it. I think CAS at strength 3 is pretty good for most sharpening passes; there’s a rough sketch of the idea below.
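For anyone curious what the “contrast adaptive” part means: CAS scales its sharpening down in areas that already have high local contrast, so crisp edges don’t get over-sharpened the way a naive unsharp mask would. A toy sketch of that idea on a single greyscale channel; this is illustrative only, not AMD’s actual FidelityFX CAS code, and the weight formula is simplified:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Toy contrast-adaptive sharpen for one pixel of a greyscale image in [0,1].
// strength in [0,1] plays the role of the "CAS level" knob.
float casPixel(const std::vector<float>& img, int w, int h,
               int x, int y, float strength) {
    auto at = [&](int px, int py) {
        return img[std::clamp(py, 0, h - 1) * w + std::clamp(px, 0, w - 1)];
    };
    // Cross-shaped neighbourhood around the pixel.
    float n = at(x, y - 1), s = at(x, y + 1);
    float e = at(x + 1, y), o = at(x - 1, y), c = at(x, y);
    float mn = std::min({n, s, e, o, c});
    float mx = std::max({n, s, e, o, c});
    // Local contrast term: small near hard edges, larger in flat regions,
    // so already-crisp edges receive less sharpening.
    float amount = std::sqrt(std::max(std::min(mn, 1.0f - mx), 0.0f));
    float ring = amount * (-0.125f - 0.075f * strength);  // negative ring weight
    // 5-tap sharpen, normalised so a flat region passes through unchanged.
    return (ring * (n + s + e + o) + c) / (4.0f * ring + 1.0f);
}
```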

Playing KCD at the moment with TAA and CAS; it looks decent. Interestingly enough, if I enable TAA and SMAA together it has this ghosting effect, but TAA by itself seems fine.

I hear CMAA2 is good, but I have no idea which games use it. (Turns out it’s only good when games implement it themselves, like the Codemasters games; the shader FX injection method isn’t great.)
