Remember the benchmarks that showed almost no improvement for Nvidia when using DirectX 12?
According to this article at TechPowerUp, the Maxwell architecture doesn't support asynchronous compute, one of the major features of DirectX 12, even though the driver tells Windows that the card is capable of it.
Is this a driver problem (from the TechPowerUp article I gathered that this is based on the Ashes of the Singularity devs' opinions) that could be remedied by a driver update? Or is it really true that Maxwell wasn't designed with DirectX 12 in mind, so much so that it's missing one of the more important features?
Judging from Nvidia's reaction, and the fact that they already lied about specs with the 970, I am inclined to say this is a hardware problem that won't go away with drivers.
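For anyone wondering what "asynchronous compute" actually looks like to a game, here's my rough understanding as a minimal D3D12 sketch (just device and queue creation, nothing more): the app creates a separate compute queue alongside the normal direct/graphics queue and submits work to both. Whether the GPU actually runs the two queues concurrently or quietly serializes them is entirely up to the hardware and driver, which is exactly what's in dispute for Maxwell.

```cpp
// Minimal illustrative sketch (Windows, link with d3d12.lib).
// Only shows queue creation; command list recording and fences are omitted.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The "direct" queue handles graphics (and can also run compute/copy work).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> graphicsQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // A dedicated compute queue: work submitted here *may* overlap with the
    // graphics queue, but only if the GPU/driver actually executes the two
    // queues concurrently instead of serializing them.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // ... record command lists and submit to each queue; an ID3D12Fence is
    // used to synchronize the two queues wherever they share results.
    return 0;
}
```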
So... 970 issues - Nvidia is not guilty... Lawsuits against Samsung and Qualcomm for questionable stuff - Nvidia is not guilty. Nvidia games - Batman, Assassin's Creed, etc... No, it's the developer, Nvidia is not guilty. Nvidia falsely advertising Batman - Nvidia is not guilty... Nvidia using non-G-Sync technology on G-Sync branded laptops... Nvidia is still not guilty. Now we had a DX12 test, granted it's just one DX12 test, but it's DX12 nonetheless... Nvidia sucks big fat coc*sicle on it, and how do they react? They blame the devs. So the devs are guilty, not Nvidia. Now we have shitty DX12 support, and my guess is Nvidia will not be guilty again. And we all tolerate this, don't we? Nvidia's market share is growing, they are doing shitty and shady stuff, and the SAD part is, people are OK with it...
I was already aware of this. Nvidia shouldn't be complaining about this problem... For a few reasons really.
DX12 has Mantle features at the back end of the API. You had every chance in the world to support Mantle, but chose not to, because Nvidia doesn't like anything that is "open".
It's actually an architecture problem... NOT a driver problem. Nvidia cards aren't good at computational processing; it's something AMD has thrived on for years. (Why do you think people went through hell and high water to pick up 290X cards during the Bitcoin mining craze?) Because they compute things better.
I still feel like Maxwell was a failure of a launch; there were WAAAAAY too many issues at the architectural level.
Overall, Nvidia is just butthurt that their cards aren't holding up well over the long term. Hopefully it gets fixed with Pascal.
All this really means is that devs probably won't use asynchronous compute (unless the work is done for them)... Ashes is a special case due to its Mantle history.
Majority market share means that the gaming world now moves at Nvidia's pace; history (and common sense) shows us that devs publish for the many, not the few.
The following is me being down/pessimistic:
I am sure that once async is properly in place in their architecture (implemented slightly differently, in a way that hobbles AMD performance, with money paid to devs to use their Kool-Aid 'GameWorks' version), then we will start seeing it used.
I have been saying this for a long time. Regardless of the truth, the business practices, how your hardware will hold up, or even how it gets downgraded in performance over the years, somehow I am the AMD fanboy.
I have given up warning idiots about nVidia. I just let them spend their money; it either works out for them or it doesn't. Whatever, it's not worth my energy trying to help people out when they want no part of it.
It is crazy, though: people paying to be treated this way? And tech YouTubers keep recommending Nvidia; knowing all this, that is like telling someone to waste money. AND NONE OF THEM BRING IT UP. They are supposed to be at least unbiased, and one would expect that they might even be looking out for their viewers, but nope. At this stage it looks like everyone is either being paid off or defending nVidia for free with some other gain somewhere.
When the 970 VRAM stuff popped up, all the big tech YouTubers jumped to defend Nvidia: how VRAM doesn't matter, how this is OK, how despite lying to their customers Nvidia is still innocent. I am not a fanboy... OK, I can't go on with a straight face, I am a fanboy, but something made me a fanboy. Maybe it's the fact that the most powerful graphics card is still AMD-based; maybe it's because the ONLY Nvidia GPU without a better AMD alternative is the 980 Ti; maybe because even the new 950 reviews pit the $180 models against the $150 R7 370 and not against the $180 R9 380; maybe because of the console-like exclusivity of games like Arkham, AC, and Project Cars (in that game a 750 Ti beats a 390X); maybe because I have a choice of upgrade right now, and my choice is an 8-core AMD with overclock or a Core i3 from Intel without overclock... Maybe because it just so happened that the VRAM actually matters; maybe because I am tired of Nvidia's screw-ups and of people defending them. Are they all so blinded, or what? I don't get it...
The 8800 GTX runs over its TDP and overstresses PSUs not built for it - nVidia is not guilty. Mobile GeForce 8000 series chips desoldering themselves - nVidia is not guilty. Fermi GF100 hot chips with a 1.8% yield - nVidia is not guilty. GTX 480s catching fire - nVidia is not guilty. GTX 590s catching fire - nVidia is not guilty. The GTX Titan having no proper drivers until 3 months after launch - nVidia is not guilty. Drivers that melted GTX 780s - nVidia is not guilty. The GTX 960 beating the Titan Black and GTX 780 Ti because of nerfed drivers - nVidia is not guilty.
They've been pulling stupid stunts for over 10 fucking years, and yet everyone thinks they are amazing and reliable. Boggles the mind.
Well, tessellation is primarily used to increase an object's vertex count to make curves smoother. It can also give a physics simulation more points to work with, for more realistic dynamic movement.
Yes, Nvidia cards/drivers handle tessellation much better than AMD's. Note how the 6870, 6970, and 7970 all perform worse than the 580 past the 11-subdivision mark, and worse than a 560 Ti past the 14-subdivision mark? Why wouldn't Nvidia hand a developer some money to get them to use those levels of tessellation?
That is of course a VERY old benchmark. Things have improved since then, but nVidia does still hold a bit of an edge in tessellation. Interesting, since AMD was first to support hardware-accelerated tessellation.
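To put some rough numbers on why those subdivision levels matter: under a simple uniform-tessellation model (my own simplification, ignoring partitioning modes and separate edge/inside factors), the triangle count of a patch grows with roughly the square of the factor, so the cost ramps up fast past factors like 11 or 14. A quick C++ sketch of that growth:

```cpp
#include <cstdio>

// Rough model: a single triangle patch uniformly tessellated with an
// integer factor n splits into about n*n triangles and (n+1)*(n+2)/2
// vertices. This is a simplification; real hardware tessellators use
// separate edge/inside factors and different partitioning modes.
int main() {
    for (int n = 1; n <= 16; ++n) {
        long long tris  = static_cast<long long>(n) * n;
        long long verts = static_cast<long long>(n + 1) * (n + 2) / 2;
        std::printf("factor %2d: ~%4lld triangles, ~%4lld vertices per patch\n",
                    n, tris, verts);
    }
    return 0;
}
```

Point being: pushing devs toward very high factors means pushing them onto a quadratic cost curve, which matters a lot if your competitor's hardware falls off earlier on that curve.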
I had two T61p ThinkPads that were great to use and play on, but both motherboards died because of the Nvidia G86M chip issue. They got away with it because the laptops started dying just after the one-year warranty had ended.
I've had only Nvidia cards up until now, but I always recommend AMD. For my next upgrade I'd probably buy a top-end AMD card even if it's 'slower'.
Other than the laptops, I've had a good experience with my desktop cards (GF2 MX440, GF4 Ti4200, 6100 LE, 9400 GT, 8800 GT, GTS 250, GT 240, and now a GTX 770).
We need to surface this info more and more, and give tangible evidence for these points, so the fanboys have to bite their tongues.
It was actually the HD 3000 (RV670) series that introduced hardware tessellation; it had a dedicated programmable tessellator. It wasn't usable until the last few months of the HD 4000 (RV770) series' life, however, since DirectX didn't support hardware tessellation until the very end of the DX10 era (it only arrived as a standard feature with DX11).
(If we want to get really technical, the Xbox 360 was first to have hardware tessellation in the Xenos chip ATI designed for it.)
You got all the good cards in there. Well, besides the 6100 LE and 9400 GT... Those cards were all solid performers with mature hardware (although the 8800 GT was less than mature, seeing as it was G92; but G92 was a direct improvement of G80, so we'll still call it matured. :P) and held good price points in the market. I love the 8800 GT and still think it's one of the best designed and marketed GPUs ever. The GTS 250 was a price/performance powerhouse as well, since it was the previously overpriced and under-performing 9800 GTX+ shrunk into a smaller, more manageable price and package.
I used nVidia exclusively until 2009, at which point I was sick of the overpriced, overheated cards they were making and got an HD 4890 when they went on fire sale. I've been buying AMD for my personal use since. (Except that time I owned a GTX 690 in 2013, but come on, that is an irresistible-looking card.)
The fun part is, it probably won't be, unless you go with a Fury X... When Nvidia and AMD GPUs have the same performance, I say go AMD. Oh, but PhysX and DSR and this and that. And when you enable PhysX, the Nvidia GPU's performance drops by 20% and you have a waaay slower card, and the visual effects are barely noticeable even in a side-by-side comparison. We can bark about it here, but the truth is people are gonna ignore that sh... too, just like everything else, keep buying their stuff, and when WB and Rockstar release their next Nvidia-exclusive game people will say again, look how much faster Nvidia is. Just like Linus, JayzTwoCents and many, many others. Oh yeah, and the review websites will benchmark with those games, just like they do with GTA 5, Batman, and Project Cars.