What have nVidia actually been doing? (rant thread about the people complaining)

I really feel that people have been complaining in a very ill-informed manner about nVidia's recent releases. Yes, they have not released a genuinely new GPU since the Titan, and they have not shown any significant work on a new architecture.


Well, nVidia is a big company, they have a lot of money to throw around, and obviously it would not be wise of them to just stick it all in their pockets. So take a look at where they are actually spending it.

Watch the three videos in this series and see what I am talking about: http://www.youtube.com/watch?v=odi-Uqp_870


I am really annoyed by all the complaining about nVidia not doing anything. nVidia is not pushing new GPU architectures right now, but rather spending their time seeing what they can do with their existing technology, which is arguably as important as the GPUs themselves. Especially since their GPU architecture is more than adequate for current games, it is only logical to try to do more on the GPUs rather than just keep making them faster with not very much more to run on them.

Really, think about it: games are going to start having lighting approaching the realism of movies, with tools that game developers will be able to use themselves. Then, because of the computational complexity of these developments, both nVidia and AMD are going to have to step up their GPU architectures to keep up with the games.


So, no, nVidia have not stopped working; they are working on stuff to run on your overpriced graphics cards.

Stop whining and be happy :)

TL;DR: Look at the good things nVidia and AMD ARE doing, rather than what they are NOT doing

P.S. I am not sure if this is exactly the right place to post this, feel free to move it around the forums

You have to appreciate that Nvidia and AMD are geared very differently. AMD's graphics cards are much more complex and much more expensive to make, and AMD keeps improving the current architecture with driver and API developments. This adds an incredible amount of value to the cards, at AMD's own expense. They aren't looking for excuses to sell new cards when they can improve existing ones. AMD users truly benefit from the longevity these cards possess, with higher bandwidth and more VRAM on top.

Nvidia, on the other hand, re-release the same architecture with next to no improvement, and their cards are inexpensive to make. So Nvidia are merely looking for ways to maximise their profits, and they are seriously lacking in technological advances compared to AMD.

I am actually an Nvidia user. Up until now, it has been a pretty comfortable see-saw motion between the two companies. With next-gen games looming, AMD are once again showing their value, improving their current architecture in astronomical proportions instead of just trying to capitalise on marketing BS. "Buy this card, because it is new" is Nvidia's only answer to everything: launch a new card.

You have to realise that AMD and Nvidia hold things so close to their chests that they wait for the other to make a move before they release something. Once the 290X comes out, I'm willing to bet that within a couple of days we'll start to see Maxwell rumors.

Mid to late 2014 for Nvidia's new card, apparently; whether it's Maxwell or not, who knows. I know if I were going to buy a new GPU it would be AMD, simply because they offer more and actually look to be progressing, whereas Nvidia right now seem to be slipping: cards that are just remakes with tweaks, drivers that are gradually getting worse, not to mention the "scandals" where they supposedly offered companies bonuses/cash to slate AMD and stop using their cards.

If the rumor is true, then what does that say about Nvidia? That they need to pay people to promote their cards because they have nothing to respond to AMD's 290X with yet? Perhaps.

Where do you get that AMD cards are more complex and expensive to make? Not trying to argue, just curious.

I tried for the past 5 minutes to search for an article, and I cannot find one among the mass of reviews and Yahoo Answers results. I'll have to make my points as best I can:

Lower-clocked AMD cards beat higher-clocked Nvidia cards.

More computing power for editing et cetera.

More performance at higher resolutions.

Compatibility with advanced APIs.

All in all, it is a better gaming/productivity solution, indicating superior architecture at a higher cost. Other forum members have indicated the same. I will have to dig up some old forum posts for you, which say a little more about the cards architecturally and about the expense of each unit.


The only way Nvidia competes is with proprietary nonsense, like PhysX. PhysX would run much more efficiently on a CPU, I am led to believe, but it is purposely blocked. Nvidia have a habit of guarding things jealously and not supporting open-source standards. It hasn't worked in their favour. At this time, it appears that the only way Nvidia can keep up is to switch to GCN architecture, to take advantage of recent developments. They are falling really far behind AMD.

I think I read the same article. Nvidia make cheaper cards but then charge a lot for them; it is something to do with the manufacturing process they use. Basically, the finished product costs a lot less than an AMD board, for example.

You also have to couple that with the fact that over the last year Nvidia have just been using the same basic chip and slightly modifying it, which again cuts costs.

Now, they might be doing this for Maxwell, pouring loads of money into R&D for it, OR they may just be spending it on useless things. Who knows :P

I believe Zoltan is the person to ask about this. Upon seeing the title of this thread, he will probably enjoy a good rant in here, anyway.

And yeah, using existing hardware requires fewer resources. Both sides are guilty of launching rebrands. But it appears Nvidia have a faster release schedule, which enables them to keep their prices artificially high. People pay top whack for the newest releases. That means high prices, low production costs, and low R&D costs.

Another interesting observation is that Nvidia often hold their "true" flagship card in reserve. The launch of the Titan quickly stripped AMD of the dominant position they had newly acquired by improving the 7000 series. It appears Nvidia are at it again, releasing some sort of improved 780 in the coming months. Even more interestingly(!), AMD appear to have learnt Nvidia's strategy, and this could be the reason for the delay of AMD's flagship card, the R9 290X: they suspect Nvidia is holding something back. There was talk of a "Titan Ultra" a long while ago. Who knows what will occur.

I've seen a lot of people saying "wait for the Titan Ultra", and I can't help but think it is such a stupid thing to say. I mean, right now the 290X supposedly equals or beats the Titan at best, and at worst fights it out with the 780.

So a Titan Ultra is only going to be a decent card with a massive price tag. I highly doubt Nvidia would reduce the cost of the Titan/780 by enough to make the Ultra fit in, unless the Ultra is going to be $1100+ or something.

I guess that's the point I was trying to make. AMD have totally upset Nvidia's pricing plan and counter-attacking capability. Though, I just read some indication that Nvidia will be launching their own holiday-season games bundle, which is pretty acceptable. They just need to answer Mantle effectively to stay in the game. They do have APIs in development.

Yeah, I saw the bundles they are offering. I had a quick glance, but the overall reaction I saw from people was "meh" or "you get more with AMD's bundle".

Either way it's something, but is it a sign that perhaps Nvidia got caught sleeping by AMD? :D

Yeah, they really need to get an API out, or at least announce one. AMD definitely got the jump on them with Mantle, even more so with it already being supported by BF4.

(Had to edit this a few times and re-read it just as much, haha. Getting used to a new keyboard, the first mechanical I've owned, and it's pretty sweet.)

Well, I had a hiatus from PC when I was at university. Upon building a new PC, I went with the 780. Most games I play are optimised for Nvidia, and I game at 1440p. All I can say is, now that I can afford to wait, my next GPU will probably be an AMD card.

Nvidia has been kicked in the balls. "Sleeping" is probably the appropriate word. I feel the GPU releases became predictable, and Nvidia were all too comfortable with the way things used to be. Now that the whole PC market is undergoing some serious changes, they have been rather slow to react.

Admittedly, I disagree with most of what you're claiming about who has the superior architecture in terms of raw power overall. Think about it: if what you are saying were the case, the 7950 would be eating 680s/770s alive in most things, as on paper (not accounting for architecture) they have close to identical core counts, bus bandwidth, etc. As for your other points: core clocks are an unrealistic comparison to make, and higher resolutions are only relevant to the overall performance of the card itself.

As for APIs, AMD have gained a potential edge in the PC space only recently, but that does not mean Nvidia cards don't work with low-level APIs. Remember the PS3? And I don't know if you watched the conference today, but Nvidia did hint at a low-level API for PC in the works.

As for PhysX, no way in the nine hells would it run better on a CPU. It is a highly multi-threaded workload that benefits more from parallelism than from the raw speed of a CPU; it's just the nature of the beast. Given the new FleX extensions they plan on implementing, PhysX will actually become something worth talking about instead of the "meh" thing it is at the moment. I will admit current PhysX workloads in games on GPUs are not that intensive, but that is more due to limited implementation than anything.

On the other hand, I am far from proud of a lot of the business practices Nvidia have engaged in over the years, and of their misleading marketing at times, where they have exaggerated the capabilities of their hardware. But on the whole, no, they are anything but falling behind AMD. They are leading the industry in many ways, but AMD are aggressively getting on Nvidia's heels, which is good. Real good.

As I stated, most of what I have said indicates AMD's superior architecture. I cannot find the article related to the discussion, but I can give you a 90% guarantee that AMD have the better GPU architecture. There are many ways I could argue this; the fact that AMD has proven easier to develop for could let me describe their GPUs as superior. However, the main point of my argument lies in the design of the GPU, and I cannot explain that design. I've failed to explain it because I am not JJ, or somebody all-knowing. I will put it to the community, or have someone come over and answer that.

I'm aware of Nvidia's API. They have NVAPI, which I have mentioned above (and in other threads), but nothing on the scale of Mantle. The PS3 API is not applicable, because all consoles have a low-level API. They have begun to rush out a Red Hat-developed low-level API to counter Mantle, simply because they have nothing at this time; they are still reliant on DX11.

http://www.hardwareluxx.com/index.php/news/hardware/vgacards/28045-nvidia-red-hat-developing-graphics-api.html

It has been widely reported that CPU PhysX is better than GPU PhysX. Here is the related article:

 http://semiaccurate.com/2010/07/07/nvidia-purposefully-hobbles-physx-cpu/

ATi released the HD 2900 XT with a 512-bit bus width in 2007, when Nvidia were still throwing around 128- and 192-bit cards as high end. It was overkill and not that good, but they got there first (single-GPU, keep in mind). Nvidia bought PhysX and made it a closed thing (thus few games support it), tried rebranding many GPUs after 2008, and still made good sales because of their big name and good game "performance", achieved by sponsoring game devs and better driver support early on. They have fewer stream processors, and the technology is actually older and not really better in terms of raw performance.

ATi/AMD, on the other hand, started with weaker OpenGL performance than Nvidia and bad drivers early on, which really hurt their brand name among gamers, and they stumbled at the DX11 launch with the HD 5000 series' poor tessellation. But their cards have many more FLOPS than any equivalent nVidia card, so the performance isn't "artificial", and it also increases over time with driver improvements.

I wasn't really saying that AMD have the better architecture; I was saying that the production cost of said cards is generally cheaper. How they perform is a different matter.

PhysX, as the article says, is only "crappy" on CPUs due to the coding Nvidia use (x87 instructions and minimal threading, per the article) and has nothing to do with parallelism. It does, however, take extra load off the CPU, but that's not really an excuse for it.

Apparently the "780 ti" was just announced. I'm not sure if it's just a rumor or if there actually was an official announcement. Regardless, it isn't going to be priced and less than the 780 because it is "better" and the 290x is going to be significantly cheaper. This might just be more overpriced bs so that the extreme fanboys can still use the "We have the best single GPU card" attitude. 

Errrr, no. I made the 680 vs 7950 comparison to prove the point that Nvidia's architecture is better: they both have just over 1500 cores and the same bus width, yet at the same core clocks they perform differently, with the Nvidia card pulling ahead in most benches because of its architecture. It's also why the 680 trades blows with the 7970, which has just over 2000 cores.

As for PhysX, see http://www.realworldtech.com/physx87/2/ (this is the analysis the article you just linked is based on, and the two link to each other). Under the PhysX profiling section it states that the GPU has a 2x advantage over the CPU, not the 4x compute advantage Nvidia was claiming, but the GPU is still better for the job, for the reasons I stated above and as verified by that article.

As for APIs, I pointed out the PS3 because you presented your point as if Nvidia could not do low-level hardware access, which is not the case. As for whose API is superior in the PC space, nothing is set in stone yet; no real tests have been done, therefore no real conclusions can be drawn. So no one can really claim superiority, or say that a certain party has done a shabby job.

I've seen the 7950 beat those cards quite capably, in truth, in both gaming and productivity. Especially productivity; only marginally in gaming.

If PhysX runs better on a GPU, then why does Nvidia purposely block PhysX from working properly on the CPU, or through open standards? Because of shady business practices, very typical of Nvidia. By supporting OpenCL, they would negate the PhysX "advantage" they have over AMD, and the marketing BS that goes alongside it. The fact remains, Nvidia has probable cause to sabotage OpenCL PhysX.

From the article you posted (page 5): After reading the entirety of the last page, the article completely throws out your point. It goes as far as saying GPUs have the performance edge "because Nvidia wants PhysX to be an exclusive".

While as a buyer it may be frustrating to see PhysX hobbled on the CPU, it should not be surprising. Nvidia has no obligation to optimize for their competitor’s products.

That actually correlates with everything I have said thus far. Though I will concede that it is good business sense to block CPU PhysX.

I never said they couldn't produce a low-level API. But they are having to rush one, and as things stand, AMD have proven easier to develop for, given their support of open standards, making their architecture much more favourable to developers. The reason your PS3 point is not applicable is that console hardware is uniform; it is always going to be programmed at a low level. I agree that nothing is set in stone, and I hope that Nvidia come up with a competing low-level API.

The strength of AMD's cards is in the design, I can promise you that. By the way, I am running a 780, so this isn't that kind of bias.

I disagree; AMD and nVidia appear to have optimized their cards differently (directly comparing Kepler and Southern Islands). AMD has larger memory buses to drive higher resolutions more reliably, and their chips are somewhat fatter (larger) and slower-clocked than nVidia's. nVidia appear to have focussed on a higher-clocked chip, possibly to cut down on silicon usage and other production costs.

Compare the 7970 and 680.

Cores: 2048 vs 1536 (33.3% more on 7970)

Core clock: 925 vs 1058 (boost) (GTX 680 ~14% faster clocked)

Memory bus: 384-bit vs 256-bit (50% more on 7970)

Memory clock (effective): 5.5GHz vs 6GHz (GTX 680 ~9% faster)

Memory bandwidth: 264GB/s vs 192GB/s (37.5% more on 7970)

As far as raw performance (GFLOPS) goes, the 7970 beats the 680 on paper by roughly 15-20% (depending on which clocks you compare), but in certain games it fell behind at lower resolutions. I can only assume that the difference here is caused by a) drivers and b) the architecture itself, possibly by the ROP count being the same but faster-clocked on the 680. At higher resolutions the advantage of the 384-bit over the 256-bit memory bus becomes apparent, the 7970 being able to handle large amounts of VRAM more effectively.
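
To put those paper numbers into perspective, here is a quick back-of-the-envelope sketch (plain Python, using only the specs listed above; FP32 GFLOPS is approximated as 2 FLOPs, i.e. one multiply-add, per core per clock, which is how the usual marketing figures are derived):

# Theoretical throughput from the spec sheet figures listed above.

def gflops(cores, core_clock_mhz):
    # 2 FLOPs (one fused multiply-add) per core per clock cycle
    return 2 * cores * core_clock_mhz / 1000.0

def bandwidth_gb_s(bus_bits, effective_mem_clock_ghz):
    # bytes moved per transfer, times effective transfer rate
    return (bus_bits / 8) * effective_mem_clock_ghz

cards = {
    "HD 7970": {"cores": 2048, "clock": 925,  "bus": 384, "mem": 5.5},
    "GTX 680": {"cores": 1536, "clock": 1058, "bus": 256, "mem": 6.0},
}

for name, c in cards.items():
    print(name,
          round(gflops(c["cores"], c["clock"])), "GFLOPS,",
          round(bandwidth_gb_s(c["bus"], c["mem"])), "GB/s")

# HD 7970: 3789 GFLOPS, 264 GB/s
# GTX 680: 3250 GFLOPS, 192 GB/s (at the boost clock; less at base)

So on paper the 7970 has roughly 16% more raw compute and 37.5% more bandwidth. Of course, the theoretical numbers only set the ceiling; drivers and how well the architecture keeps its cores fed decide how much of that shows up in games.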

Overall I would consider the cards pretty much even in performance, trading blows with each other in different games at different settings.

But the nVidia card has consistently lower power consumption for similar performance, uses less silicon, and has a generally more meagre memory subsystem, so I would actually consider the GTX 680 a more efficient card overall than the 7970, as it uses less to get pretty much the same performance at 1920x1080.


As far as PhysX goes, although I don't completely agree with it being proprietary, I do believe that running it on a GPU is better latency-wise, and possibly better as far as the general speed of the operations goes. Although you are cutting into your graphics horsepower, you are able to run it off the VRAM, and also take advantage of the massive parallelization the GPU has to offer with its ~1.5k cores, compared to the 4 cores of an i5. It would really come down to the individual calculations: you would have to look at them on an individual basis and decide which you can parallelize and which you are better off processing serially on the CPU. Another advantage is that you can do the physics calculations as part of the graphics calculations without having to exchange a lot of information between the GPU and CPU. Also, as the information of the scene (before rendering) is stored on the GPU, it is best to edit it using the GPU, so you don't need to transfer the information from the GPU's VRAM to the CPU and back.
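
As a toy illustration of why physics suits thousands of cores, here is a minimal sketch (Python with NumPy, NumPy's vectorization standing in for the GPU's data-parallel execution; the particle count and forces are made up for the example):

import numpy as np

N = 100_000                           # hypothetical particle count
dt = 1.0 / 60.0                       # one frame at 60 FPS
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)

pos = np.random.rand(N, 3).astype(np.float32)   # particle positions
vel = np.zeros((N, 3), dtype=np.float32)        # particle velocities

def step_parallel(pos, vel):
    # One integration step for ALL particles at once; every particle's
    # update is independent, so this maps directly onto GPU-style
    # data-parallel hardware.
    vel += gravity * dt
    pos += vel * dt

def step_serial(pos, vel):
    # The same update done one particle at a time, which is what a
    # single CPU core is stuck doing without heavy SIMD/multi-threading.
    for i in range(N):
        vel[i] += gravity * dt
        pos[i] += vel[i] * dt

step_parallel(pos, vel)   # orders of magnitude faster than step_serial

The point is simply that each particle's update does not depend on the others, which is exactly the shape of work a GPU with 1.5k cores eats for breakfast, and doing it there keeps the scene data on the GPU the whole time.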

Note that there are obviously a lot of specifics I don't know about, as usually the optimization of these systems really will come down to trial and error, and I have not analyzed the process completely. But, as I said, there are very legitimate advantages to doing physics calculations on a GPU rather than a CPU.

One of the things that struck me about what nVidia is doing with their proprietary software is that they are integrating it very deeply into the game engines themselves, which is actually a somewhat different approach to the problem than AMD's. I feel that both are legitimate, but AMD's approach allows the game developer more freedom, at the expense of an increased (EDIT: let me restate that: HUGELY INCREASED) workload on their (the game devs') part, while nVidia's approach allows the developer to save a lot of time when making the game, and allows nVidia to optimize the process at any later point, of course at the expense of more work having to be done on nVidia's part.