Nvidia Volta Dead, Ampere coming but with issues

I am sourcing this from another forum, but it should provide some interesting discussion. I can see that Nvidia is honestly going to need to redesign its next node, or else there will be many issues with refreshing. It seems from what I have gathered here that simply moving to a smaller node is more challenging with graphics cards than with CPUs.

I can see prices seeing a bit of a hike on the cards. After all, Nvidia has not made very much profit with Volta at all, and to skip it after researching it is quite a fiscal hit.

Ampere is set, and for Q2 2018 at that. The first ones are already starting with the BoM.
Source: Igor Wallossek of Tom’s Hardware @ 3DCenter Forum
Evidently, Igor has once again seen a new internal roadmap, which no longer lists "Gen. 4" but simply "Ampere" - and now firmly for the second quarter of 2018. This signals a significant change to nVidia's plans, in which the Volta-based gaming chips GV102, GV104 and GV106 (as well as possibly others) were surely once firmly planned - and have now been dropped in favor of the next generation. What that next generation is remains unclear - one conceivable explanation is that nVidia misjudged the manufacturing technology, and the 10nm process actually intended for Volta is simply not ready in time for large graphics chips. For the HPC chip GV100, which absolutely had to be brought to market, nVidia had to fall back on the 12nm process, which produced an unusually large chip at 815mm² of die area. That extreme die area can well be read as an indication that nVidia had actually planned the GV100 for 10nm production - in which case the same chip would probably measure ~500-550mm². Possibly the GV100 also had to be slimmed down somewhat in its final form due to the limitations of the 12nm process; the non-fulfillment of some early performance promises points in this direction.
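To put numbers on that die-area argument, here is a rough back-of-envelope calculation (the scaling factors are assumptions taken from TSMC's public marketing claims, not confirmed figures):

```python
# Back-of-envelope: what would GV100's 815mm^2 die (12nm) shrink to on 10nm?
# Assumed scaling factors (TSMC marketing claims, not confirmed numbers):
#   12nm needs ~0.80x the area of 16nm for the same design
#   10nm needs ~0.52x the area of 16nm for the same design
AREA_12NM_VS_16NM = 0.80
AREA_10NM_VS_16NM = 0.52

gv100_12nm_area = 815.0  # mm^2, the actual GV100 die size

# Normalize back to a hypothetical 16nm layout, then shrink to 10nm
area_16nm_equiv = gv100_12nm_area / AREA_12NM_VS_16NM  # ~1019 mm^2
area_10nm_est = area_16nm_equiv * AREA_10NM_VS_16NM    # ~530 mm^2

print(f"16nm-equivalent area: {area_16nm_equiv:.0f} mm^2")
print(f"estimated 10nm area:  {area_10nm_est:.0f} mm^2")
```

The ~530mm² result lands right inside the article's ~500-550mm² estimate, which is why the 815mm² die reads like a design originally laid out for 10nm.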

For gaming chips on the 12nm process the pressure is lower, because they (mostly) do not push into such extreme die-size territory. But on 12nm, higher performance always means a larger die, since 12nm offers only small advantages over 16nm in this regard (according to TSMC: 20% less area at 10% higher clocks, or 25% less power consumption, compared to 16FF). So nVidia could either offer 12nm gaming chips with only below-average performance gains - realizable with slightly larger die area and roughly the same power consumption - or nVidia would have to make the 12nm gaming chips significantly larger, which would yield at least a medium-sized performance boost, but at higher cost and, above all, higher power consumption. What that could look like was already laid out exhaustively in a previous speculation article.
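As a sketch of that trade-off, take GP102's 471mm² as the Pascal reference point and TSMC's claimed 12nm advantages (all numbers here are illustrative assumptions, not figures from the article):

```python
# Illustrative: die area a hypothetical 12nm successor to GP102 (471mm^2 on
# 16nm) would need for a given performance target. Assumptions: 12nm gives
# ~20% area shrink and ~10% higher clocks vs 16nm (TSMC's claim), and
# performance scales roughly linearly with unit count times clock.
GP102_AREA = 471.0   # mm^2 on 16nm
AREA_SHRINK = 0.80   # 12nm vs 16nm, same design
CLOCK_GAIN = 1.10    # 12nm vs 16nm at comparable power

def die_area_for_speedup(target_speedup: float) -> float:
    """Estimated 12nm die area to hit `target_speedup` over the 16nm chip."""
    extra_units = target_speedup / CLOCK_GAIN  # what clocks alone can't cover
    return GP102_AREA * AREA_SHRINK * extra_units

for speedup in (1.10, 1.40, 1.90):
    area = die_area_for_speedup(speedup)
    print(f"+{(speedup - 1) * 100:3.0f}% performance -> ~{area:.0f} mm^2")
# +10% -> ~377 mm^2: smaller die, but a below-average generational gain
# +40% -> ~480 mm^2: GP102-sized die, with power rising with the extra units
# +90% -> ~651 mm^2: GV100 territory - expensive and power-hungry
```

That is the dilemma in one picture: the small gain is cheap but unimpressive, and anything approaching a real generational jump pushes the die toward GV100 proportions.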

A performance doubling, or even anything close to it, is not achievable on 12nm - the graphics chips would have to be too big and too expensive, and the GV102/GA102 in particular would run into production limits (the GV100 with its 815mm² could only be manufactured at all by means of expensive special procedures). Above all, additional performance on 12nm always comes with additional power draw - really fast 12nm chips would only be possible by abandoning nVidia's first-class energy efficiency and (relatively) low power consumption. That was probably the sticking point that made nVidia back away from this path: with 12nm gaming chips there is either only a small performance increase, which would not have gone over well in general - or nVidia would have to buy a large performance increase with significant extra cost and the loss of its known energy efficiency, which would have drawn criticism just the same.
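The power side of that argument can be sketched the same way. A common rough model for dynamic power is units × clock × V², and since voltage has to rise with frequency near the top of the curve, power grows roughly with the cube of clock speed; the 25% process saving is TSMC's claim, the other numbers are illustrative assumptions:

```python
# Rough power model for a hypothetical 12nm chip chasing ~2x Pascal
# performance. Assumption: P ~ units * clock^3 on a given process (clock * V^2
# with voltage rising alongside frequency), and 12nm saves ~25% power vs 16nm
# at equal clocks (TSMC's claim).
PROCESS_POWER_FACTOR = 0.75  # 12nm vs 16nm at the same clocks

def relative_power(unit_scale: float, clock_scale: float) -> float:
    """Power relative to the 16nm original chip."""
    return PROCESS_POWER_FACTOR * unit_scale * clock_scale ** 3

# Two ways to reach ~2x performance (unit_scale * clock_scale ~ 2.0):
print(f"1.82x units at 1.10x clock: {relative_power(1.82, 1.10):.2f}x power")
print(f"1.60x units at 1.25x clock: {relative_power(1.60, 1.25):.2f}x power")
# Both come out well above 1.0x - i.e., a 2x-performance 12nm chip would
# surrender nVidia's energy-efficiency advantage, exactly the sticking point
# the article describes.
```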

Either way, the interim solution of the 12nm process does not lend itself well to new graphics chips; it was ultimately probably only a necessary evil for the GV100, to fulfill nVidia's own HPC plans or obligations. For new nVidia graphics chips after the Pascal generation, a genuinely new full production node was always going to be needed - anything else would miss either the performance or the power-consumption requirements. In addition, the lateness of the next nVidia graphics chip generation (18 months so far, and probably ~24 months in total after the release of the current generation, which is comparatively long for nVidia) suggests that the company is waiting for something to become ready at a certain point in time - like the 10nm process for large graphics chips. For smartphone SoCs, that process has been in mass production since spring 2017, but the first months are usually blocked exclusively for large orders from Apple and Samsung, and after that the new process must first mature to the point where much larger graphics chips can be produced at a meaningful yield. One year after the first corresponding SoCs is a rule of thumb that has worked well in recent years - and in the case of TSMC's 10nm process it now fits well with the second quarter of 2018.

Long story short, given these circumstances, we expect the Ampere generation on TSMC's 10nm process. After this (for nVidia) long waiting phase between two graphics card generations, and given the availability of this process - which was already predicted earlier for spring/summer 2018 - anything else would genuinely come as a surprise. Above all, using the 12nm process in spring/summer 2018 would hardly make sense if 10nm were about to be ready - nVidia (as explained above) cannot achieve the performance with 12nm that it really wants to deliver. With the Ampere generation on 10nm, however, the generally customary performance boost of almost 2x is possible. How exactly nVidia achieves this with the (supposed) HPC chip GA100 and the (probable) gaming chips GA102, GA104, GA106, GA107 & GA108 is still unknown and therefore remains in the realm of speculation. Everything is conceivable, from larger architectural changes, to simply more hardware units, to (still) higher clock rates.

2 Likes

I have argued for a while now that the GPU shortage was not due to mining but rather due to development issues with new architectures and the reduction of production in expectation of a new architecture. People like to blame miners, but the current issues with the market are due to Nvidia and AMD. Remember, the GTX 1080 has existed since 2016.

2 Likes

While I am aware GTA 5 is 4 or 5 years old now (depending on who you ask), I was not aware it has been 2 years since the 10-series launched.

Of course not, they have launched a new card every six hours since then.

5 Likes

New card my ass - just rebranding and upping specs (better production yields mean fewer flaws in chips, allowing full use of the die space) and calling it new.

Yup, here are four examples: GTX1060.

Nuff said.

1 Like

Google's using its own silicon for most of its ML needs; I wonder what others are doing, and whether the market demand gaming can generate compares with the demand the tech industry and ML can generate.

It could be that Nvidia is not even trying that hard to make better gaming chips.

Wasn't this predicted by AdoredTV when he talked about Nvidia's Titan V? I think he mentioned that the Titan V would be a rather bad card, because Nvidia would drop its support almost instantaneously.

1 Like

Sort of. He didn't really go into the question of why Volta would be uninteresting for gamers, though the reason is fairly obvious - Volta's main reason for being faster is that it's insanely larger, which would've meant lower margins for poor Nvidia. So yeah, we'll see how 10nm performs.

I actually think there is a RAM shortage on the market right now. However, this is good - after a crisis, consumers always win.

With graphics card prices this high, someone may decide to enter the market - we'll see what happens in the next 5 years.

1 Like

While I would welcome the idea of a new competitor, I highly doubt it. Building chip fabs is rather expensive, and developing GPUs is expensive as well, while the audience is small compared to the mobile chip market. I'm also not sure what the GPU market's current patent situation looks like; there might be difficulties there as well. Intel might be able to produce GPUs, but I think they'd focus on the low-end sector.
But, I’d happily be proven wrong in this case :slight_smile:

Do you think Imagination could fill this gap?

Do you mean in terms of new and superior technology? I believe that if somebody comes up with something groundbreakingly new, their technology will be bought by Nvidia (or AMD, or Google).

Well, it doesn't have to be superior, just GPUs for gamers. I thought they had some impressive tech - Apple tried to buy them, but is instead going to design its own GPU now. They're also into ray-tracing cards, so why not? But I don't know, I guess it isn't that simple.

All indicators point to Intel possibly producing a discrete GPU in the next few years. They have already announced a 14nm prototype design.

I mean, their cards have been bought so far because of altcoins and normies.

This would be why they hired Raja.

Not many people remember perhaps, but Intel has tried to enter the discrete GPU market a couple of times before.

Both Larrabee: https://en.m.wikipedia.org/wiki/Larrabee_(microarchitecture)

And the old i740: https://en.m.wikipedia.org/wiki/Intel740

Both were trash.

Maybe third time's the charm, but I doubt it. Intel can't rely on an existing foothold in this market, and it moves fast. The lumbering old behemoth is having enough trouble figuring out what to do after Sandy Bridge other than shrinking the process and fiddling around the edges, never mind entering the GPU market.

Even Larrabee was essentially a whole heap of old Pentium P54 cores (in terms of design). lol.

It would not surprise me, and at the same time I know why they would do it. It's to supplement Intel graphics; it won't be a massive market-busting card. They need a good discrete GPU to help them where they are weak ecosystem-wise. It would boost their OEM sales and compete with the Ryzen APUs.

They hired Raja for a reason LMAO

They have plenty to do, they just don't have a market for it… Computer science needs to catch up, to be honest. There is a lot in all of our modern architectures that hasn't been taken advantage of. That being said, the next process improvements could be in intelligent or heuristic handling of resources. AMD broke ground with that on Ryzen.

The biggest problem in moving the instruction set forward is maintaining backwards compatibility. If you ask me, just like with operating systems, there should be a decade mark where we just move on from certain older machines. Naturally, everything that goes up, or is born into this world, must come down and die.

It's not enough to undo billions of dollars in R&D.

NVIDIA and AMD seem to be able to find a market for their discrete GPUs just fine.

Intel is kinda like Microsoft. They dominate the market they are entrenched in, but every time they try to get out of it, they suck.

Just like Intel ate the lunch of the big iron manufacturers from underneath in the '90s and '00s, they're going to have their lunch eaten by ARM.

Windows isn't the bulk of the market any more. Sure, it might be the bulk of the desktop market, but smartphones and tablets are becoming "good enough" to do all the things most people want to do, which means no more dependence on Intel for x86 compatibility.

They need to pull their finger out and get some competitive designs with ARM in the very low power space real soon. More than that, they need to beat ARM handily on power:performance and price otherwise none of the phone/tablet vendors are going to bother.

The general population's dependency on Microsoft isn't going to save them this time around, because Microsoft has totally missed the boat on mobile.