GPU Wars: Enter Maxwell, Nvidia's Successor To Kepler

Another example:

Advanced Anti-Aliasing features for MSAA/EQAA optimizations

Is that not graphics side?

all while realizing that the demo as a whole is only alpha

Am I the only one that said "HOLY FUCK" several times while reading this?

 

Yes. That is a graphics side thing. However, in terms of what I've seen and heard from those who have more hardware and software experience in the field, they say that the performance increase isn't that big in graphics, and that it's mostly going to help parallelize the CPU load, rather than being a big deal in the graphics department.

Although no longer being tied to DirectX also means we may see an API and hardware designed for performance, rather than hardware designed around DirectX, that's a benefit more in the long term than the short term. The other advantage for Mantle is freedom from Windows, meaning Linux will be able to enjoy AAA titles, and ports will only involve the Windows-side code rather than the graphics-API-side code. That means a whole lot less rewriting of graphics engine code for each OS, and optimizations happen for both at the same time: less money spent on development, which allows for cheaper or more profitable games, or possibly more content in them. It also means much less code maintenance, so bugs get fixed much more easily on both OSes, rather than just one.

Anti-aliasing optimizations are nice, but that's not really why Mantle exists. It's here for many reasons, and graphics improvement isn't at the top of the list. Making life easier for programmers and game devs, spreading the CPU load over more cores more evenly, decreasing code size, freedom from MS Windows... those are the big things Mantle offers. Graphics improvements are more the cherry on the sundae. Sure, the cherry is nice, but I'd rather have the whole sundae than just the cherry on top.
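To picture the "spreading the CPU load" part, here's a rough toy sketch in Python of recording draw work on several threads at once. To be clear, this is not Mantle's actual API; every class and function name below is made up purely for illustration.

    # Toy sketch of multi-threaded draw-call recording. Not Mantle's API;
    # CommandList and record_draw are invented names for illustration only.
    from concurrent.futures import ThreadPoolExecutor

    class CommandList:
        def __init__(self):
            self.commands = []

        def record_draw(self, obj):
            # Recording a draw is cheap CPU work; each thread handles its own share.
            self.commands.append(("draw", obj))

    def record_chunk(objects):
        cl = CommandList()
        for obj in objects:
            cl.record_draw(obj)
        return cl

    scene = ["mesh_%d" % i for i in range(10000)]
    chunks = [scene[i::4] for i in range(4)]  # split the scene across 4 worker threads

    with ThreadPoolExecutor(max_workers=4) as pool:
        command_lists = list(pool.map(record_chunk, chunks))

    # One thread then hands the finished lists to the GPU queue in order, instead of
    # a single thread doing every bit of the recording work by itself.
    print(sum(len(cl.commands) for cl in command_lists), "draws recorded on 4 threads")

The point isn't the Python itself, it's the shape: work that used to sit on one core gets split across several, and only the final submission stays serial.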

yes.

Disagreed. A GPU is often highly dependent on the CPU. But now we'll see more GPU independence.

Developers may not even have to do anything, really. If that CPU has enough computational horsepower, we might see it figure out on its own what needs to be done, and manage all of that via software stored onboard in read-only, non-volatile memory.

That could allow the GPU to do a lot of things on its own, and may eventually lead towards a pure-Nvidia system. Think of a NUC that uses an Nvidia ARM processor and an Nvidia graphics solution, so it's almost like the 8-core PS4 or X-Bone processor, but as an Nvidia solution. That could be a great home console alternative. Imagine a home console with the horsepower of a GTX 780, plus 8 ARM 64-bit cores running at 2.4GHz. That's what we might see in the future.

Although this might be somewhat obscure, Nvidia at one point did produce chipsets with onboard graphics for AMD systems (for their motherboards). AMD eventually stopped allowing this. But it's a reminder that Nvidia's IP (Intellectual Property) extends beyond just graphics cards. They have ARM IP and chipset IP. To think they plan on letting that IP just sit, neither licensing it nor using it in some product they have in the works, doesn't make much business sense. And knowing Nvidia, they've shown all too well that they've got plenty of "business smarts", perhaps too much for their own good.

Next time, don't say that Mantle is not going to optimize the graphics side of things, because it does. Don't judge an API from one game and rumors, because you could end up stating as fact something that isn't evidently true, and I'll have to come and correct you.

We all want that. But it's not going to happen overnight.

We need monitor manufacturers to pitch in. AMD is leading the charge to make this free. And over time, if we incorporate eDP (embedded DisplayPort) technology into our monitors, we'll be able to use VBLANK without scaler chips. However, getting there will take either new cabling technologies, new standards, new scaler chips, or new monitors...

But this won't happen overnight, sorry to say. We need companies to come up with an open standard. AMD has always led the open charge in the corporate world, bringing in new technologies for all companies to use, including the x64 architecture (on which Intel pays no royalties, even though it's in all their products).

AMD wants to help, but they can't do it alone. So the best way to get this fixed isn't to blame Nvidia; it's to encourage other companies (like Samsung, which makes monitor panels) to come up with solutions. Tweet at Samsung, tell them you want this to happen, and tell them you'll buy their monitors if they include this feature. That way, they might actually do something about it.

If you demand a product and are willing to pay for it, companies will pour money into research and come up with a product to take the money you're willing to spend on it. And that's where consumers can help. Give companies feedback and tell them to adopt AMD's free standards, so we can all enjoy the fruits of this without paying for premium-priced products that don't give back much value.

I wholeheartedly agree.

OK. I'll be more exact in my wording. Next time I'll say "performance benefits using Mantle are rumored to be very small on the purely graphical side of things (it can offer some gains in limited scenarios there), but its real beauty comes from the wide range of other benefits it offers".

No problem. Also, great summary by the way.

Also worth noting is that higher voltages cause more electron leakage. Voltage is the rough equivalent of water pressure, so as transistor density goes up while voltage stays high with standard copper and silicon-based technologies, we end up creating more heat. This could (possibly) be resolved using graphene, stanene, and other materials, but we're not there yet. (The idea of analog computers, quantum computers, plasmon computers, and optical computers is also intriguing, but we're still a long way from those technologies being commercially viable.)
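To put a rough number on the voltage point: dynamic switching power scales roughly with C·V²·f. A quick back-of-the-envelope in Python, where the capacitance and frequency are arbitrary relative units and only the ratios mean anything:

    # Dynamic power scales roughly as P ~ C * V^2 * f.
    # C and f are arbitrary relative units; only the ratios are meaningful.
    C = 1.0  # relative switched capacitance
    f = 1.0  # relative clock frequency

    for v in (1.00, 1.10, 1.20):
        p = C * v ** 2 * f
        print("V = %.2f -> relative dynamic power = %.2f" % (v, p))

    # A 10% voltage bump costs about 21% more switching power, a 20% bump about 44%,
    # and leakage current climbs with voltage on top of that.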

The problem with the R9 290(X) is that it has high transistor density, but its voltage is too high for that density. Nvidia took a different approach, and it paid off for them handsomely. What AMD should realize is that decreasing transistor density to cut heat output isn't a bad idea. A bigger die costs more, but it's also more reliable and cooler, and thus runs quieter.

Although we might have better coolers in the future, air cooling is pretty good, good enough that even a Hyper 212 EVO with two Noctua fans can compete in thermals (in most cases) with AIO water coolers. Maybe we'll have more water-cooled GPUs in the future, maybe not, but air cooling is still cheap and effective and will be around for quite some time (unless we come up with something cheaper, more efficient, and easier to design and implement).

In the future it might be, but in the 800 series that small CPU won't be able to drive the GPU. Nvidia is hiding something proprietary. Also, imagine an AMD APU together with an Nvidia APU :D

Thanks for that.

I do agree AMD needs to wise up a little bit on the hardware side of things. We know how powerful the 290 (non-X) is, and it launched at $400. We all thought it was the game-changer card, and it was, for two weeks, until they jacked the prices up. Just think if they had invested half of that price increase into making a larger die with lower heat output and the same or potentially better performance.

I will hand it to Nvidia. The GK110 core is PHENOMENAL all around.

I'm excited for graphene coolers, or at least a graphene plate that goes between the die and the actual cooler. Wasn't there an article about graphene where, in certain scenarios and implementations, it can transfer heat much, MUCH more easily in one direction than in the other? TMK that would help in the fight against thermals.

Yup, air cooling is the simplest and easiest to implement, and it does a pretty darn good job. (My FX-6350 @ 4.7 never sees 40C under real-world load.)

Well, Nvidia APU and AMD APU? What? I don't think we'll see two completely different CPUs in any mainstream (thus, not experimental or tech demo) product come to market anytime soon. (I've seen demos and stories about it done in the past, but never as a real-world product.)

The GTX 800 series probably won't have a CPU that could replace a desktop CPU. But the ARM processor might have enough power that brief hiccups from the main CPU won't be a big deal for GPU performance. It probably is there more to take care of managing which memory goes where, how it's allocated, and what memory should be cached in RAM or VRAM (depending on use, frequency of access, etc.). So I think the whole ARM-processor-on-a-GPU idea is more of Nvidia's way of responding to AMD's HSA (Heterogeneous System Architecture), so that Nvidia can make GPU memory and main system memory (RAM) work together as one.
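To make that concrete, here's a toy placement heuristic in Python for the kind of decision such a controller could make. This is pure speculation on my part, not anything Nvidia has described; the function and its thresholds are invented for illustration.

    # Made-up heuristic: keep frequently-touched buffers that fit in VRAM,
    # park cold or oversized ones in system RAM. Purely illustrative.
    def place_buffer(gpu_accesses_per_frame, size_mb, vram_free_mb):
        hot = gpu_accesses_per_frame >= 1      # touched at least once per frame
        fits = size_mb <= vram_free_mb
        return "VRAM" if (hot and fits) else "RAM"

    print(place_buffer(gpu_accesses_per_frame=60,  size_mb=256, vram_free_mb=1024))  # VRAM
    print(place_buffer(gpu_accesses_per_frame=0.1, size_mb=512, vram_free_mb=1024))  # RAM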

(As for Nvidia hiding something proprietary... they're always doing that. You could say that at any point in Nvidia's history over the past 5 years and you'd be right. Regarding the ARM processor, of course they've got something hidden, but it's not like they're going to tell us before they create a product and patent the h*ll out of the idea, stifling innovation unless other companies are willing to pay extortionate amounts of cash or risk multi-million-dollar lawsuits a la Apple vs. Samsung.)

 It probably is there more to take care of managing which memory goes where, how it's allocated, what memory should be cached in RAM or VRAM

I wonder if this will have anything to do with Volta's stacked DRAM. I hope it does, as I would hate to see Volta launch on a v1.0 of such an implementation, if you get what I'm saying.

Thanks.

Sadly, graphene can't be used for coolers in the traditional sense, because in pure form, at least, it's fairly unstable (unless you dope it or cap its edges with certain molecules to keep it from reacting with compounds in the air).

I think that's where we need new materials. Graphene is nice, but I think if we can use it to replace copper inside CPU dies, it'll be better than if we use it for coolers. Heat transfer is always good, but I have a different idea on how that could be done.

One idea I've been thinking about is gold plating. Gold is well known for transferring heat well (among the elements it's third, right behind silver and copper), yet it's very difficult to get it to react, so oxidation isn't much of a risk compared to pure silver or pure copper. A nanometer- (or even micrometer-) thick electroplated coating could be great inside waterblocks or on the outer fins of heatsinks for CPU coolers and GPU coolers. It also gives great aesthetics: imagine a gold-plated aluminum-fin heatsink on a black-and-gold channel motherboard (I'm looking at you, Z87-WS), for a black and gold theme all around.

We could get some really nice-looking heatsinks that would never oxidize and would still perform very well. Also worth noting: heat transfer happens where the solid material touches the air, so having a good material at that boundary matters a lot for getting heat out of a heatsink. And since gold plating can be so thin, the amount of gold needed would cost only a few cents at most.

Also worth noting is that, besides aesthetic improvements, the whole heatsink could be gold-plated except the copper bottom-plate that mounts on top of the CPU.

In terms of thermal paste, if we could get stanene (tin in two-dimensional form, much like graphene is for carbon), we could use it in a paste to help increase thermal conductivity. And given how much surface area a stack of two-dimensional layers has, the thermal conductivity could be really amazing. I also recently read something about a three-dimensional topological insulator (that's what graphene and stanene are), but I don't remember what the material was or where the article is. It would be wonderful inside computers for making transistors, assuming we can make one with the right properties at room temperature that's chemically stable with the elements those components are typically exposed to. =)

Well, maybe. Stacked DRAM could lead to greater RAM capacities and bandwidth, yes. Because if you can stack 12 layers of RAM on top of each other without too much of a heat output from that, you can increase the amount of VRAM on said chip and the bandwidth as well.

And that's all nice and dandy, and yes, this ARM processor might be their first one to control that. But another, possibly more interesting idea is this: imagine if the DRAM and the GPU were all in one place, so you no longer had GDDR5 and DDR3 separately. You'd have GDDR5 on your GPU, and your CPU would access it as well, using the PCI-E 3.0 x16 lanes to facilitate the memory transfer.

You could end up with memory speeds and capacities better than DDR3's, and it would be one less component to upgrade. You'd no longer need an onboard memory controller like in Ivy Bridge or Haswell; the GPU would take care of that for you.

And what you mentioned about the Maxwell ARM processor doing memory management for stacked DRAM in Volta is nothing short of insightful and brilliant. Using the 28nm GTX 750 Ti to test the Maxwell architecture, and trying out the Maxwell ARM processor for memory management before it's sent on its merry way to manage higher amounts of VRAM at higher speeds once we get to stacked DRAM, does seem to be what Nvidia is doing.

Use a lower-end product to get your fingers wet and see if it works. If customer trials work out, then roll it out. Otherwise, don't tarnish the company name on a product that isn't ready, or disable a feature that isn't ready to be rolled out until the bugs get ironed out first. Good call. =)

It does sound nice. On paper.

On the topic of gold fins...

I'm no expert on thermodynamics, but I'd like to think I know more than the average Joe about this stuff. I recall reading somewhere that the reason aluminum is used so commonly in laptop heatsinks is its great ability to transfer heat to the air, whereas copper transfers heat within itself very well (which is why heatpipes exist). The normal thermal conductivity rating is, I think, for heat moving through the material itself and not into the air, and I believe aluminum is better than copper at the latter. How does gold hold up? Is it the same story, if that claim is actually true?

You mention surface area. This is one of the reasons I think my cooler is so good: the Xigmatek longassnamecooler2 has a textured coating on it that allows for more surface area and better thermal dissipation.

As for graphene not working as a TIM... maybe? Is there a way to use it under the lid of a die, against the chip, so it can't oxidize, all while being surrounded by a non-conductive material that could also help with heat? I don't know... this topic really interests me.

 

As much as I want faster CPU RAM, I'm not sold on it. CPUs are doing EVERYTHING in the system (you know this, I'm just explaining it for others reading), so they need responsiveness. They're dodging left and right, making fast decisions within a couple of clock cycles. I don't think CPUs need the bandwidth of, say, GDDR5 with the latency it adds, seeing as GDDR5 is perfect for video: high bandwidth, where reaction times matter less (comparatively speaking).

^^ And yes, it is highly dependent on the program running. Some need more bandwidth than others, and others excel with lower latencies.

I may be wrong. I want DDR4 to prove me wrong on this. Let technology advance for the better.
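For anyone wondering what the bandwidth-versus-latency trade-off looks like in numbers, here's a quick Python illustration. The latency and bandwidth figures are plausible placeholders, not measured specs for any real DDR3 or GDDR5 part:

    # Illustrative only: the latency/bandwidth numbers below are placeholders, not specs.
    def access_time_ns(bytes_moved, latency_ns, bandwidth_gb_s):
        # 1 GB/s moves about 1 byte per nanosecond, so bytes / (GB/s) is already in ns.
        return latency_ns + bytes_moved / bandwidth_gb_s

    memories = {
        "DDR3-ish (low latency)":     dict(latency_ns=50,  bandwidth_gb_s=25),
        "GDDR5-ish (high bandwidth)": dict(latency_ns=120, bandwidth_gb_s=250),
    }

    for label, mem in memories.items():
        small = access_time_ns(64, **mem)               # one cache line, CPU-style access
        big = access_time_ns(8 * 1024 * 1024, **mem)    # an 8 MB texture, GPU-style access
        print("%s: 64 B -> %.0f ns, 8 MB -> %.0f us" % (label, small, big / 1000))

    # Small random accesses are dominated by latency (what a CPU cares about);
    # big streaming transfers are dominated by bandwidth (what a GPU cares about).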

Use a lower-end product to get your fingers wet and see if it works. If customer trials work out, then roll it out. Otherwise, don't tarnish the company name on a product that isn't ready, or disable a feature that isn't ready to be rolled out until the bugs get ironed out first. Good call. =)

Yep! The only thing I'd be sour on is the naming of said cards and their respective core configs vs. generation. Oh God, not again... *points finger at AMD* lol. But it's only a name, and I guess I shouldn't care TOO much about it.

http://www.engineeringtoolbox.com/thermal-conductivity-d_429.html

Aluminum is around 205 W/(m·K) in that chart, gold is about 310, copper is 401, and silver is 429. However, silver reacts with sulphur, which is common in city air, which is why it tarnishes (turning dark). That's why I don't think silver would work (at least in pure elemental form). Gold conducts better than aluminum but is actually less reactive than aluminum, and it looks very nice. If we could increase aluminum's surface area (by roughing up the surface) and then apply a very thin coating of gold, we might see slight heat dissipation gains.
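To show what those numbers mean in practice, here's a quick Fourier's-law comparison (Q = k·A·ΔT/t) in Python. The fin geometry and temperature drop are made up; the conductivities are the ones from the chart, and only the relative ranking is the point:

    # Steady-state conduction through the same hypothetical fin: Q = k * A * dT / t.
    # Geometry and dT are invented; conductivities (W/(m*K)) come from the chart above.
    def heat_flow_watts(k, area_m2=0.0004, thickness_m=0.001, delta_t_k=1.0):
        return k * area_m2 * delta_t_k / thickness_m

    for name, k in (("aluminum", 205), ("gold", 310), ("copper", 401), ("silver", 429)):
        print("%-8s k=%3d -> %5.1f W per 1 K drop through the same fin" % (name, k, heat_flow_watts(k)))

    # Bulk gold conducts roughly 50% better than aluminum (copper about 2x better), but a
    # plating only nanometers thick adds almost no resistance either way, which is why the
    # gold skin is more about corrosion resistance and looks than a big thermal win.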

You can also check Wikipedia's chart here:

https://en.wikipedia.org/wiki/List_of_thermal_conductivities

In their chart, diamond, carbon nanotubes, and graphene would work really well. (Don't expect to find Helium II in your heatsink anytime soon. That's the superfluid phase of liquid helium, and unless you trip over pots of gold and have to wade through seas of 100-dollar bills to get out of the house in the morning, it's not likely you could afford to cool your system with liquid helium. Maybe LN2, but not liquid helium.)

More info on heatsinks here (from Wikipedia):

https://en.wikipedia.org/wiki/Heat_sink