CPU Size

Hello,

I was wondering: why do processor manufacturers try to make the die size as small as possible? Why not make the CPU bigger, which would make for much faster processors and easier cooling? It's not like we don't have extra space in a desktop case anyway.

Thanks, baddogg1231

It's much more economical to do the same work with fewer resources, so making smaller CPUs is simply more efficient. Sure, the heat gets concentrated into a smaller area, but that's a small trade-off, all things considered.

To reduce the cost of manufacturing.

I thought smaller size meant it was slightly faster as well?

Bigger CPU: more room to work with, ergo faster CPUs.

Smaller CPU: less room to work with, but it takes fewer resources to make.

Not to mention the max TDP.

It's the number of transistors that makes it faster: fitting a greater number of smaller transistors into any given space. Though I guess what the OP was asking amounts to: bigger CPU = more transistors.

Edit: And I don't remember my school physics classes, but does a smaller manufacturing process also offer less resistance?

So then, a smaller manufacturing process is much more efficient than simply adding more transistors, both in a computational/physics context and in a business context.

Very easy answer: fabricating a single silicon plate (wafer) costs a roughly fixed amount of money. The more processors you can cut out of that wafer, the more profit you can make, and the bigger the processors, the fewer you can cut out of a single wafer. AMD processors are much bigger than Intel processors yet sell for a lower price, just like AMD GPUs are much bigger than nVidia GPUs. Intel and nVidia therefore make much more profit per chip than AMD and can invest more in smaller lithography technologies, which make the processors even smaller, and so on.
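
To put some rough numbers on that (purely illustrative: the wafer cost and die sizes below are assumptions, not real foundry figures), here is a quick calculation with the standard dies-per-wafer approximation:

```c
/* Rough dies-per-wafer arithmetic. The wafer cost and die areas below are
   made-up illustrative numbers, not real foundry figures. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double wafer_cost = 5000.0;        /* assumed fixed cost per 300 mm wafer, USD */
    double wafer_diameter = 300.0;     /* mm */
    double die_areas[] = { 100.0, 200.0, 400.0 };   /* candidate die sizes, mm^2 */

    for (int i = 0; i < 3; i++) {
        double a = die_areas[i];
        double r = wafer_diameter / 2.0;
        /* standard approximation: wafer area / die area, minus an edge-loss
           term proportional to the wafer circumference */
        double dies = (PI * r * r) / a - (PI * wafer_diameter) / sqrt(2.0 * a);
        printf("%.0f mm^2 die: ~%.0f dies per wafer, ~%.2f USD per die\n",
               a, dies, wafer_cost / dies);
    }
    return 0;
}
```

And that's before yield is taken into account: a single defect is more likely to land on, and kill, a big die than a small one, so the real cost gap per good die is even larger.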

Larger processor lithographies can actually reach higher clock speeds, a longer lifespan, higher operational reliability, and fewer quantum side effects, at the expense of using more power. Modern processors compete on extended functionality, branch prediction and caching to improve performance, not so much on clock speed anymore. For many professional computing workloads, those "performance tricks" often don't work out as well as they do for the consumer market, so processors for professional heavy-duty use often have a bigger lithography, quite apart from the fact that they also often have more processing cores on the CPU or GPU. Relative to the size of the silicon they should cost many times more, but they don't: from Intel and nVidia they actually often cost less than consumer-grade CPUs and GPUs. So it's mainly consumers who pay for the research Intel and nVidia do, and yet consumers have the least interest in more performance, because their desire to upgrade hardware is obviously decreasing enormously, which means their old systems are still doing more than what they need.

So basically, fewer resources = lower cost. But is that cost-to-resource ratio really that big? I mean, a few nanometers can't cause a huge price drop, can they?

No, it's not, per se. RISC is inherently faster than CISC, yet it has only a fraction of the number of transistors of CISC. Modern CISC CPUs have so many transistors on so many levels that the whole litho is getting really messy and there are a lot of quantum side effects. If they continue like this, most of the processing power of the chip will have to be used to correct the errors that occur in the chip due to those quantum side effects. Once they go below 8-9 nm lithography, the number of errors will rise exponentially.

I understood that, but I have no response; there's the obvious question of all manner of directions this can lead. Is RISC sufficient for average consumer hardware use? I'm under the impression that most of our systems are CISC. They were adding additional instruction set extensions to some Haswell chips, like TSX, and TSX is supposed to increase the efficiency of certain multi-threaded tasks.
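
For what it's worth, here is a minimal sketch of the kind of lock-elision pattern that TSX's RTM instructions make possible (assumptions: an RTM-capable CPU, gcc/clang invoked with -mrtm, and purely illustrative variable names):

```c
/* Minimal lock-elision-style sketch using Intel TSX/RTM intrinsics.
   Assumes RTM-capable hardware (real code would check CPUID first) and a
   compiler invoked with -mrtm; the counter and lock names are illustrative. */
#include <immintrin.h>
#include <pthread.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

void increment(void)
{
    unsigned status = _xbegin();         /* try to start a hardware transaction */
    if (status == _XBEGIN_STARTED) {
        shared_counter++;                /* speculative update, no lock taken */
        _xend();                         /* commits unless another core conflicted */
    } else {
        /* transaction aborted (data conflict, capacity overflow, ...):
           fall back to an ordinary mutex */
        pthread_mutex_lock(&fallback_lock);
        shared_counter++;
        pthread_mutex_unlock(&fallback_lock);
    }
}
```

The point is that threads touching disjoint data commit their transactions without ever serialising on the lock; only genuinely conflicting accesses pay the synchronisation cost.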

Yes, RISC chips are everywhere already (ARM, PowerPC, well... in short, anything but AMD and Intel x86 CPUs, lol), and they work extremely well and cost very little in comparison to CISC.

For instance, an IBM PowerPC 8 chip has 4 threads per core and a whole lot fewer transistors, plus RISC chips bottleneck a lot less: they have multiple very low-stage-count pipelines that are process dependent, the arithmetic and logic instructions have no access to memory, and they have separate floating-point registers. Although they are called RISC, they actually have more than a couple of hundred instructions.
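
To make the "arithmetic and logic instructions have no access to memory" point concrete, here is the same C statement with schematic lowerings for a load/store RISC machine and a register-memory CISC machine (the assembly in the comments is hand-written for illustration, not compiler output):

```c
/* Illustration of the load/store property: on a RISC machine, arithmetic
   works only on registers, so a memory operand needs explicit load/store
   instructions, while a register-memory CISC ISA (x86) can fold the memory
   access into the add itself. The assembly below is schematic. */
long total;

void add_to_total(long x)
{
    total += x;
    /* Load/store RISC (e.g. POWER, ARM), schematically:
         load   r1, [total]     ; memory touched only by load/store
         add    r1, r1, r2      ; arithmetic is register-to-register
         store  r1, [total]
       Register-memory CISC (x86-64), schematically:
         add    [total], rdi    ; one instruction reads and writes memory
    */
}
```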

CISC processors: Intel Haswell has a 14-stage pipeline (not the record, that would be the Pentium D with its 21-stage pipeline, which is part of why that chip was such a disaster), but the tricks Intel uses to make the processor faster always come down to the same thing: a longer pipeline, until it doesn't work anymore and they have to redesign the core anyway. Ivy Bridge has an 11-stage pipeline, Haswell now 14... Also, arithmetic and logic instructions have memory access in CISC chips, and although there are now separate FP registers, they don't pipeline in parallel like in RISC chips. On top of that, the maximum number of threads per core is 2, and that already causes congestion issues, and the architecture doesn't scale as well economically: adding cores to a RISC chip is easy, while adding cores to a CISC chip makes costs go up exponentially.
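
A back-of-the-envelope sketch of that pipeline-length trade-off (the branch frequency and misprediction rate below are assumed, typical-looking values, not measurements of any real chip): every mispredicted branch flushes the pipeline, and the refill cost grows with the number of stages.

```c
/* Back-of-the-envelope cost of deeper pipelines under branch mispredictions.
   branch_freq and mispredict_rate are assumed illustrative values. */
#include <stdio.h>

int main(void)
{
    double branch_freq = 0.20;       /* assumed: ~1 in 5 instructions is a branch */
    double mispredict_rate = 0.05;   /* assumed: predictor is right 95% of the time */
    int depths[] = { 11, 14, 21 };   /* the stage counts mentioned above */

    for (int i = 0; i < 3; i++) {
        int flush_penalty = depths[i];    /* crude model: refill cost ~ pipeline depth */
        double cpi = 1.0 + branch_freq * mispredict_rate * flush_penalty;
        printf("%2d stages: effective CPI ~ %.2f\n", depths[i], cpi);
    }
    return 0;
}
```

So each extra stage only pays off if the clock-speed gain outweighs the extra cycles lost to flushes, which is exactly the "until it doesn't work anymore" point.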

The biggest reason why RISC never broke through on desktops is that the market was locked in to operating systems that just don't scale well, DOS and Windows NT. That's not an issue anymore with GNU/Linux: you can just add cheap RISC processors and they will automatically scale well, and that changes the entire CPU landscape. For the relatively meager sales of consumer CPUs being realized nowadays, CPU fabrication has become really expensive, not just for AMD but also for Intel. RISC is an obvious solution to the problem, because it can drastically reduce the parts count, and you can quickly adapt the RISC architecture to whatever kind of chips you have a buyer for, whether those are server chips, phone chips, NAS chips, etc.

Tianhe uses CISC processors and costs many times more than Titan, even though it's only about twice as fast. That's just another example of the usefulness of CISC scalability dropping off as the scale increases. The most efficient HPCs at the moment are by far the IBM Blue Gene series, with PowerPC-based chips. They are more functional than Titan, very power efficient, and scale just as well.

The death of CISC has been announced so many times, but the Microsoft handicap hasn't been cured yet, so for the next couple of years CISC will still be prevalent on the Windows platform. I do think GNU/Linux machines will slowly shift their focus to other, smaller devices with RISC chips as those grow more powerful. But on the traditional PC, RISC will never break through (it's not like it hasn't been tried before).

That explains a lot more about the popularity of Linux, too xD

Thanks, man.

Well, I was oversimplifying quite a bit; it's not as black and white as silicon size and number of transistors. The basic idea is that a CPU architecture only becomes useful with market volume and application software. For instance, the most successful implementation of RISC in terms of volume at the moment is the family of ARM-licensed designs, and the volume of chips is so high that the price becomes really low, with chips costing only a few dollars to a few tens of dollars to make, because those designs fit so many applications, from embedded designs to a lot of portable devices.

The PowerPC architecture is also quite popular: it is the most used architecture for systems implementations, especially in big data applications, and it was very popular in the last decade because of the Xbox 360 (Microsoft's implementation of PowerPC) and because of Cell (the PowerPC implementation from the joint venture with Sony). Both of those console implementations are now being replaced with CISC solutions without ever having been used to their full potential: even though the Xenon (the Xbox PowerPC implementation) is actually a 3-core, dual-hyperthreading, so 6-thread CPU, the software for that platform was never fully optimised to get the best performance out of it, and the same goes for Cell. That again shows how hard it is to implement a RISC architecture successfully, and it also means that the number of PowerPC-based CPUs being produced is now drastically reduced, which will make them more expensive to produce and develop again, etc.

And that has been going on since the 80s, basically. First there was mainly RISC, with RISC CPUs already being superscalar, simultaneously multithreading and pipelining in the 70s, but the breakthrough of the personal computer came with scalar, non-multithreading CISC CPUs, and those made Intel big. At the same time, IBM was already building superscalar, out-of-order CPUs with integrated functional units, at a time when, on the Intel side, you needed to buy a separate mathematical coprocessor (there was an empty socket on PC motherboards for 8086 and 80286 computers, and you could go out, buy an 8087 or 80287 chip and plug it in yourself; the coprocessor wasn't integrated into the CPU until the 80386DX, and even then it wasn't deemed essential, because the 80386 and 80486 were also offered as SX versions without a mathematical coprocessor. That was enough for personal computing at the time, and no one implemented a RISC architecture there).

The race between CISC and RISC became relevant again when the Intel Pentium turned out to be a superscalar design, and interest in RISC development flared up, because a lot of the advanced technologies were already there in RISC while Intel was only beginning to implement them in CISC; but there was no software platform in personal computing that could leverage the benefits of those technologies on RISC CPUs. At the same time, Intel came up with the MMX extensions, which are an offshoot of the vector SIMD designs of the early seventies, and IBM and Motorola almost immediately started extending their instruction sets to keep up with MMX, and later with the SSE extensions. So RISC and CISC designs grew closer together, with CISC implementing RISC technologies and RISC implementing CISC-style instruction sets.

With the two main personal computing platforms both being CISC (Intel x86 and Motorola 68xxx), there was no place for RISC in personal computing until the nineties, when Intel was gradually implementing RISC CPU technologies in its CISC processors. Those technologies were faster and easier to implement on RISC, because by then they had been available on RISC platforms for over a decade, and with the breakthrough of GNU/Linux the software could be had cheaper as well, so the platform became viable. When Jobs came back to Apple and brought NeXT with him, with software based on a UNIX-like BSD platform, the PowerPC was suddenly a viable alternative on the PC platform, and the RISC CPUs for that platform became cheaper because of the volume. But Apple was still pretty niche, so it was impossible to produce the PowerPC G series in such quantities that the price would drop far below the production cost of CISC processors; it did manage to drive the price of CISC CPUs down nonetheless, partly also because Intel got competition from AMD.

By the end of the nineties, Intel had succeeded in implementing functional units, pipelining, hyperthreading and out-of-order execution in its superscalar Pentium chips, they had the volume, and Windows had become the prevalent platform during the nineties, so RISC processors were basically pushed out of the personal computing market. IBM had gambled against Microsoft and lost. They knew at the beginning of the 90s that they needed a new operating system that could leverage RISC technologies where MS-DOS couldn't, and they made a deal with Microsoft: IBM would design the new operating system and make the first two versions of it, which became OS/2 1.0 and 2.0, and Microsoft would develop the third version. But IBM had by then lost its market domination to the IBM clones (Compaq, HP, and the advent of the cheap Asian clones), the price of PC CPUs dropped because of the competition between Intel and AMD, and Microsoft played a treacherous role in the agreement (IBM ended up developing OS/2 3.0 and 4.0 as well; Microsoft didn't develop those either). So there was no interest at IBM anymore in pushing further development of OS/2, especially since GNU/Linux was lurking and the reception of OS/2 in the business market was only lukewarm; GNU/Linux gave UNIX a new lease of life, and IBM couldn't push OS/2 as a universal operating system.

So Microsoft took advantage of its contractual development cycle with IBM to turn OS/2 3.0 into Windows NT, taking the operating system IBM had developed and tweaking it to look more like the Windows shell on MS-DOS, with RISC platform support relegated to an afterthought. And Microsoft succeeded where IBM had failed, because they had no hardware ball-and-chain like IBM did: they did away with the notion of software being an accessory to hardware, made a new business model out of mass sales of software without hardware, and simply rode the success of Intel and AMD as cheap, mass-produced CISC platforms, without having to invest in expensive hardware development, and without even having had to invest in the development of the operating system itself, because IBM did that for them. That left them with huge financial means for extensive marketing, which obviously paid off.

Meanwhile, on the CISC side, AMD took Motorola's spot as the competitor to Intel, which ended with Motorola spinning off its CPU business into Freescale. Freescale then almost immediately gave up the CISC platform and switched back to RISC development, with great success in embedded systems, developing the now almost ubiquitous PowerPC-architecture-based CPUs that control most modern cars (yes, most cars from the late 90s onwards are powered by the Motorola/Freescale implementation of the PowerPC architecture, so PowerPC CPU technology is certainly not a big-server-only affair; it has proven to be the number one CPU architecture in terms of reliability and availability, and has played a huge role in the evolution of safety, performance, comfort and the reduction of the energy consumption/environmental footprint of cars).

So, as mentioned, the RISC architecture had a short comeback on the PC platform after Steve Jobs returned to Apple, but it was not a success, partly because Intel and AMD kept pushing more and more RISC technologies into their CISC designs. They had a far larger R&D budget thanks to the volume of the booming PC market, and the prevalent platform was Windows, which was unsuitable for getting good performance out of RISC platforms. Gradually, Microsoft dropped Windows NT support for the secondary platforms, making things even worse.

IBM then made another really bad business decision by holding on to AIX and not immediately switching to GNU/Linux in the nineties, because at that time the shareholders were not sold on the idea of selling only hardware without also selling software licenses, and they still saw themselves as a competitor to Microsoft and Novell. Which they were, in the business arena, with applications like Lotus Notes, which was based on UNIX applications from the 70s and remained the leading business communications application until well into the 21st century. As it turns out, Notes also became the very instrument of revenge against Microsoft: Microsoft invested huge amounts of money over the years in developing Outlook/OneNote/SharePoint into a worthy alternative to Lotus Notes, and by the time they finally succeeded, the NT platform it is developed for (the one they pretty much stole from IBM) is de facto end-of-life, most business environments are switching to GNU/Linux, and Outlook is rapidly becoming less relevant. It has only recently made the switch to a 21st-century cloud concept, and Microsoft hugely lacks know-how in that field, because until 2012 they were spending all their R&D ammo on 70s-concept software.

Fast forward another decade, and almost all of the RISC technologies developed in the 70s have now been implemented in CISC architectures: superscalar design, simultaneous multithreading, pipelining, integration of functional units, out-of-order execution... and the focus of the arms race has shifted to energy consumption and platform efficiency. In practice, almost half of all big data server infrastructure is now IBM PowerPC based, and PowerPC infrastructure shows an impressive record of reliability and availability, with a reliability score on the same operating system more than 4 times higher than an x86 CISC platform. That is a huge benefit in big data applications, and it pretty much means that CISC is not an option when it really counts, when ANY amount of downtime costs huge amounts of money or there are heavy liabilities at stake. IBM is also dropping AIX, focusing on GNU/Linux, and constantly working on adapting the PowerPC platform for higher GNU/Linux performance. They even have a "PowerLinux" line of products, based on the current POWER7/POWER7+ generation, that shows great power efficiency, about the same or even slightly better performance than x86-based solutions, lower cost (especially on the software side), and, as mentioned before, several times less downtime in a GNU/Linux server environment. Intel has failed to break through in the RISC world; Itanium is pretty much dead and was born dead, so they have switched strategy and are focusing on scalable CISC systems based on the Xeon Phi, but they haven't produced an objectively proven, long-term real-world solution with that platform yet. nVidia, on the other hand, has come up with Kepler, which is also RISC-based, and the fact that Cray has successfully implemented it means it's certainly a possible future competitor for IBM's Blue Gene. However, energy efficient as Kepler might be, the absolute record for energy efficiency is solidly held by the IBM PowerPC platform, at almost 2100 MFLOPS per Watt, which is huge. Also, almost half of the Top500 supercomputers are IBM systems, and in systems in general IBM is growing in market share and now holds about half of the world's systems market, which is also huge.

As always, it takes a very long time for HPC technologies to trickle down to PCs, and there is no viable hardware scaling system for personal computing yet. With the explosion of HPC performance, the question is whether that will ever break through, because as data communication bandwidth increases, the need for more processing power in personal computing doesn't really rise very fast, while the need for HPC performance increases by the minute, and the first exaflop systems are already being designed. A big factor in that development is the need for cybersurveillance and the new cyberwar arms race between the world's large power blocs, plus the fact that states have an exponentially growing administration because of the political evolution. A few years from now there will probably be several exaflop systems in operation, and although IBM's PowerPC platform is the obvious platform to make that happen, the first exaflop system will probably be built in China and based on Intel's Xeon Phi technology.

In less than two years, the supercomputer petaflop record has more than doubled, from about 15 petaflops to over 33 petaflops, whereas in PC CPUs the fastest chip still hovers around 100 gigaflops. Intel's first teraflop single-processor part is the Xeon Phi, so it exists, but it's not meant for the PC platform, only for Intel's MIC line, and Intel isn't even considering using that technology for GPUs. In GPUs, most current solutions hover between 0.5 and 1.3 teraflops in double precision, which means the newest cards are roughly twice as fast as cards that came out 18 months ago. By contrast, the fastest Intel CPU now has just over 100 gigaflops of double precision performance, and an AMD Phenom II X4 955 has half of that at stock speed, but that's an almost 7-year-old chip that also uses about twice the energy of the Intel. So the evolution in PC CPUs is definitely slowing down: the AMD can be overclocked about 30%, whereas the Intel can maybe get a sustained real overclock of 10-15% (it throttles a lot more when pushed, so overclocking doesn't give a 1:1 performance increase on Intel chips), which means the difference in computational performance is maybe 40% over a span of 7 years where it should be 600% according to Moore's law. And whereas the AMD Phenom II X4 955 costs about 90 USD, the Intel costs about 1000 USD, so a third more computational performance for ten times the price; that's almost a 1000% deficiency. In GPUs, on the other hand (which can also be used efficiently for scalable solutions, and which are RISC-based), the evolution is still on track with Moore's law and prices remain pretty much constant from generation to generation, mainly since AMD reversed the trend of ever-bigger GPU dies with the first hugely efficient RV770 (the Radeon HD 4800 series) and nVidia immediately jumped in the same direction.
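
Putting that price/performance comparison into a quick calculation (using the rough figures quoted in this thread, not benchmark results):

```c
/* Price/performance from the rough figures quoted above (thread estimates,
   not benchmarks): ~100 DP GFLOPS at ~1000 USD for the Intel chip versus
   ~50 DP GFLOPS at ~90 USD for the Phenom II X4 955 at stock speed. */
#include <stdio.h>

int main(void)
{
    double intel_gflops = 100.0, intel_price = 1000.0;
    double amd_gflops   =  50.0, amd_price   =   90.0;

    printf("Intel: %.2f GFLOPS per USD\n", intel_gflops / intel_price);
    printf("AMD:   %.2f GFLOPS per USD\n", amd_gflops / amd_price);
    printf("ratio: ~%.1fx more GFLOPS per dollar for the old AMD chip\n",
           (amd_gflops / amd_price) / (intel_gflops / intel_price));
    return 0;
}
```

Even on those raw numbers, before any overclocking adjustments, the old budget chip delivers several times more double-precision throughput per dollar.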

So it's not all black and white. RISC processors have a lower potential cost because they have a lower transistor count and therefore need less silicon, as GPUs and embedded/ARM-architecture processors show, and they keep up with Moore's law, which is primarily based on the size of the silicon. But the economic factors, the applications, the operating circumstances, etc. play a huge role in the technological evolution, and it's about much more than just silicon size. One of the key elements in the less obvious benefit of the RISC platform versus the CISC platform is that since the nineties, RISC and CISC have been growing towards each other, with RISC vendors extending their instruction sets to match Intel's, and Intel and AMD integrating ever more RISC technologies into their CISC designs. So IF the number of transistors, and thus the silicon die, ever becomes a determining factor, it will only do so in a more global design, taking into account more than just the CPU core as such. As the two designs grow closer to each other, scalability will be the technological equalizer: a CISC core as such will perform better than a RISC core, but the RISC core will be easier to scale. The big question is which approach will turn out to be the most economical, more smaller, cheaper, simpler cores, or fewer larger, more complex cores... the technological race just continues.

And as for the role of UNIX-like operating systems and the slow demise of the Windows platform and Microsoft applications, there is certainly a correlation with processor technology, in the sense that the currently prevalent consumer platform doesn't need to keep up with Moore's law because it has hit a functional ceiling, and it's not very likely that scalable operating systems like GNU/Linux will break through on the PC platform and revive the need for processor manufacturers to stay on track with Moore's law. People are now willing to pay over 300 USD for an Intel i7-4770 that has about a third more real computational performance than a 90 USD AMD Phenom II X4 955, so why would Intel invest in keeping up with Moore's law if consumers are willing to pay so much more money for so little extra performance? That willingness to pay for little benefit is pretty much a product of the separation of software and hardware, the business model Microsoft imposed on the consumer market.

On the other hand, in the systems and supercomputer world, Intel struggles to provide solutions that can compete in performance and price with RISC solution providers like IBM or nVidia, and in general the price per unit of performance in HPC and systems is dropping enormously. In the consumer market there is the so-called "post-PC" evolution, which is pretty much a RISC-only affair, and Intel struggles to provide CISC solutions that can compete with RISC solutions, not only because they make far less profit on a CISC chip (it's more expensive to produce and the performance gain over a competing RISC chip is not convincing), but also because the "post-PC" market is dominated by GNU/Linux. Whereas the scalability of RISC could never break through on the PC platform because it was held back by a handicapped prevalent operating system, that is not an issue on the "post-PC" platform, where pretty much everything runs on either the Linux kernel or iOS, which is based on the BSD kernel, also a UNIX clone.

If GNU/Linux were to definitively break through on the PC, that would change the whole situation, and we'd all be soldering together PlayStation 3 mobos like the US Air Force did to make their supercomputer (the USAF was a GNU/Linux pioneer, they even made and maintained their own GNU/Linux distro for years, and they have always been very interested in the security aspects; maybe that's part of why they chose the PowerPC/Cell-based PlayStation 3 to build their supercomputer... it's definitely something to think about). Imagine a PC coming out that allows for hardware scalability beyond the scope of Crossfire/SLI. AMD has taken a first step towards such a concept with the FM platform, which offers a central data bus infrastructure shared by the CPU and a scalable GPU design, where the internal GPU of the CPU can be scaled up by adding an external GPU.

Intel obviously has huge potential with its Xeon Phi technology as well, so that might grow into a concept that breaks through for the next generation of PCs, but certainly not if a handicapped platform like Windows NT stays prevalent. Microsoft is only now starting to develop a new operating system, which means, with their closed-source development model, that even if they steal as much open source technology as they can (which they will, just like Steve Jobs did with NeXT/OSX/iOS), they will not be able to keep up with open source development, and can only continue to drag down technological advancement on consumer platforms as long as they stay the prevalent supplier of consumer software.