The Bleeding Edge [Semiconductor Foundry Thread]



Welcome to the semiconductor manufacturing thread.
Here we will discuss foundries and manufacturing technology, and post news and info.

With the recent news of GloFo dropping 7nm, we have three foundries left battling it out in the nanometer race: TSMC, Intel and Samsung.

The state of the foundries:


Intel is currently producing the bulk of their chips on their 14nm node. The third iteration of it, 14nm++, is considered to be the highest-performing node in current high-volume production.
They are also manufacturing a small volume on their first-gen 10nm node; these chips are, however, considered to have lower performance than 14nm++.
High-volume production of their 10nm node is scheduled for Q4'19, and according to some reports this is expected to be the 10nm+ process.

Here is a chart Intel themselves released comparing their 14nm to competing foundries.


Further reading on Intel's 14nm process:


TSMC currently has two main high-end nodes: 10nm and 16nm (including 12nm FFN).
Their 10nm node is mostly used for smaller chips like smartphone SoCs; it is believed to be similar to or denser than Intel's 14nm, but not suited for high-volume production of larger chips.
Their 16nm process is what is used for producing high-performance GPUs and CPUs; it has been reported that this process is similar to their previous 20nm node, but with FinFETs added.
TSMC is expected to be the first foundry to hit high-volume production of a 7nm node. This will dethrone Intel's long reign as the leading semiconductor foundry, and it is speculated that this node is denser than Intel's 10nm node.

Further reading on TSMC roadmaps:

Beyond 7nm


Samsung Semiconductor currently has two high-end nodes in production: 10nm and 14nm.
Like TSMC's, their 10nm node is mostly used for smaller chips and is very similar in density and performance.
Their 14nm node is what is used for higher-performing chips like CPUs and GPUs; like TSMC's, it is said to be based on the previous 20nm node but with FinFETs added.
Next on the map for Samsung is their 8nm node, which is an improvement on their 10nm technology.
Their 7nm node is set for 2H'19.

Further reading on Samsung nodes:

Power Architecture in enterprise datacenters

Dunno if this is off-topic (looks like this is intended to be more of a market thread), but I found this video long ago that actually describes how silicon chips are manufactured in a more in-depth manner than "How It's Made" or whatever.

Said vid:


Not at all, it's supposed to be mainly focused on the big three foundries, but information like this is great.


Something I want to contribute here; it's really in-depth.


Also a forum/site I like to follow for semiconductor related news. It’s quite niche.


Huawei announced the Kirin 980 SoC, which is built on TSMC's 7nm. This gives us a look at clock speeds compared to the Kirin 970, which is built on TSMC's 10nm: the 980 clocks up to 2.6GHz while the 970 tops out at 2.36GHz.


So I did some quick math on the density of TSMC's 7nm process.
They claim 6.9 billion transistors on less than 100mm² die area.
If we assume the die is 95mm², their density would be 72.6MTr/mm², which is quite a bit less than Intel's claim of 100MTr/mm² for their 10nm node.

In comparison, the Kirin 970 (TSMC 10nm) has a density of 56.9MTr/mm².
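The quick math above can be checked with a short script. The 95mm² die size is the assumed value from the post, and the Kirin 970 figures (5.5 billion transistors on a ~96.7mm² die) are the commonly reported numbers that yield the 56.9MTr/mm² quoted:

```python
def density_mtr_per_mm2(transistors: float, die_area_mm2: float) -> float:
    """Transistor density in MTr/mm²: transistor count in millions per mm² of die."""
    return transistors / 1e6 / die_area_mm2

# Kirin 980: 6.9 billion transistors, assumed 95 mm² die (TSMC 7nm)
kirin_980 = density_mtr_per_mm2(6.9e9, 95)
# Kirin 970: ~5.5 billion transistors, ~96.7 mm² die (TSMC 10nm, reported figures)
kirin_970 = density_mtr_per_mm2(5.5e9, 96.7)

print(f"Kirin 980 (7nm):  {kirin_980:.1f} MTr/mm²")   # ~72.6
print(f"Kirin 970 (10nm): {kirin_970:.1f} MTr/mm²")   # ~56.9
print(f"Density scaling 10nm -> 7nm: {kirin_980 / kirin_970:.2f}x")
```

Note how sensitive the result is to the assumed die area: at 99mm² instead of 95mm², the 980 drops to about 69.7MTr/mm².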


Die area is not really directly usable to calculate density anymore.
This is because some chips are now made of stacked dies in a multi-layer package.


Should be pretty comparable within the same foundry at least, no?
I.e. TSMC 10nm -> 7nm.


Yes, but not really compared to Intel.


What I'm curious about is when we'll hit the point where transistors are packed so tightly that chips can't scale up in clock speed any more, due to an inability to shed heat (because of physical proximity), despite the improvement in power draw (and thus heat) from the smaller process.

I think that given anything since 22nm has been capable of 4.5 GHz on the Intel side, and AMD were hitting 5 GHz on 32nm (the FX chips) but can't on 12 or 14nm, maybe we're at that point now.

Or put another way:

  • clock speed scaling is done (and this is why core counts are exploding)
  • smaller process for better power consumption: sure
  • smaller process for more transistors in the same space: sure

… but the days of getting higher clocks through a better process are, I think, over.

Then again, we hit a wall at 3 GHz about a decade and a half ago for a while…

I know back when I was in high school (so… early 90s) there was talk of moving off silicon and onto gallium arsenide, but that never happened for bulk production (I think Cray were using it at the time). Yet?


Back when I was a kid there were books on how cars of the future would have gas-turbine engines and every house would have a portable nuclear generator.

In the 90s people were already designing the tombstone to put on the grave of CISC; RISC and the fancy new VLIW would dig the hole.

On cooling, I think we will see the CPU and cooler become more integrated, like we see on household A/C units where the motor/compressor is bathed in refrigerant and sealed in a round steel sphere. Maybe inside its own container that functions as a heatpipe: several chips with just enough space to allow coolant between the layers.


We aren’t hitting thermal limits on Ryzen. It’s silicon stability limits.


Yeah, but still, Intel have been stuck around the 4.5-5GHz mark for what… 8 years now? Since 32nm, or at least 22nm?

Either way, clocks don't seem to be scaling, whether it's leakage/silicon stability/heat/etc., and a smaller process doesn't seem to be getting the wins it used to.


Yeah, but have Intel really been working on it, or have they been sitting around enjoying the profits of releasing minor improvements year over year?

Additionally, I don’t think there’s much of a use case for 5GHz or higher.



Nah, seriously, I think there's a use case for as many gigahertz as an OEM can provide.

Programmers are lazy and threading is hard. If they can run a single thread faster, there's a win there IMHO.

That said, I believe the tide has definitely turned and there's now a definite requirement for any platform or app to be many-thread aware.


Yep. That's why I'm moving off of developing in Python. It's got a GIL (global interpreter lock), which prevents concurrent execution of bytecode across multiple threads.

I think having more cores is more important than hitting higher clock speeds. Yeah, fps in games is capped, but we already hit 144+ with ease; how high do you want to go?
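As a quick sketch of what the GIL point means in practice (the `burn` workload and job counts here are made up for illustration): CPU-bound pure-Python work does not speed up across threads in CPython, because the GIL serializes bytecode execution, but it does scale across processes, since each process gets its own interpreter and its own GIL.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n: int) -> int:
    """CPU-bound busywork: sum of squares, no I/O that would release the GIL."""
    return sum(i * i for i in range(n))

def timed(executor_cls, jobs: int, n: int):
    """Run `jobs` copies of burn(n) in the given executor; return (seconds, results)."""
    start = time.perf_counter()
    with executor_cls(max_workers=jobs) as ex:
        results = list(ex.map(burn, [n] * jobs))
    return time.perf_counter() - start, results

if __name__ == "__main__":
    # Threads: serialized by the GIL, so roughly no faster than running serially.
    t_threads, _ = timed(ThreadPoolExecutor, 4, 1_000_000)
    # Processes: sidestep the GIL, so this should scale with available cores.
    t_procs, _ = timed(ProcessPoolExecutor, 4, 1_000_000)
    print(f"threads:   {t_threads:.2f}s")
    print(f"processes: {t_procs:.2f}s")
```

(Threads are still fine in Python for I/O-bound work, where the GIL is released while waiting; it's number-crunching like this that doesn't parallelize.)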


Intel's first 10nm product has lower clock speeds than the 14nm equivalent.

Will be interesting to see what clock speeds the 7nm GPUs end up having.


Since this is a longer-life thread, I will start with a month and see how that goes.


True, but it isn’t just games.

There are plenty of business applications running on decades old code that don’t thread well.

We run one here where I work. It's our ERP system. It is threaded to handle many concurrent users, sure. However, some of our reports or processes peg a single core for long periods of time, because whilst the system as a whole is multi-threaded, various tasks within it are single-thread only.

We can’t do anything about it without the developer fixing it, and the cost to change away from it is likely in the low 7 figures.

So… more megahurtz = more better

I’m sure we’re not alone.

In general, sure. Throw more cores at it. But I think there will most definitely be a demand for high clocks (or at least better IPC whilst maintaining clocks) in at least niche applications for years to come.
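The ERP example above is basically Amdahl's law: once part of a task is single-threaded, extra cores stop helping and only clock speed or IPC moves the needle. A quick illustration (the 30% parallelizable fraction is a made-up number, not from the ERP system in question):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only parallel_fraction of the work scales."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A report that is 70% serial barely benefits from piling on cores...
for cores in (1, 4, 16, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.3, cores):.2f}x")
# ...and can never exceed 1/0.7 ≈ 1.43x, no matter the core count.
# A 10% clock bump, by contrast, speeds up the WHOLE task by ~1.10x.
```

So "more megahertz = more better" really does hold for workloads like that, at least until the developer fixes the serial tasks.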