GlobalFoundries abandoning 7nm, 5nm and 3nm

https://www.anandtech.com/show/13277/globalfoundries-stops-all-7nm-development

Looks like GlobalFoundries is dropping its 7nm, 5nm, and 3nm R&D/pathfinding operations due to a so-called “strategy shift,” claiming that it doesn’t make sense to invest in bleeding-edge nodes with no plans to use them in the near future.

GlobalFoundries has been having financial difficulties over the last few years and this R&D cut appears to be part of a belt-tightening operation. We’ll see if it will pan out for them in the long run.

The problem with adopting a new process is twofold: when a new process is rolled out, there are initial yield and other process-related issues to be worked out, and on top of that, new bleeding-edge process nodes cost multiple billions of dollars to develop.

With AMD leveraging GlobalFoundries for silicon, I’m curious what this means for 7nm Zen. I wager we’ll see AMD moving over to TSMC.

I don’t remember the specifics of AMD’s contract with GlobalFoundries, but if GlobalFoundries no longer has an industry-leading node, AMD may not have to pay penalties.

Additionally, GlobalFoundries has contracts with US government agencies to manufacture custom silicon. I’m not sure how that will play out, or whether they’ll be able to stay afloat on that alone.

We’ll see how this works out, but I foresee the big customers like AMD leaving GlobalFoundries for other contract manufacturers that have the ability to build on new process nodes.

In the press release today, a representative said:

GF is realigning its leading-edge FinFET roadmap to serve the next wave of clients that will adopt the technology in the coming years. The company will shift development resources to make its 14/12nm FinFET platform more relevant to these clients, delivering a range of innovative IP and features including RF, embedded memory, low power and more. To support this transition, GF is putting its 7nm FinFET program on hold indefinitely and restructuring its research and development teams to support its enhanced portfolio initiatives.

8 Likes

This doesn’t bother me in the least. It’s about time we got back to the way computers used to be and wrung every bit of use out of the platforms we have now.

It’s literally only a few years before I hear about a brand-new process node coming around. And I’m sitting here wondering: what about the last one? Are we just giving up on it already?

I swear a lot of it has just been a dick-measuring contest, and most of us have been swept up in the fervor.

Hell, how long did the manufacturing process for the 8086 last? Like 20 or so years? Everything’s been going by too fast. We need to slow down and see how much more we can pump out of the previous process. I swear we haven’t explored enough of the older ones. But once the ’90s came around and everybody started talking about Moore’s law, it seems like EVERYBODY decided to just run with it.

1 Like

Maybe, but we’ve made some significant power-efficiency strides in the last 10 years.

Everyone’s been focused on efficiency over raw performance (not that they don’t make improvements there as well) because that’s where the datacenter is. Datacenters want performance per watt because whether you build a DC or rent space in one, you only get so many kW per rack.

The datacenters want efficiency and smaller process nodes are the best way to get that.
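As a rough sketch of that rack math (every number below is a made-up illustrative assumption, not a real spec):

```python
# Back-of-the-envelope rack math: why performance per watt rules the DC.
# All figures here are illustrative assumptions, not vendor specs.

RACK_BUDGET_W = 10_000   # assumed facility power budget: 10 kW per rack
SERVER_POWER_W = 500     # assumed draw of one server at full load

servers_per_rack = RACK_BUDGET_W // SERVER_POWER_W
print(f"{servers_per_rack} servers fit in a {RACK_BUDGET_W / 1000:.0f} kW rack")

# A node shrink that cuts power 30% at the same performance lets you pack
# ~40% more compute into the very same fixed power envelope:
shrunk_power_w = SERVER_POWER_W * 0.7
print(f"after the shrink: {RACK_BUDGET_W // shrunk_power_w:.0f} servers/rack")
```

Same rack, same power feed, noticeably more compute: that’s the whole pitch.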

5 Likes

So now it’s just Samsung, Intel, and TSMC going forward with 7nm, and of the three, only TSMC is actually at the production stage.

Exactly.

3 Competitors Remain

We all knew this process race was going to leave a lot of manufacturers in the dust, but to be frank, I’m not sure that’s a bad thing. Eventually we’ll hit a point where power efficiency is where we want it, and they’ll start optimizing yields instead, which will allow other manufacturers to catch up.

6 Likes

The first 7nm Epyc were meant for TSMC from the start.
However, this definitely means pricier wafers for AMD.

Is this even physically possible? I thought the whole issue with Moore’s law petering out was that you hit a wall because the electrons start hopping across lanes around 8nm. I get that maybe you can stretch that to 7nm, but I thought something as low as 3nm was impossible.

1 Like

This, but it also seems the investors/owners of GF can’t be bothered anymore. They just want all the monies, and they want them now. They could expand, but it’s cheaper to stick to 14/12nm only; when that business is gone, they’ll probably just close up or sell the company off piece by piece.

The main issue, it seems, is that 14/12nm is only available at Fab 8, which is also where 7nm was being deployed. They don’t want to expand the facility, so they’d have to reduce 14/12nm production to implement 7nm in its place. That would lower profits in the short term, and the UAE (GF’s owners) doesn’t want that.

I also wouldn’t be surprised if their process was in trouble. They missed targets, and we have yet to see any chips they produced in the wild, even though risk manufacturing should have started months ago and be ramping by now. So if GF dropped the ball again, that could also explain it.

Apparently GF’s process was better. However, we never saw any actual working silicon, so it remains to be seen whether that was true or just marketing hubbub.

They already moved to TSMC; they planned on dual-sourcing from the beginning, and all the 7nm chips we’ve seen so far from AMD have been fabbed at TSMC. TSMC will likely just get all of the business now. This brings up the question of capacity: Apple is gobbling up a lot of it, and others use TSMC as well. We may see limited supply of Zen 2 chips around launch time, and it could be a while before they are widely available.

Yeah, the wafer supply agreement is apparently being renegotiated. IDK how they could keep AMD locked in when they’re outright refusing to develop the processes they said they would for AMD. We shall see. This may finally get that awful agreement off AMD’s back.

Demand for 14/12nm, 28nm, and older processes is still strong, so for the next few years they will be fine. After that? Who knows. They’ll probably be broken up and sold.

Am watching this with interest. Prices will probably go up and supply down. We shall see.

Ahh yes. We should stop advancing because “old stuff is better.” Right… Some serious rose-tinted glasses going on here.

Actually, process changes are slowing down. We have been on 14nm for over four years now, and Intel will be building 14nm chips well into 2019 and even 2020… This “golden age” of chips you’re so fondly remembering doesn’t exist. Processes were changing about every two years right up until 2012, when 22nm came out and we stalled.

Also, no, we don’t just “give up.” Most processes are fully developed by the time they are replaced, or it becomes more effort and expense to keep optimizing them than to replace them entirely with a new one. And we don’t just throw them away: while they may not be used in the newest chips, old manufacturing processes, even 45nm or 65nm from 2006, are still in production building non-critical parts.

It’s actually not. The naming is silly, meaningless, and all about marketing (i.e., one foundry’s 7nm and another’s 10nm being essentially the same thing, just named differently), but actual die shrinks are really important and matter a lot. They bring tangible performance benefits.
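To put rough numbers on that, here’s the idealized classic-scaling arithmetic for a die shrink (a textbook idealization; real modern nodes fall well short of these factors):

```python
# Idealized (Dennard-era) scaling for a linear shrink factor s < 1.
# Textbook idealization only; modern nodes fall well short of this.

def ideal_shrink(s: float) -> dict:
    return {
        "area": s ** 2,         # the same circuit occupies s^2 of the old area
        "density": 1 / s ** 2,  # transistors per mm^2 rise by 1/s^2
        "capacitance": s,       # gate capacitance falls roughly with s
    }

# A classic "full node" shrink is s ~= 0.7:
for metric, factor in ideal_shrink(0.7).items():
    print(f"{metric}: {factor:.2f}x")
# area: 0.49x, density: 2.04x, i.e. roughly double the transistors per mm^2
```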

Nope, it actually lasted about three years: the 3µm process was replaced by 1.5µm in 1982. 8086s continued to be built for many years, but that was because they were used for non-critical, non-performance parts. The same thing is true today, as I mentioned: old parts and manufacturing processes are still used, just not in leading-edge applications.

Things have been slowing down, and yes, we have explored old manufacturing tech, and it is still used where applicable. But if you actually think a 45nm part would ever outperform a 14nm or 7nm one, you’re living in a fantasy land. It would be slower and use a lot more power no matter the modifications.

The major reason we have the performance we expect today, at the prices we have, is die shrinks.
Moore’s law has been a thing since the early ’70s and has largely held true until today, and we are better off for it…

There is no reason we can’t advance our manufacturing and get the most out of it at the same time. So I’m not sure if your comments come from a place of ignorance about chip manufacturing, serious nostalgia blinders, or some other factor, like not wanting, or not being able to afford, the latest and greatest.
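For a concrete sense of what “largely held true” means, here’s the doubling-every-two-years arithmetic seeded from the Intel 4004’s roughly 2,300 transistors in 1971 (a back-of-the-envelope illustration, not a fit to real product data):

```python
# Moore's-law arithmetic: transistor counts doubling every ~2 years,
# seeded from the Intel 4004 (~2,300 transistors, 1971). Illustrative only.

def projected_transistors(year: int, base: int = 2300,
                          base_year: int = 1971, period: float = 2.0) -> float:
    return base * 2 ** ((year - base_year) / period)

for year in (1971, 1982, 2006, 2018):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# The naive 2018 projection lands in the tens of billions, the same
# order of magnitude as the largest real chips of that era.
```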

With traditional techniques and materials, maybe, but if we tweak the materials it can be done; things will need to change eventually, though. Regardless, we’ll find a way to move forward. Computing power is essential to life as we know it these days, and the demand will create solutions.
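For anyone curious why the “electrons hopping across lanes” wall is about the barrier material as much as the distance: in the simple rectangular-barrier (WKB) picture, the tunneling probability falls off exponentially with both the barrier thickness and the square root of the barrier height, so a higher-barrier material buys back some of what a thinner layer loses. A textbook-form sketch:

```latex
% Approximate transmission through a rectangular barrier of width d
% and effective height \phi (WKB limit, standard textbook form):
T \approx \exp\!\left( -\frac{2d}{\hbar} \sqrt{2 m \phi} \right)
% Leakage grows exponentially as d shrinks, but a larger \phi
% (a different barrier material) pushes the wall further out.
```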

6 Likes

My big fear is that radiation hardening won’t be able to save these chips when human spaceflight becomes a reality. There’s a reason why space probes only go as small as 130nm lithography… cosmic rays.

The smaller things get, the harder it will be to prevent silicon damage. Just look at CMOS/CCD sensors coming back from the ISS…

3 Likes

I wonder if this has anything to do with the multiple-node chips-on-a-substrate idea Intel was floating as a response to Zen. Not having the cutting-edge nodes is less of a problem in that case, since you can specialise each individual chip and make them all function best as one, at least in theory.

@DerKrieger, I wasn’t at all saying:

stop advancing because “old stuff is better.”

I was saying that the race for higher performance has gotten so out of control that before I can even get to the potential of the hardware I’ve bought, it’s already replaced by a system/process that’s leagues better, and then everyone “pressures” you to buy the next best thing.

And that pressure has gotten worse over the years. I’ve been on many forums where people are dumped on because of the hardware they have, regardless of whether they’ve stated they can’t afford the newer stuff.

I actually do miss the days when hardware lasted you a while and could be used through multiple generations of software, when you could rely on the fact that the PC you bought or built would last until the software became overwhelming for it.

But these days, software becomes overwhelming so quickly that it seems like you can’t win unless you’re constantly buying every new thing that comes out. You used to be able to purchase your hardware and be secure in the fact that you had a good ten years with it. But nowadays it’s more like two to three before you’ve gotta upgrade again.

It really feels like hardware isn’t given enough time to mature, and the software along with it, because devs see the new shit coming and just start coding for that, making it harder for the slightly older stuff to compete.

It can be very disheartening sometimes.

I don’t want hardware to last ten years, because then all new software has to work on legacy trash. It’s a massive chicken-and-egg case, and neither will wait for the other; I sure as hell would rather have technology advancing too fast to keep up with than keep the same thing for a decade or more.

It’s silicon, not a house. Furthermore, no one forces you to upgrade.

4 Likes

You also get the advantage of being able to work on each piece separately.

1 Like

If shit works for your use case, you don’t need the latest and greatest unless you want it; otherwise, keep what you’ve got.

4 Likes

I’m assuming you’re an adult and you’re complaining about peer pressure? Srs?

If what you have works for you, then that’s fine. Don’t upgrade, and who cares what people think. If you’re not happy with your performance, then do. Simple.

Not sure what you mean by “multiple generations of software.” Most hardware works fine with most software unless it is extremely out of date.

Wait, what? So is software advancing too fast, or hardware? Which is it? You’re arguing both. As it stands, tbh, I’d say you’re wrong: software is completely holding back hardware. Nothing is really well multi-threaded, so if anything, making software work better on older stuff is the problem. Also, hardware really hasn’t changed that much in the past 10 years. Intel was stuck on quad cores for how long? That held a lot of stuff back.

IDK what time that was. I remember that back in the mid-to-late ’90s and early 2000s you had to update your system every six months to a year because performance was advancing so rapidly. If anything, it has slowed down. Hell, an eight-year-old Sandy Bridge CPU is still very competitive, so I’m not sure what you’re remembering here…

Not really. I was on a 290/390 and an 8350 for a very long time, and it still worked well. Then you have the Sandy Bridge example… Three years is a pretty decent amount of time in the tech space, btw; a lot can change in that time for the better, and I don’t think that is unreasonable. At the same time, no one is forcing you. Three-year-old hardware is still extremely competitive: an R9 290 or GTX 970 can still max most games at 1080p…

Not really. New stuff usually takes a while to gain support, e.g., Ryzen and multithreading, all being held back by old hardware. Same with Nvidia’s RTX when it launches: not much will have raytracing support. Again, old hardware is usually the focus; the status quo is what gets development, because it is what most people have.

Or, ya know, it just isn’t very fast anymore, or wasn’t to begin with…

It is what you make it. A new 7nm CPU will be faster, but it doesn’t make your existing part slower. It’s okay if you can’t afford it or don’t want to upgrade. If you’re happy with your performance, that’s fine.

I do agree that sometimes hardware and software don’t make the best use of each other. Software generally is poorly developed, though these days that seems to be more a matter of it living in the past. More open solutions, if adopted, would greatly help this divide. Not Linux, but more open standards, à la Vulkan, FreeSync, etc.

Meh, this was totally off topic. Sorry, OP. Mods, feel free to delete, but I just thought this almost-Luddite attitude was a little silly.

5 Likes

From where I’m sitting at work, I see lots of datacenter demand for traditional CPUs/GPUs being shifted (more like gobbled up) by demand for custom ML accelerators. They’re just so much better than CPUs for ML that it doesn’t matter that they end up costing as much as the rest of the server and requiring custom datacenter infrastructure.

Demand for RAM, flash, and networking silicon (especially the 100G/400G variety; 200G is so-so) stays strong. RAM/flash/networking are currently held back by trivial things like the driver’s packet-steering token management or the number of PCIe lanes, and also the utter stupidity of low MTUs on the internet.
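To make the low-MTU gripe concrete, here’s the wire-efficiency arithmetic for standard 1500-byte frames versus 9000-byte jumbo frames (using the usual TCP/IPv4/Ethernet header sizes, with no options or VLAN tags assumed):

```python
# Wire efficiency of a TCP/IPv4 stream at different MTUs.
# Standard header sizes: 20 B IPv4 + 20 B TCP inside the MTU,
# plus 18 B Ethernet framing + 20 B preamble/IPG outside it.

def wire_efficiency(mtu: int) -> float:
    payload = mtu - 20 - 20    # strip IPv4 + TCP headers
    on_wire = mtu + 18 + 20    # add Ethernet framing + preamble/IPG
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {wire_efficiency(mtu):.1%} of the wire is payload")
# MTU 1500: ~94.9%; MTU 9000: ~99.1%. Jumbo frames also mean ~6x fewer
# packets per byte, which is what really cuts per-packet CPU/NIC overhead.
```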

I’m personally secretly hoping we ditch HDDs some time soon, i.e., I’m hoping flash pricing and capacities finally match HDDs’.

Lol @Smerrills, I am still running an AMD FX 6100 (Bulldozer) and an R9 290. Still plugging along just fine. Sure, I would like an upgrade at this stage, but everything works fine and still plays everything to a satisfactory standard.

2 Likes

Taiwan Semiconductor Manufacturing Company, or TSMC, has confirmed that mass production on its 7nm process node has begun. The 7nm process will be used in new products, including orders from AMD, who will be using the process for their upcoming GPU and CPU hardware.

Also, AMD can select and use fabs from both TSMC and GlobalFoundries to create their next-gen processors. This was made possible by GlobalFoundries using similar 7nm pitches and SRAM cells very close in design to TSMC’s, allowing Zen 2 7nm processors to be developed at either fab without major differences.

Sounds to me like AMD will use only TSMC, since GF is backing out of 7nm.

Oh yeah, that makes total sense; I’m just saying for the non-7nm stuff. GF can probably carry on with the larger nodes, and the CPUs can be built in different ways, like multiple larger dies on a substrate.

If you need 7nm, GF is not it currently.

I think they are only dropping their own research and development of new processes.

That doesn’t mean they can’t use other companies’ technologies for a specific process. They have been basing their process on Samsung’s 14nm process since 2016.

3 Likes