Seems Threadripper and Epyc are based on the same chip

Of course they don’t NEED 4 dies.

The thing is that they HAVE four dies.
They HAVE Epyc, which uses the same size package and socket.
They HAVE the opportunity to use the same production line to make both TR and Epyc for efficiency in manufacturing.

AMD's marketing for a 1950X Threadripper is not trying to sell you a dual-die CPU; they are selling you a 16-core component. It makes absolutely no difference how that is delivered under the heat spreader.

Ya, I don't get why that was a surprise. I knew there would be 4 dies as soon as I saw Epyc and Threadripper together; AMD is keeping it as "simple as possible".

I do think, however, that at launch and probably for a while they will be using full 8-core dies for TR, with 2 dies that are either just passthroughs or complete duds, so I do not think the answer from AMD was untrue. I agree with you that for launch they will be physically disabling the other dies, and anybody stupid enough to fuck up a 550 to 1000 dollar chip trying to get more cores out of it will be shit out of luck. Down the road, when they have a ton of bins of 6 cores and below (and this will be a while, considering they are running at something like 80% quality yield), they may start to use those 2 additional die slots. For now I just don't think they need to, so I think you were right when you said they are possibly cutting up scrap wafer material to use for the placeholders.

They certainly have that option to use 2 x 8 core dies or any other combination. Typically in any electronic product the first run is different to the runs that follow so you may well be right.

Wait until the masses have found the next pointless thing to scream about, and then they can revert to microcode and no one will notice. They just need to be careful not to degrade performance or introduce bugs like Nvidia did with the 1070, when they switched to Micron memory with the controller bugs while continuing to provide reviewers with Samsung-only cards.

LOL, leave it to Nvidia to find a way to mess up a really pretty good GPU.

Based on rumor, they have so many full 8-core dies beyond min spec for the 100% bin that they had to delay Ryzen 3 until they just decided to put full dies on it anyway. Given the base manufacturing cost for a die of that size, they are still making a huge profit on a full 8-core die cut to 4 cores with no SMT at 110 bucks. But that is rumor; we won't know exactly until the financials come out next year. Intel is just shitting themselves because of the high bin rates AMD is getting by going with such a tiny die at 14nm. I really just can't wait till the new 7nm takes hold with Zen 2. What the hell are they going to do with all the extra space? Hopefully they will be able to increase the clock to 5GHz and totally clean Intel's clock. Imagine a 32-core, 64-thread, 180W TDP mcu running at a 5GHz boost that you can just drop into your existing motherboard, lol.

Shameless thread plug

Nvidia have fixed the Micron bug now, so it isn't a problem any more, but it was for the first 5 months of ownership. I can easily do a 21000+ graphics score in Fire Strike on an i7-2600 rig, so mine works pretty well.

I know that they have said they are getting great yields. I had not heard that they were sacrificing 8-core chips, but it really doesn't matter if they are, as the ones being sacrificed are over the 100% yield target and are really a bonus anyway.

7nm should be interesting. Get closer to 5GHz or improve the already good IPC further, and either add a multiplier on the data fabric, add a second interconnect to the memory controller, or start supporting quad-channel memory (even if it is only one DIMM per channel), so that the interconnect bottleneck to the memory is removed and the gaming limitations are solved. That would make a great spec for a mainstream CPU.

Overclocking headroom will likely not improve much under FinFET. AMD may catch up to Intel's clock frequencies, but current draw is already at its limit with FinFET.

GAAFET like IBM is using is one way around that, but… It’s IBM.

As I understand it, and I do not know for sure, the 4 dies with 2 disabled are there for the PCIe lanes (64, I believe). If this is the case, then those other 2 that are disabled are probably just shut off in code. The cores themselves have to be at least passively operational for the PCIe lanes to work; that is, the signal has to at least go through.

No, Epyc has 128 PCIe lanes. They don’t need the dummy cores at all in TR.

If you look at the pics that der8auer took, @Kevadu is correct. You will notice in the pics that only 2 corners have the outside transistor/tiny chip packs on TR, while Epyc has them all the way around the mcu. The 64 PCIe lane connections into the fabric would appear to come from the upper left and lower right of the chip/socket on TR, while the 128 PCIe lane connections for the fabric on Epyc SP3 seem to come from all four corners/quadrants. That would be 32 PCIe lanes per quadrant (4 quadrants for 128 total) of the mcu on Epyc, while on TR4 they just have 2 quadrants with connections, for 64 total PCIe lanes.

That is the limitation, and the reason Epyc mcu's are not compatible with the TR4 socket: PCIe is limited to the opposing quadrants. Of course, that does not prevent AMD from connecting the PCIe lanes into the mcu from any direction, and routing is all completely chip/microcode based, so AMD can pretty much do or change whatever they need to if performance can be enhanced, and also as the product stack matures. The simplicity, elegance, and flexibility of AMD's approach to modularity is extremely refreshing.

Imagine when Navi is released: they could drop 2 Navi modular dies onto Threadripper and create an HEDT part with 16 cores / 32 threads and onboard graphics beyond the level of current add-in graphics cards. A high-end workstation with Vega 64 x2 performance, all on one mcu. All for the low, low price of $1199.

The 128 PCIe lanes are only relevant for the Epyc two-socket chips. The extra 64 lanes are used for inter-socket communication; they are not available for use in user devices. A single-socket Epyc chip will only enable 64 lanes to the end user.

It is easy to work out what the maximum potential spec of any of these Threadripper/Epyc chips are.

Each rectangle of silicon (the package has 4) can contain two CCX modules with four cores each, one dual-channel memory controller, and one PCIe/IO controller that supports 32 PCIe lanes.

Ryzen 7 uses one of those rectangles. 16-core Threadripper is using the components contained in two of the rectangles, and Epyc is using the components contained in all four. The way the Infinity Fabric chips have been engineered, though, means that for a Threadripper chip, as long as you have two memory controllers and two PCIe controllers plus four CCX modules, you can place them in any of the four silicon rectangles and the performance will be exactly the same.

32-core Epyc will need to use four fully populated silicon rectangles (dies).
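The per-die resource math above can be sketched in a few lines. This is just a back-of-envelope model of what the posts describe (8 cores, 2 memory channels, and 32 PCIe lanes per die); the dict and function names are my own, not anything from AMD.

```python
# Per-die resources as described above: one Zeppelin-style die is
# 2 CCXs x 4 cores, one dual-channel memory controller, 32 PCIe lanes.
DIE = {"cores": 8, "memory_channels": 2, "pcie_lanes": 32}

def max_spec(active_dies):
    """Maximum potential spec for a package using `active_dies` dies."""
    return {resource: count * active_dies for resource, count in DIE.items()}

print(max_spec(1))  # Ryzen 7: 8 cores, dual channel, 32 lanes
print(max_spec(2))  # Threadripper: 16 cores, quad channel, 64 lanes
print(max_spec(4))  # Epyc: 32 cores, 8 channels, 128 lanes
```

Note this gives the *maximum potential* per die count; shipping parts may expose fewer lanes or cores than the silicon provides.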

@epicbastion Single-socket Epyc CPUs only have 64 PCIe lanes available to the end user as well. Drawing that conclusion just by looking at the caps on the package is not safe. The differences between the external single-socket Epyc connections and the Threadripper socket connections are related to memory channels.

Epyc has 8 channel memory support while TR only supports 4 channel memory. That is where the socket incompatibility comes from.

It certainly does leave itself open to Threadripper like products with navi GPU/HBM dies on board. None of the motherboards coming out now support it the way x370 does though.

Maybe there is future potential for a machine learning CPU? Buy the AMD Skynet CPU and you don’t need to spend money on an extra Nvidia P100 style GPU.

Ummm, I am pretty sure that Epyc has 128 PCIe lanes in both single- and dual-socket configurations. A single socket has 128 on its own, while in dual socket each CPU uses 64 to communicate between sockets and has 64 external, for a total of 128.

I stand corrected. I saw something with an AMD guy talking about it about a month ago, and he was saying that the second 64 was only enabled for the second-socket interconnect. I just had a look at the specs, and it seems the 7551P single-socket CPU does in fact have 128 lanes enabled.

The 4 dies certainly provide the infrastructure to provide that.

Each Epyc chip provides 128 lanes on its own.

When in a multi-processor (dual-socket) configuration, 64 lanes of each CPU are used for inter-chip communication (Infinity Fabric / Scalable Data Fabric).

Dual Socket

Epyc ---- 64 Lanes IF ---- Epyc
 |                          |
 |                          |
 64 Lanes                   64 Lanes

Single Socket

Epyc ---- 64 Lanes
 |
 |
 64 Lanes            
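The lane accounting in the diagrams above can be written out as a tiny sketch. This assumes the numbers stated in this thread (128 lanes per package, 64 repurposed per package for the inter-socket link); the function name is mine.

```python
# Lane budget per the posts above: each Epyc package has 128 PCIe lanes;
# in a dual-socket system, 64 lanes per package become the Infinity
# Fabric inter-socket link, leaving 64 per package for devices.
LANES_PER_PACKAGE = 128
IF_LINK_LANES = 64  # lanes used for socket-to-socket Infinity Fabric

def usable_lanes(sockets):
    """PCIe lanes left for end-user devices across all sockets."""
    if sockets == 1:
        return LANES_PER_PACKAGE
    # each package gives up IF_LINK_LANES to talk to the other socket
    return sockets * (LANES_PER_PACKAGE - IF_LINK_LANES)

print(usable_lanes(1))  # 128
print(usable_lanes(2))  # 128 (64 per socket remain for devices)
```

So either way the platform exposes 128 usable lanes, which is the point of the correction above about the 7551P.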

I told you that AMD were lying about dummy dies. Disabled, yes, but not dummy silicon. At a guess, the disabled dies are most likely from the reject pile rather than top-binned ones.

Isn’t this just semantics? Disabled/trashed/garbage/blank might as well be the same thing.

I think even if not active some connections are passing through those disabled chips.

The argument was that this wasn't an Epyc chip with the dies disabled but a unique chip with virgin silicon, when in fact it is basically an Epyc chip with two dies disabled.

The package PCB may still be different as well as the pinout. I guess we’ll have to see if someone can stick an Epyc chip into a TR4 board and see if it works, or vice versa.

There are 16-core Epyc parts, although I don’t know if anyone buying Epyc would bother with this.