Intel vs AMD: The Core Wars (Some Random Thoughts from Taipei) | Level One Techs

Interesting. I only (loosely) made the connection because they say you have to agree to Terms and Conditions, which these days generally means giving up the goods.

I wonder what AMD will do with those Intel chips; personally, leaving them out for a dog to shit on would be pretty cool.


Testing maybe? I still wouldn’t waste good silicon.

Try to reverse-engineer them to improve their own products? :wink:

No. It was Jim Keller who delivered. When he finishes his new micro-architecture Intel will be ahead of AMD again.

The device says “AMD” on it, not “Jim Keller Inc”

In 5 to 8 years?

I like the idea from another forum of making keyrings for AMD employees engraved with the sentence “I have competition in my pocket”.


That’s easy: just half-ass the enforcement of processor security boundaries.

For Intel’s definition of “improve IPC”…


VIP and I agree.

We now have pretty awesome CPUs like the 2700X and 8700K, but we are very circumscribed in which modern high-bandwidth resources (NVMe, 10Gb LAN, USB 3, …) we can attach.

We are more or less precluded from NVMe RAID, for example.

We have gone from being accustomed to having 4–8 once-modern SATA storage ports to only 1–1.5 of the new NVMe storage devices.

I can’t help thinking that many who believe they have a cutting-edge platform for years to come will rapidly outgrow it, or wish they could.

It’s an unbalanced system, like a V8 car with bicycle wheels.

The average end user has no requirement for NVMe RAID. Nor 10 gigabit (average user, remember).

My X470 Taichi, for example, has 2× NVMe slots and still has those 8 SATA ports that I can stick SSDs into. It currently has 3× SATA SSDs, not in RAID, and it is plenty fast for normal end-user tasks.

Just a few years ago most systems were shipping with spinning rust, which is orders of magnitude less responsive and runs at 20% of the speed of even a SATA SSD.

I get it, it would be nice to have the bandwidth, but for the mainstream platform, the vast, vast majority of end users simply won’t use those things. They certainly don’t want to pay for them.

This is what the HEDT platform is for: to pay for the stuff that 99% of people won’t need or use.

You aren’t precluded from NVMe RAID anyway. 10 gigabit networking and PCIe-hosted NVMe SSDs can both be added via cards if/when required. Very few people run multiple GPUs these days, and both vendors have been trying to push people away from it. 8× PCIe 3.0 is fine for 99% of cards on the market.
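To put rough numbers on the “x8 is fine” claim, here is a quick sketch using the standard PCIe 3.0 figures (8 GT/s per lane, 128b/130b encoding); these constants come from the PCIe spec, not from the post:

```python
# Rough PCIe 3.0 bandwidth math. The 8 GT/s line rate and 128b/130b
# encoding are the standard PCIe 3.0 figures (assumed here, not
# quoted from the thread).

GT_PER_SEC = 8e9          # PCIe 3.0 line rate per lane, transfers/sec
ENCODING = 128 / 130      # 128b/130b: 128 payload bits per 130 line bits

def pcie3_gbytes_per_sec(lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GT_PER_SEC * ENCODING / 8 / 1e9  # /8: bits -> bytes

print(round(pcie3_gbytes_per_sec(8), 2))   # x8  link: ~7.88 GB/s
print(round(pcie3_gbytes_per_sec(16), 2))  # x16 link: ~15.75 GB/s
```

So an x8 slot still gives a card close to 8 GB/s each way, which is why halving the lanes rarely costs measurable GPU performance.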

Worry about 3–5 years’ time in 3–5 years.

The next rev will no doubt have PCIe 4.0, USB++, more NVMe / less SATA, etc. You’ll want a new motherboard at that point for various reasons.

No platform, even Threadripper or X299, will be “cutting edge for years to come”. I regularly see sustained load on all 8 cores. I do not often see sustained disk IO (and I’m not even on M.2 yet). I’m not sure what you’re doing to run into platform bottlenecks just yet, but maybe HEDT is for you.

“the die is just a quad core. Unless they RYZE the core count”

Rot.

If you understood the Zen/fabric/MCM architecture, you would realise the 4-core CCX is unchangeable. It’s about more CCXs, not bigger CCXs.

Says who? Whether you are an engineer working at AMD or have other inside info, you have to provide some proof.


ANYTHING can be changed.

I suspect they’ll go to 6 core CCX units with Ryzen 2, but only AMD likely knows that.

Vega uses infinity fabric, and that doesn’t even have any CCX units.

Infinity fabric is a packet-switched bus. Adding more units to it is relatively trivial.
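A toy way to see why a packet-switched interconnect scales more easily than dedicated point-to-point wiring (this is a generic topology comparison, not AMD’s actual Infinity Fabric layout):

```python
# Generic topology comparison (not AMD's actual fabric layout):
# dedicated point-to-point links between every pair of units grow
# quadratically, while endpoints on a shared packet-switched fabric
# grow linearly -- adding a unit is just one more attachment point.

def full_mesh_links(units: int) -> int:
    """Dedicated link for every pair of units: n*(n-1)/2."""
    return units * (units - 1) // 2

def fabric_endpoints(units: int) -> int:
    """One attachment per unit on a switched fabric."""
    return units

for n in (4, 6, 8):
    print(n, full_mesh_links(n), fabric_endpoints(n))
```

Going from 4 to 8 units roughly quadruples the mesh wiring (6 → 28 links) but only doubles the fabric endpoints, which is the sense in which “adding more units is relatively trivial”.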

As per WikiChip, it doesn’t have any specific topology limitations.


6 cores per complex with a 3-channel memory controller would make sense in the future.

Either that, or more likely (IMHO) dual channel memory using DDR5 or faster DDR4 and bigger caches (to alleviate the bandwidth contention due to more cores).

The consumer platform has involved adding memory SIMMs/DIMMs in pairs for best performance since at least the early 90s. I’d be surprised if that changes.

AMD could even put some local HBM on-package as L4 cache to alleviate memory bandwidth concerns if they had to.

That will turn into a nightmare for the kernel to keep track of.

Not sure of the implementation specifics, but yeah. I do very much doubt we’d see triple channel on the consumer platform (and thus, per die on a larger package) any time soon.

Most of the workloads that need more than dual-channel memory really aren’t consumer-desktop focused… triple-channel consumer boards would be significantly more expensive, etc., for the limited user base who need it. I’m sure AMD would rather up-sell them to Threadripper or EPYC (which is still cost-competitive vs. Intel).

Remember, AM4 is supposed to be a 5-year platform, and I don’t know if the pins are available on the socket to support more memory channels. It’s also expected to be cost-effective for things like APUs and low-end quad cores, etc.

edit:
I’m aware I’m mixing the consumer platform into this discussion, but essentially the consumer platform is just one die of EPYC. If they change the CCXs or dies to have more memory channels, it will probably affect all of their sockets… I reckon they’ll just ramp up the speeds (or somehow increase the caching/buffering between the cores and the socket). That way they can remain pin-compatible.

Architectures change.


Stumbled on
https://www.smarteranalyst.com/analyst-insights/technology-stocks/advanced-micro-devices-amd-analyst-assess-implications-cisco-epyc-adoption/

“Interestingly, Cisco positions its first EPYC-based UCS, the C125, as offering 128% more core density at rack level vs. its Intel offerings and 71% more than its Intel blade offerings. Cisco’s move to a Rome based 7nm EPYC next year, for example, would take its 2U core density up by 50% from 256 cores to 384 cores.”

Does that suggest 7nm EPYC will be 48 cores next year? I was guessing 64 would be possible.
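The quote’s numbers can be sanity-checked. Assuming the C125-based 2U chassis holds four dual-socket nodes, i.e. 8 EPYC sockets per 2U (my assumption, not stated in the excerpt), the implied per-socket core count works out:

```python
# Sanity-checking the Cisco quote. The 8 sockets per 2U (4 nodes x
# 2 sockets) is my assumption about the chassis, not stated above.

SOCKETS_PER_2U = 8
cores_current = 256   # per the quote: current EPYC cores in 2U
cores_rome = 384      # per the quote: 7nm "Rome" EPYC cores in 2U

print(cores_current // SOCKETS_PER_2U)               # 32 cores/socket today
print(cores_rome // SOCKETS_PER_2U)                  # 48 cores/socket implied
print((cores_rome - cores_current) / cores_current)  # 0.5, the quoted +50%
```

Under that assumption the quote does point at 48-core Rome parts; a 64-core part would have pushed the same 2U to 512 cores, a 100% jump rather than 50%.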
