How does a CPU get its core clock?

*Noob Question*


I understand how a CPU is made through the process of silicon wafers. But how is the core clock on a single CPU set? Is it just that the core clock speeds advertised are not true for all processors of that same family? Or is the core clock implemented after the production of the processor? If the answer is the latter of the two, how do they do it?

Actually a very interesting question, I didn't think of that aspect. I'd wondered how exactly your motherboard knows every CPU's name, but that seems like a completely different ballpark.

It used to be done through the FSB: a clock generator (a chip on the motherboard) produced the base clock that kept everything in sync. Back in the day, FSB meant that memory and CPU ran off the same base clock but with different multipliers. After DDR was introduced the ratio wasn't 1:1 anymore, so you could pair a 3.2 GHz Pentium 4 with DDR of any speed, not just whatever a fixed ratio would dictate. The base FSB clock was actually much lower than the core clock: on a 3.2 GHz part it was 200 MHz (quad-pumped to 800 MT/s) with a 16x multiplier. After Nehalem launched I have no idea what happened to the FSB, or what exactly QPI (QuickPath Interconnect) is. It seems to follow the same basic rules under a different name. The memory controller has also migrated onto the CPU, something AMD had done long before. Anyone care to pitch in and correct me if I'm wrong?

Also, that's the reason SDRAM became obsolete: it severely limited the clock speed of CPUs (133 MHz max FSB). If the Pentium III had had DDR, it would have ripped the Pentium 4 or Thunderbird to shreds, and the market would've progressed much faster.

Eternal regrets, Intel.

The Pentium III was the first x86 CPU to include a unique, retrievable identification number, called the PSN (Processor Serial Number). A Pentium III's PSN can be read by software through the CPUID instruction if the feature has not been disabled in the BIOS.


Wiki - Processor Serial Number

So the core clock: it's a signal that oscillates between 0 and 1. Basically, how fast a transistor can switch on and off determines the clock rate, because the voltage takes time to rise fully to a definite 1 or fall to a definite 0. Higher-quality silicon responds to voltage changes in less time and can therefore be switched faster (higher clock rates).

Core clock is set at the end of the production process. Each processor is tested at the highest frequency the manufacturer wishes to achieve, and any chips that don't meet that specification are binned for lower speeds.

When a processor is marketed at a set speed (e.g. 3.0 GHz), all processors with that tag will be capable of at least a 3.0 GHz clock. If a chip isn't capable of that speed (e.g. it only tests stable to 2.9 GHz), it will be tested at a lower clock and marketed at that lower speed (e.g. 2.5 GHz), even though the chip is capable of going higher than its label.

As far as I know, the clock signal is still generated off-chip, and the CPU multiplies that external signal, using a phase-locked loop (PLL), to derive the clocks for the core, ring bus, memory, and so on.

Outside of its CPUID, the CPU has no sense of how fast it can be clocked or how fast it's running. It relies on the external clock signal for its sense of time, and if you feed it a false clock it won't know the difference.


Well I have my answer now! Thanks guys and gals for all the information!