Why aren't GPU architecture chips used as CPUs?

I've recently watched a video about an editing build, and I'd never given a second thought to how GPUs (particularly those with CUDA-based architectures) are used for accelerated rendering and processing. So I ask: why aren't GPUs (or similarly designed chips) used as CPUs? Sorry if I'm using the wrong nomenclature for anything; I've researched hardware a bit, but have never really gotten down to the science behind it.

I certainly don't claim to know everything, but I can at least try to get you pointed in the right direction.

CUDA and OpenCL (the open variant) run on GPU cores that are designed for arithmetic. They're very good at it, and very fast, too - they can do many calculations faster than a normal CPU can. The thing is, a CPU doesn't do just arithmetic; that's only one part of what a CPU does. (That's not to mention that most processors on the market already have integrated graphics, which can run OpenCL.) The CPU also handles a lot of non-arithmetic data and processes the instructions in the first place - that is to say, the CUDA cores need the CPU to tell them what to do. Otherwise, they might just try to math all the things, and who knows how that would end up.
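
To make that concrete, here's a rough CUDA sketch (the kernel name, sizes, and values are made up for illustration) of the CPU side doing all the directing: it prepares the data, copies it over, and launches the kernel - the GPU only runs the arithmetic it's handed:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// GPU kernel: pure arithmetic, one element per thread.
__global__ void addArrays(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                    // 1M elements, arbitrary size for the example
    const size_t bytes = n * sizeof(float);

    // The CPU prepares the data...
    float *hostA = new float[n], *hostB = new float[n], *hostOut = new float[n];
    for (int i = 0; i < n; i++) { hostA[i] = 1.0f; hostB[i] = 2.0f; }

    // ...copies it over to the GPU...
    float *devA, *devB, *devOut;
    cudaMalloc(&devA, bytes); cudaMalloc(&devB, bytes); cudaMalloc(&devOut, bytes);
    cudaMemcpy(devA, hostA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(devB, hostB, bytes, cudaMemcpyHostToDevice);

    // ...and tells the GPU what to do. Without this launch, the CUDA cores sit idle.
    addArrays<<<(n + 255) / 256, 256>>>(devA, devB, devOut, n);
    cudaMemcpy(hostOut, devOut, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", hostOut[0]);      // expect 3.0
    cudaFree(devA); cudaFree(devB); cudaFree(devOut);
    delete[] hostA; delete[] hostB; delete[] hostOut;
    return 0;
}
```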

That said, if you ever hear of a CPU-dependent game, know that it's probably because it does most of its arithmetic on the CPU when it could be using the GPU instead.

They do different things. CPUs are very generalized. GPUs are great at what they do, but they can't do everything. Even if most of the horsepower is in the GPU (which it is, especially in a gaming or rendering rig), you still need a CPU to delegate. What you are asking for is essentially HSA, which is AMD's response to the whole "GPUs are faster than CPUs" thing. It's a more generalized GPU acceleration technique, and I believe that's essentially what you're after. The problem is that it has to be fully implemented in the software in question, which means more dev time and work, and that isn't the easiest thing to get done.

Sure, GPUs are amazingly powerful at what they do, but they lack the instructions that CPUs have, and the same goes for CPUs - they lack the instructions GPUs use to render images.

They were created for two separate purposes and have been optimized only for those purposes. The exception would be the iGPU in AMD's APUs or Intel's processors. Those are very weak and will only suffice for simple tasks and some low-end gaming.

I don't want to use a GPU instead of a CPU, I was just wondering why we don't use the more powerful of the two for both tasks, and now I know.

GPUs are extremely parallelized processors, and they actually perform poorly at serialized tasks. CPUs are serial processors.
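
As a rough illustration (a hypothetical snippet, not from any real codebase): the first function below is the kind of work a GPU eats for breakfast, because every element is independent and can go to a separate thread; the second has each step depending on the previous one, so it can't be split up, and a single fast CPU core handles it better:

```cpp
// Parallel-friendly: every element is independent, so thousands of GPU
// threads can each process one element at the same time.
__global__ void scaleAll(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Serial by nature: each iteration needs the result of the previous one,
// so the work can't simply be spread across threads.
float runningValue(const float *data, int n) {
    float value = 0.0f;
    for (int i = 0; i < n; i++) {
        value = value * 0.5f + data[i];   // depends on the previous iteration
    }
    return value;
}
```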

CPUs process a wide variety of instructions, while GPUs are basically ASICs that can only process specific instruction sets. GPUs need CPUs to feed them their instructions and interpret the results, since the GPU cannot do this on its own. Sort of like a man and a horse. A man can pull a wagon, but he has to work very hard to do it. The horse can also pull the wagon, with ease and at high speed, but without the man, the horse cannot be hitched to the wagon and doesn't know where to pull it. So, the man (CPU) sets the horse (GPU) up and directs it where to go, all the while not really doing much work himself.

So I answered your question then? Glad that I could help.

+1 to that example.

CPUs handle data in serial form - basically one task at a time, in order, but really fast. GPUs handle data in parallel form, which can mean many tasks running at once to complete a certain job. Almost all software is designed to be executed in serial form, so GPUs just can't deal with that type of workload. With AMD's HSA technology, the bridge between the two will eventually be built, so we can have the best of both worlds. It's just a matter of time for the rest of the computer world to get on board. As of right now, I think only a few pieces of Linux software can utilize HSA.

I may be somewhat wrong here, but from what I've read I think it's close.

I thought HSA was being used on consoles already? Isn't that how the shared RAM works?

5. Summary
The current state of the art of GPU high-performance computing is not flexible enough for many of today's computational problems.
HSA is a unified computing framework. It provides a single address space accessible to both CPU and GPU (to avoid data copying), user-space queuing (to minimize communication overhead), and preemptive context switching (for better quality of service) across all computing elements in the system. HSA unifies CPUs and GPUs into a single system with common computing concepts, allowing the developer to solve a greater variety of complex problems more easily.

Taken from here: http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/hsa10.pdf
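
I won't pretend to fully get it either, but the "single address space" part is roughly the same idea as CUDA's managed memory (just an analogy I'm drawing, not HSA itself): the CPU and GPU both work through the same pointer, so there's no explicit copying back and forth. A rough sketch, with made-up sizes:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Trivial kernel so there's something for the GPU to do.
__global__ void doubleValues(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1024;                               // arbitrary size for the example
    float *data;                                      // one pointer, visible to both CPU and GPU
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; i++) data[i] = 1.0f;       // CPU writes it...
    doubleValues<<<(n + 255) / 256, 256>>>(data, n);  // ...GPU works on the same memory...
    cudaDeviceSynchronize();
    printf("data[0] = %f\n", data[0]);                // ...CPU reads the result, no memcpy anywhere

    cudaFree(data);
    return 0;
}
```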

I don't understand the science behind it either, but it looks like they can drop latency a lot. Has me wondering what AMD will do with Mantle in the future, since they focused on overhead for lower-end CPUs.

I don't think so. APUs have been using physical memory as VRAM since they came out. What gives consoles an edge is that they have their own API instead of DirectX. The API on consoles is much like AMD's Mantle in that it's "closer to the metal".

It looks like they are using hardware as well. Or at least plan to. My head hurts, I am going to watch the football game now. 

I thought that HSA killed off the need for "VRAM", since both the CPU and GPU can get information from the same spot. I dunno, I'm trying to simplify this so that I can understand it.

IMO, software hasn't caught up to hardware yet. Just look at the games coming out right now. Most are un-optimized POS coded in C++, Visual Basic, or the .NET Framework, and they either aren't even ready for more than 6 cores (Dragon Age: Inquisition) or don't offer SLI/Crossfire support (AC: Unity).

The future would be on-die GPU/CPU, like the consoles. They would have faster response times since they are on a single die, and also higher bandwidth, instead of bottlenecking at PCIe 3.0 speeds.