Can AMD make a "CPU" with HSA?

It's been a long time since I posted, and I hope someone can help me with this.

Edit: Summary: can I use an i7 9999k and an R9 990x and have HSA/hUMA? Or is HSA/hUMA relegated to the APU?

            *** and yes, "relegated," at least until we get 8+ core, 1024+ shader APUs

 

 

I want to know whether Heterogeneous System Architecture (HSA) can be applied to a traditional CPU.

As I understand it, HSA relies on AMD's technology that allows the CPU and the GPU to access the same bits in memory at the same time, letting the CPU and GPU work on the same task without copying everything ten thousand times over. I also believe that the CPU and GPU housed inside the APU are connected via PCI lanes, not magic or voodoo. Also, the memory is just shared from the CPU to the GPU.
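For reference, here is a rough sketch of the "copy everything" path that HSA is supposed to make unnecessary, written against the plain OpenCL buffer API. The setup of ctx, queue, and kernel is left out, and the function name is just made up for illustration, but the API calls themselves are standard OpenCL:

```c
/* Sketch: the copy-based path HSA/hUMA is meant to avoid (plain OpenCL buffers).
 * Assumes ctx, queue, and kernel were created elsewhere; "data" lives in host RAM. */
#include <CL/cl.h>

void run_with_copies(cl_context ctx, cl_command_queue queue,
                     cl_kernel kernel, float *data, size_t n)
{
    /* 1. Allocate a separate buffer in the GPU's own memory. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, NULL);

    /* 2. Copy the host data across the bus into that buffer. */
    clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, n * sizeof(float), data, 0, NULL, NULL);

    /* 3. Run the kernel on the GPU-side copy. */
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* 4. Copy the results back so the CPU can see them again. */
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, n * sizeof(float), data, 0, NULL, NULL);

    clReleaseMemObject(buf);
}
```

HSA/hUMA is meant to collapse steps 1, 2, and 4 into "both sides just use the same pointer."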

Is HSA/hUMA restricted to the APU by some inherent structure that facilitates HSA, or can AMD separate the two, allowing their new CPUs (I hope) and GPUs to power an enthusiast/server machine?

There are some design restrictions that I can see. With the APU as the model, a system with a dedicated CPU and GPU would have to share the same memory source. So a 4 GB GPU can't meet the RAM requirements for anything more than a tablet in the near future. I know there is no standard for GDDR5 as system memory, although the PS4 is a thing.

Could AMD build a platform using (Hynix) stacked memory (not integrated into the PCB) or GDDR5 to meet the needs of servers and professionals, thus bringing HSA/hUMA out of the realm of the HTPC?

(*** Early documents on HSA said that it would cause a slight delay for the CPU and GPU, but the time saved by not copying the data was greater than the time lost, except when dealing with very small tasks.)
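A back-of-the-envelope way to see that trade-off: copying N bytes costs roughly N divided by bus bandwidth, while the shared-memory path mostly costs a fixed overhead. The bandwidth and overhead numbers below are my own illustrative assumptions, not AMD's figures:

```c
/* Rough break-even estimate for "copy to the GPU" vs. "share the memory".
 * All numbers are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double pcie_bw  = 12e9;   /* assumed effective PCIe 3.0 x16 bandwidth, bytes/s */
    const double hsa_over = 20e-6;  /* assumed fixed shared-memory overhead, seconds */

    const double sizes[] = { 4e3, 1e6, 256e6 };  /* 4 KB, 1 MB, 256 MB workloads */
    for (int i = 0; i < 3; ++i) {
        double copy_time = 2.0 * sizes[i] / pcie_bw;  /* copy over + copy back */
        printf("%9.0f bytes: copy ~%8.3f ms, shared overhead ~%5.3f ms -> %s wins\n",
               sizes[i], copy_time * 1e3, hsa_over * 1e3,
               copy_time > hsa_over ? "shared memory" : "copying");
    }
    return 0;
}
```

With those assumed numbers, the tiny 4 KB job is faster to just copy, while the 1 MB and 256 MB jobs come out well ahead by sharing, which matches the "except for very small tasks" caveat.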

 

PS: I ain't no good with me grammar, so please don't go nippin' at me.

I can't really say whether AMD could or not. I feel like there just might be too much latency, what with the PCIe bus.

But... what benefits from HSA at the moment? I haven't heard of much that does.

It does what the F-35 program does...

 

Makes promises, then delays said promises, but we stay hooked and follow along just hoping it won't happen again. Honestly, HSA is an idea that will really never happen, from the looks of it right now.

OpenCL 2.0 uses HSA for all things OpenCL, such as rendering, transcoding video, games, and more.
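To make that concrete, here is a minimal sketch of what the HSA-backed part of OpenCL 2.0 looks like from the host side: fine-grained shared virtual memory, where the CPU and GPU dereference the same pointer and nothing gets staged or copied. It assumes ctx, queue, and kernel already exist and that the device actually reports fine-grain SVM support (which today basically means an HSA-capable APU):

```c
/* Sketch: OpenCL 2.0 fine-grained SVM, the API face of HSA/hUMA.
 * Assumes ctx, queue, and kernel already exist and the device reports
 * CL_DEVICE_SVM_FINE_GRAIN_BUFFER support. */
#include <CL/cl.h>

void run_with_svm(cl_context ctx, cl_command_queue queue,
                  cl_kernel kernel, size_t n)
{
    /* One allocation, visible to both CPU and GPU at the same virtual address. */
    float *data = (float *)clSVMAlloc(ctx,
        CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
        n * sizeof(float), 0);

    /* CPU writes directly -- no clEnqueueWriteBuffer, no staging copy. */
    for (size_t i = 0; i < n; ++i)
        data[i] = (float)i;

    /* GPU works on the very same memory. */
    clSetKernelArgSVMPointer(kernel, 0, data);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(queue);

    /* CPU reads the results in place -- no clEnqueueReadBuffer. */
    float first = data[0];
    (void)first;

    clSVMFree(ctx, data);
}
```

On hardware without fine-grain SVM support, that clSVMAlloc call simply fails, which is pretty much the APU-versus-discrete question being asked here.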

Obviously they can; it's just that we need devs to make programs for it, which I speculate will only happen when Intel decides to roll out HSA.

HSA can do a lot of things, but the software development isn't in AMD's hands... the problem is that it relies on OpenCL, and both are in their infancy.

What you described is GPU acceleration. HSA makes the GPU and CPU use the same memory, so that it isn't cached in separate places. It basically makes them work very closely together. Can they make it work with a standard CPU/GPU setup? It seems unlikely to me. Either the information would need to be cached on the GPU (which is back to standard GPU acceleration), or the GPU would need direct access to the system memory (DDR3). The latter could feasibly work, maybe, but that seems like a lot of latency, and I am not sure how easily it could be implemented in the real world.
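On the "could it feasibly work" point: OpenCL 2.0 actually defines a halfway house for exactly that case, coarse-grained SVM, where the pointer is shared but the runtime is allowed to migrate or copy the data behind explicit map/unmap calls. A rough sketch, again assuming ctx, queue, and kernel already exist:

```c
/* Sketch: coarse-grained SVM -- a shared pointer, but the runtime may still
 * migrate the data over PCIe behind the map/unmap calls. */
#include <CL/cl.h>

void run_with_coarse_svm(cl_context ctx, cl_command_queue queue,
                         cl_kernel kernel, size_t n)
{
    float *data = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE,
                                      n * sizeof(float), 0);

    /* CPU must map before touching the allocation... */
    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float),
                    0, NULL, NULL);
    for (size_t i = 0; i < n; ++i)
        data[i] = (float)i;
    /* ...and unmap before handing it to the GPU. */
    clEnqueueSVMUnmap(queue, data, 0, NULL, NULL);

    clSetKernelArgSVMPointer(kernel, 0, data);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* Map again to read results on the CPU; the runtime syncs whatever moved. */
    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_READ, data, n * sizeof(float),
                    0, NULL, NULL);
    float first = data[0];
    (void)first;
    clEnqueueSVMUnmap(queue, data, 0, NULL, NULL);

    clSVMFree(ctx, data);
}
```

So it can be made to "work" over PCIe, but the copies and the latency don't really go away; they just get hidden inside the runtime, which is pretty much the concern raised above.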

You seem to be running into the same issue in your thinking as I do in mine. I think that HBM 3D stacked memory tech can fix the issue. The HBM pitch is that 3D stacked memory uses less power than DDR4 and is faster than GDDR5.

=> Is there a standard, or is there a standard in the works, for HBM as system memory? I know that in their marketing they aimed it at servers.

It would be interesting to buy a GPU with 0 GB of onboard memory; it would remove the limitation of a single GPU's VRAM for CrossFire/SLI. You could have 32, 64, or more GB of RAM for your GPUs.

In the words of Logan: bring on the SKYRIM MODS!!!

This article (below) talks about standards for HBM with Nvidia GPUs

(and points to the R9 390x having 8 GB of RAM because each chip has 2 channels)

http://www.cs.utah.edu/events/thememoryforum/mike.pdf

This article (below) points to APUs using HBM memory in the near future.

http://seekingalpha.com/article/2309075-amd-the-apu-and-high-bandwidth-memory-maintaining-graphics-and-total-compute-performance-leadership

I would really love to see GPUs and CPUs both using system HBM. The only limitations there, though, are the price of the HBM and the added latency for the GPU. If those things are addressed, then we get some real improvements in the computer's overall architecture as well as more end-user configuration options. I really don't like not being able to add RAM to GPUs (though price gouging is a concern at that point; as we see with Apple, the price of added memory is out of proportion with the cost of the added tech, and I would hate for that to become a reality in the PC world as well). So yeah, it could be glorious if price and latency are non-issues. I really hope that that is what AMD is thinking as far as the future of the desktop PC world goes. They do tend to be ahead of the curve in general.