Learning about GPUs

I want to learn about GPUs. My desire to learn is motivated by two things: first, I'm just curious. Second, I don't have the funds to build a computer right now, so I want to spend the time between now and whenever I procure those funds learning as much as I can, so that no matter what cards are on the market, I can make an educated decision about what card will suit my needs.

Speaking of which, these are my needs:

  1. I hardly ever game, and what gaming I do is mainly older stuff (currently I'm slowly playing through Half Life I)
  2. I do a generous amount of virtualization. For instance, right now I have a pfSense network with 5 clients virtualized while I test different configurations before I implement them into the school network I'm building.
  3. I do some video editing, and I'm slowly ramping up how often/how much I edit.
  4. I am solely on Linux. I completely left Windows a year ago after three or four years of dual booting. I will not go back to Windows. Ever. Period. However, I do switch distros roughly every three months.
  5. Down the road, I plan to start implementing Blender animations into my videos (emphasis on down the road).

I want to learn what makes a GPU good at these things specifically; however, I also want to learn as much as I can about GPUs in general.

So:

  1. What do you think is important to know about GPUs?
  2. If anyone can A) Help me out with an answer, or just some knowledge you think is useful, or B) Direct me to a resource that I can study for myself, that would be great.
  3. In every field of IT/CS/whatever, there's knowledge that everybody wishes they'd learned sooner, the sort of knowledge that opened your eyes to something that had been confusing you for a long time. Anything like that would be awesome too.

I'll end this with a question: I hear a lot of talk about VRMs, so

  1. What do VRMs do, and where can I research them?
  2. How do you determine if a GPU has... good VRMs? <-- if that's how one should phrase that question.

Thanks in advance, I look forward to learning.

*Disclaimer: This thread is to augment my own personal research. I'm not just leeching; I'm looking into all of this myself as well as reaching out for help.

Wendell forgive me, but here's a Techquickie about VRMs: Motherboard VRMs As Fast As Possible.

The best way to learn about anything is to toy around with it. Get started with OpenGL or OpenCL, whichever you fancy. Just dealing with the APIs will teach you about the underlying hardware and many tutorials provide additional insights as well.
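To give you a taste of what "dealing with the API" looks like, here's a minimal OpenCL vector-add sketch. Error checking is omitted for brevity, and the kernel name, array size, and variable names are just illustrative choices, but the API calls themselves are the standard OpenCL 1.x ones you'll meet in any tutorial:

```c
/* Minimal OpenCL vector-add sketch (no error checking).
 * Build with something like: gcc vec_add.c -lOpenCL */
#include <CL/cl.h>
#include <stdio.h>

#define N 1024

static const char *kernel_src =
    "__kernel void vec_add(__global const float *a,              \n"
    "                      __global const float *b,              \n"
    "                      __global float *c) {                  \n"
    "    size_t i = get_global_id(0); /* one work-item per element */\n"
    "    c[i] = a[i] + b[i];                                     \n"
    "}                                                           \n";

int main(void)
{
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Pick the first GPU the runtime reports. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* The kernel source is compiled at runtime, by the driver --
     * this is one of the places where the API teaches you what the
     * hardware/driver stack actually does. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vec_add", NULL);

    /* Copy the inputs into device buffers. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);

    /* Launch N work-items; the GPU runs them in parallel batches. */
    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[42] = %f (expected %f)\n", c[42], 3.0f * 42);
    return 0;
}
```

Roughly two thirds of that is boilerplate for talking to the driver, which is exactly why writing it yourself is instructive.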

A few key points about GPUs:

  • GPUs are massively parallel devices. Unlike CPUs, which have a few cores tuned for rapidly processing sequential calculations, GPUs focus on parallel compute. They sacrifice clock speed and latency for throughput.
    This means that any program on a GPU has to deal with what are essentially threads. If you've ever dealt with threads on a CPU, you'll know how painful that is. GPUs have thousands of them - high-end nvidia GPUs don't even reach their full potential until ~10k threads or so (see the first kernel sketch after this list).
  • Cache, cache, cache! Some backstory first: compared to the registers found in processors, RAM is extremely slow. Whenever any processor (GPU or CPU) has to access RAM, dozens to hundreds of clock cycles pass before the data arrives. To mitigate that, processors use high(er)-speed caches that allow faster access.
    Computations are cheap. RAM access is expensive.
    Here's the point: a CPU has several megabytes of cache for just a few cores. That's hundreds of kilobytes to several megabytes per core. GPUs enjoy no such privilege: there are a few megs of cache for thousands of threads. Basically, every thread gets a few bytes' worth of cache. Key takeaway:
    Keep RAM access coherent. As long as all threads access data that sits next to each other, all is fine, because it fits into the cache. But if you do random accesses all over the place, your performance can drop by an order of magnitude (see the coalesced-vs-strided sketch after this list).
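To make the "thousands of threads" point concrete, here's a sketch of an OpenCL kernel that does nothing but record how the work is carved up. The built-ins (get_global_id, get_local_id, get_group_id) are real OpenCL C; the kernel name and the buffers are just for illustration:

```c
/* Every work-item in the launch executes this same body in parallel;
 * only its IDs differ. */
__kernel void whoami(__global int *global_ids,
                     __global int *local_ids,
                     __global int *group_ids)
{
    size_t gid = get_global_id(0);   /* unique index across the whole launch   */
    size_t lid = get_local_id(0);    /* index within this work-group            */
    size_t grp = get_group_id(0);    /* which work-group this item belongs to   */

    global_ids[gid] = (int)gid;
    local_ids[gid]  = (int)lid;
    group_ids[gid]  = (int)grp;
}
```

Launch it with a global size of, say, 65536 and all 65536 work-items run that body; the hardware schedules them in fixed-size batches (wavefronts on AMD, warps on nvidia), which is why you need so many of them to keep the chip busy.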
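And here's what "coherent" vs. "random" access looks like in code. Both kernels below are hypothetical, but they show the standard coalesced-vs-strided pattern: in the first, neighbouring work-items read neighbouring floats, so a whole group can be served by a few wide memory transactions; in the second, neighbouring work-items read locations far apart (STRIDE is an arbitrary illustrative constant), which defeats the cache and the coalescing hardware:

```c
/* Coalesced: work-item i touches element i -- adjacent threads, adjacent data. */
__kernel void copy_coalesced(__global const float *in, __global float *out)
{
    size_t i = get_global_id(0);
    out[i] = in[i];
}

/* Strided/scattered: work-item i touches element (i * STRIDE) % n --
 * adjacent threads hit memory locations far apart. */
#define STRIDE 1021
__kernel void copy_strided(__global const float *in, __global float *out,
                           const uint n)
{
    size_t i = get_global_id(0);
    size_t j = (i * STRIDE) % n;
    out[i] = in[j];
}
```

Both kernels move exactly the same amount of data, but on most GPUs the second one is dramatically slower.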

Sorry for the very basic explanations, but I don't know how much background knowledge you have.
I've also learned a lot about hardware just by building my own processor in a physics simulator. Highly recommend, if - and only if - you've got a few weeks to waste :wink:

One more thing. I recommend developing on an AMD card. Nvidia cards are less problematic, but only because their drivers stretch the specifications. If your code runs on AMD, chances are it'll run everywhere. If your code runs on nvidia, it's probably buggy as hell, but nvidia took pity on you.

Plus nvidia is a dick company anyway. Perfectly unbiased fact, obviously.


If you take up OpenGL, I recommend learning version 3. This thread contains some more information as well as links to good resources.
