I have been looking high and low, and I am not finding any good results on the wibbly wobbly web. Has anyone set up a test RAM drive using your GPU's GDDR memory?
I would love to know where you found the answer so I can give it a go.
I am running an AMD Radeon 6000 series GPU with ArcoLinux (I usually use the Arch Linux wiki for everything). Can someone point me in the right direction here? I am at a loss.
This is new territory for me. I want to start utilizing my GPU for more than insane graphics.
Thanks in advance.
I was digging into this a couple of years ago, using VRAM as swap in particular (I had as much VRAM as RAM back then). In my case this "feature" of old didn't work with modern GPUs; at least that's where I settled back then. I may have missed something, or newer cards and drivers may allow it again.
Today DRAM is so cheap that I never bothered with this concept again. But keep me informed if you manage to do this with modern cards.
Yes, I will. I am still digging into this; maybe TensorFlow or ROCm holds an answer, I don't know. I will post here if I find any solutions or breadcrumbs on this issue.
But I do agree, RAM and NVMe are very affordable, so there's not really a need. I really wanted to test a RAM drive backed by system memory, then NVMe, then a RAM drive backed by the GPU, to see if there was any real difference. I did see a difference depending on where the drive was backed; I'm still working on this.
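For the first two legs of that comparison, a quick sketch of what the test could look like: a tmpfs ramdisk (system RAM) versus an NVMe-backed path, measured with `dd`. The paths, sizes, and mount point here are just example values, not anything from this thread; the GPU-backed leg would use whatever VRAM filesystem you end up mounting.

```shell
# 1 GiB ramdisk in system RAM (example size and mount point)
mkdir -p /tmp/ramdisk
sudo mount -t tmpfs -o size=1G tmpfs /tmp/ramdisk

# Write 512 MiB and let dd report throughput; conv=fdatasync makes dd
# flush before reporting, so the number isn't just the page cache
dd if=/dev/zero of=/tmp/ramdisk/bench bs=1M count=512 conv=fdatasync

# Same test against an NVMe-backed directory (example path)
dd if=/dev/zero of="$HOME/bench" bs=1M count=512 conv=fdatasync

# Clean up
rm "$HOME/bench"
sudo umount /tmp/ramdisk
```

Sequential writes only tell part of the story; the latency discussion below suggests random access is where the backends should differ most.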
It is old tech; I played with a ramdisk on my Apple IIe back in the day (1986), with a whole 128K and 48K free to play with. lol. I had fun with it, but it didn't do much.
Literally one quick Google search away.
Type "use gpu as ramdisk" and you get tonnes of info. The first result gets you started in a blink on Windows.
Type "use gpu as ramdisk linux" and you get tonnes of info related to Linux. The first result looks ready-made for use.
More interesting questions: why bother? And what new tech looks similar?
The minimum latency on the PCIe bus is about 100 ns. For each hop it has to go through (such as a switch chip, retimer, etc.), add another 100 ns. While a couple hundred nanoseconds is still fast, compared to system memory it's very slow. That doesn't sound great for random access.
The fastest GPUs are still PCIe Gen 4 (?). With 16 lanes, that tops out at ~32 GByte/s. If your motherboard is only PCIe Gen 3, the theoretical throughput halves to ~16 GByte/s. If you happen to have a lesser GPU (limited to 8 lanes), halve those throughputs again. Even at 32 GByte/s, that bandwidth is way below system memory these days.
So there is a reason the "memory bus", while it looks like a misnomer in 2023, is still around. Can we have the audience address the second half of the questions?
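Those throughput figures fall out of the per-lane rates in the PCIe specs (Gen 3: 8 GT/s, Gen 4: 16 GT/s, both with 128b/130b line encoding). A small sketch of the arithmetic, ignoring protocol overhead beyond the line code:

```python
def pcie_throughput_gbytes(gen: int, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GByte/s."""
    raw_gt_per_s = {3: 8.0, 4: 16.0}[gen]  # giga-transfers/s per lane (PCIe spec)
    encoding = 128 / 130                   # 128b/130b line-code efficiency
    gbits_per_lane = raw_gt_per_s * encoding
    return gbits_per_lane / 8 * lanes      # bits -> bytes, times lane count

print(f"Gen 4 x16: {pcie_throughput_gbytes(4, 16):.1f} GByte/s")  # ~31.5
print(f"Gen 3 x16: {pcie_throughput_gbytes(3, 16):.1f} GByte/s")  # ~15.8
print(f"Gen 4 x8:  {pcie_throughput_gbytes(4, 8):.1f} GByte/s")   # ~15.8
```

Which matches the ~32 and ~16 GByte/s figures above, versus roughly 50+ GByte/s for dual-channel DDR4/DDR5 system memory.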
 GitHub - prsyahmi/GpuRamDrive: RamDrive that is backed by GPU Memory
 GitHub - Overv/vramfs: VRAM based file system for Linux
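For the Linux option, vramfs is a FUSE filesystem backed by OpenCL buffers. Roughly, the workflow below is what its README describes; the mount point and the 4G size are example values, and the exact dependency package names vary by distro (it needs OpenCL and FUSE 3 development packages), so check the repo before relying on this.

```shell
# Fetch and build (requires OpenCL headers/ICD loader and FUSE 3 dev files)
git clone https://github.com/Overv/vramfs
cd vramfs
make

# Mount a 4 GiB VRAM-backed filesystem (size is an example value)
mkdir -p /tmp/vram
bin/vramfs /tmp/vram 4G

# ... use /tmp/vram like any other filesystem ...

# Unmount when done
fusermount3 -u /tmp/vram
```

Being FUSE-based, every file operation takes a round trip through userspace on top of the PCIe latency discussed above, so don't expect it to beat a tmpfs ramdisk.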
Awesome, thank you. Don't know why I couldn't find that.