Building PyTorch from source

I’ve spent 2 weeks trying to build PyTorch from source. I have a Vega 10 GPU (gfx900_64),
which is supported by ROCm and PyTorch, but I can’t get the precompiled ROCm package from PyTorch to work, so that’s why I’m building from source.
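For reference, the build path I’ve been attempting roughly follows PyTorch’s own ROCm build notes. The gfx900 target is my assumption for Vega 10, and the exact flags may differ between PyTorch and ROCm versions, so treat this as a sketch rather than a known-good recipe:

```shell
# Rough sketch of a ROCm source build for Vega 10 (gfx900) -- adapted from
# PyTorch's ROCm build instructions; flags vary across versions.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# Restrict the build to the Vega 10 ISA so it doesn't compile every target
export PYTORCH_ROCM_ARCH=gfx900

# "Hipify" the CUDA sources into HIP before building
python3 tools/amd_build/build_amd.py

# Build and install against the system ROCm installation
USE_ROCM=1 python3 setup.py install
```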

Can ANYONE get this thing to work?

I want to dip into AI and machine learning with my upcoming ROCm-supported GPU (to do computer-aided diagnosis research), and now I feel quite hopeless, because if you can’t get it to work, what more can a lowly pleb like me do in the face of this kind of adversity? :rofl: :joy: :sweat_smile: :weary: :cry:

I’m a Linux noob, so I’m not exactly the gold standard.

I had thought PyTorch could just be downloaded as a binary? I haven’t really checked, because my GPU hasn’t arrived yet…

Well, if you have a newer GPU, maybe you’ll have better luck, but for Vega users the prebuilt binaries haven’t worked since early 2021.

That sounds rough. Good luck in tinkering! I hope the answer stumbles into you somehow.

Which version of ROCm are you using? IIRC, the newest versions (5+) do not support Vega (or any other consumer GPU).

I’m not sure how well older versions would play with PyTorch. Have you tried their docker images, by any chance?
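In case it helps, AMD’s docs describe running their prebuilt PyTorch image along these lines. The device flags are what their docs list; the `latest` tag is only illustrative, so pick one matching your ROCm version:

```shell
# Pull AMD's prebuilt PyTorch image (tag is illustrative; check Docker Hub
# for one matching your installed ROCm version)
docker pull rocm/pytorch:latest

# /dev/kfd and /dev/dri expose the GPU to the container; the container
# user also needs to be in the video group to access it
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  rocm/pytorch:latest
```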

Anyhow, it’d be nice to know how far you got in the build and what errors you faced.

I’ve tried 4.2, 4.5, and every 5.x release.

https://docs.amd.com/bundle/ROCm_Release_Notes_v5.0/page/About_This_Document.html
I’ll try a 4.x release again.
https://docs.amd.com/en-US/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.htm
My MI25 is currently flashed as a Vega FE, but I can flash it back to an MI25 if its being “consumer” is really the problem.

Vega 10 should have full support under 4.x, regardless of whether it’s consumer or not.

I’ll try tomorrow; the AMD repo is not responding at the moment, so I called it a night.