Want to play with home AI, what's a good platform for that?

Well, I’ve never given much of a crap about RAM speed on my server either, but I’ve read that it matters a lot for LLMs — token generation is mostly memory-bandwidth-bound, since the whole set of model weights gets streamed out of RAM for every single token. That said, my CPU maxes out every core I throw at those things anyway, so who tf knows if faster RAM would get me out of slow jail.
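If you want a rough feel for why bandwidth matters, here's a back-of-envelope sketch (the bandwidth and model-size figures are assumptions I picked for illustration, not measurements from my box):

```python
# Token generation is roughly memory-bandwidth-bound: every generated token
# requires streaming (nearly) all model weights from RAM, so an upper bound is:
#   tokens/sec <= memory bandwidth / model size
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Assumed figures: dual-channel DDR4-3200 is ~51 GB/s theoretical,
# and a 7B model quantized to 4-bit is roughly 4 GB of weights.
print(max_tokens_per_sec(51, 4))  # ~12.75 tok/s ceiling, before any other overhead
```

Real throughput lands below that ceiling, but it explains why faster RAM can help even when every CPU core is already pegged.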

I just run Open WebUI in a Docker container on the host — I didn’t move it over to a VM. The models vary a lot in how much RAM they take: from about 6 GB to around 30 GB on the ones I’ve tried. And if I ever get a GPU, staying on the host means I won’t have to deal with passing it through to a VM.
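For anyone wanting to try the same setup, something along these lines should get Open WebUI running on the host (this is a sketch based on the upstream quick-start; double-check the image tag and port against the current Open WebUI docs, and the volume name here is just my choice):

```shell
# Run Open WebUI on the Docker host, persisting its data in a named volume.
# The UI listens on 8080 inside the container; map it to 3000 on the host.
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
```

Then browse to http://localhost:3000 and point it at whatever backend (e.g. a local Ollama instance) is serving the models.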
