been playing with Automatic1111 and Stable Diffusion, but to be honest my results were a bit meh!
so I'm looking for a large language model that's open source and can be run locally, preferably on the metal rather than in Docker or a venv,
and as easy to install as Automatic1111 (I don't want to have to spend days hunting down requirements).
This happens to me every time at work (I'll be staring at code until someone else suggests a fix). I like this expression; I'll need to incorporate it into my vocabulary.
Docker and venv (or conda, for that matter) still run on bare metal (unless you run them inside a VM, of course)! They don't use any virtualization and you don't lose any performance. At most disk usage is higher, and perhaps memory a hair higher too.
Running these models outside a virtual environment is generally NOT recommended, as it's really easy to land in dependency hell. Each component requires specific versions of various libraries, to the point that conflicts between different projects are inevitable. Your Stable Diffusion install will probably require different versions of e.g. PyTorch than your LLM. If you install both into your OS environment, you're already stuck.
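A minimal sketch of what that isolation looks like in practice (the directory names and version pins here are illustrative, not from either project):

```shell
# Give each project its own venv so their pinned dependencies
# never collide in the OS-level site-packages.
python3 -m venv ~/venvs/sd     # venv for stable diffusion
python3 -m venv ~/venvs/llm    # venv for the LLM

# Hypothetical conflicting pins, just to show why this matters:
#   ~/venvs/sd/bin/pip install "torch==2.0.1"
#   ~/venvs/llm/bin/pip install "torch==2.3.0"

# Activate whichever one you need; each has its own interpreter
# and its own package directory.
source ~/venvs/sd/bin/activate
python -c "import sys; print(sys.prefix)"   # path inside ~/venvs/sd
deactivate
```

Both venvs still execute on bare metal with the same Python binary underneath; the only thing that changes is which package directory gets searched first.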
Cheers for the clarification… I honestly thought it was running in a VM in software when you ran Docker.
So that's why I wasn't running stuff in Docker that used CUDA. Yeah, I know, I should hang my head.