Can anyone recommend a large language model?

I've been playing with Automatic1111 and Stable Diffusion, but to be honest my results were a bit meh.

So I'm looking for a large language model that's open source and can be run locally, preferably on the metal rather than in Docker or a venv, and as easy to install as Automatic1111 (I don't wanna spend days hunting down requirements :slight_smile:).

So, you guys got any recommendations?

Oh, I don't have a high enough trust level, so links are coded out… ooo, now I have been upgraded.

Been following along (my work has me primarily using the OpenAI API). I'd suggest:

Wendell did a guide.

Or anything near the top of Hugging Face's leaderboard.

Pick a model that performs well on your use case (i.e. image generation, code generation, or chatbots).

Bonus: I'd recommend looking into ollama. It's an app that makes it easier to download, manage, and run open-source models locally.
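To give a sense of how low-friction it is, getting started boils down to a couple of commands (a sketch based on ollama's documented CLI; the install script URL is from its site, and the model name is just an example of what's available):

```shell
# Install ollama on Linux via its official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model (downloaded on first use) and chat with it locally
ollama run llama3
```

No hunting down Python dependencies; the runtime and model management are bundled into the one tool.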

I actually came across it through


lol, how cringe is this. I've been on Hugging Face for the past week and didn't see the links…
Too busy looking at the wood to see the trees, I guess.

Anyways, thanks.


This happens to me every time at work (I'll be staring at code until someone else suggests the fix). I like this expression; I'll need to incorporate it into my vocabulary.


Docker and venv (or conda, for that matter) still run on bare metal (unless you run them inside a VM, of course)! They don't use any virtualization, and you don't lose any performance. At most, disk use is higher, and perhaps memory a hair higher too.

Running these models outside a virtual env is generally NOT recommended, as it's really easy to land in dependency hell. Each of these projects requires specific versions of various libraries, to the point that conflicts between projects are inevitable. Your Stable Diffusion install will probably require a different version of, e.g., PyTorch than your LLM. If you install both into your OS environment, you're already stuck.
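The isolation is cheap to see for yourself (a minimal sketch using Python's stdlib `venv` module; the env names here are made up):

```shell
# Each venv gets its own independent site-packages directory,
# so sd-env could hold one PyTorch version and llm-env another.
python3 -m venv sd-env
python3 -m venv llm-env

# Installing into one env (e.g. `sd-env/bin/pip install torch`)
# leaves the other env completely untouched.
sd-env/bin/python -c "import sys; print(sys.prefix)"   # prefix points inside sd-env
llm-env/bin/python -c "import sys; print(sys.prefix)"  # prefix points inside llm-env
```

Both interpreters are the same native binary running on bare metal; only the package search path differs, which is exactly what keeps the two projects' dependencies from colliding.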


Cheers for the clarification… I honestly thought Docker was running a VM in software, so I wasn't running stuff in Docker that used CUDA. :astonished: Yeah, I know, I should hang my head :wink:

Now I can, run stuff in Docker that is. :slight_smile:
