New to local AI and diving in with a Framework desktop

My 128 GB Framework desktop was delivered today, and I’m trying to decide how to set it up to play with local LLMs, etc. (Probably just LLMs at first.) A few additional caveats: I don’t have one of the nice Level1Techs KVMs, and buying this thing has blown my cool-gadgets budget until probably the middle of next year, so once I do the initial setup, it’s getting tucked into a corner with power and Ethernet, where I will just access it over my home network. (Fortunately, it’s tiny and light, so moving it somewhere to work at it locally on occasion isn’t the end of the world, but day-to-day, I’m looking for headless operation.)

As stated in the title, I’m new to local AI. (Also, might as well call my cloud AI experience extremely minimal, superficial, consumer-level stuff.) On top of that, my Linux experience is… fairly minimal and mostly outdated.

Context out of the way, 0th question: Windows or Linux? Linux. Next question.

First question: Which distro? On the occasions in the last decade or so when I’ve wanted “a Linux box,” I’ve gone with Ubuntu. However, from what little research I’ve done so far, I can tell that for this use case I’ll probably want something that keeps closer to the bleeding edge. Fedora and Arch are the two most top-of-mind for me, but I’d be willing to consider other options.

Second question: What software should I use to run the models? LM Studio? llama.cpp? Since I’m new to this (and from what I’ve seen, this doesn’t change, and may actually get “worse” with experience), I’ll probably be doing a lot of hopping from model to model and adjusting parameters. For recommendations here, keep in mind that I want to run the system headless; ideally the Framework desktop shouldn’t spend any resources on a desktop GUI.
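For what it’s worth, headless operation is the normal mode for most of these runners: llama.cpp’s `llama-server` and Ollama both run as plain network services and expose an OpenAI-compatible chat endpoint, so any machine on your LAN can talk to them. A minimal sketch of a client, assuming such a server is already running (the host, port, and model name below are placeholders, not recommendations):

```python
import json
import urllib.request

# Assumption: a local llama.cpp "llama-server" (or Ollama) is listening at
# this address and serves an OpenAI-compatible /v1/chat/completions route.
SERVER_URL = "http://192.168.1.50:8080/v1/chat/completions"


def build_chat_request(model: str, user_prompt: str) -> dict:
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }


def ask(model: str, prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # OpenAI-style responses put the text under choices[0].message.content.
    return reply["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, the same client works whether the box ends up running llama.cpp, Ollama, or LM Studio’s server mode, which makes model-hopping cheaper.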

Third question: What other questions am I forgetting to ask?

Thank you for your consideration. (And I apologize if this is already answered well somewhere. Feel free to throw a tutorial link at me and tell me to go away :stuck_out_tongue_winking_eye: )

What are you hoping to do with AI… I mean other than the obvious putting AI in the AI until the AI don’t shine.

At work I just got a subscription for Claude Code, which led me to set up VS Code to use it in our product development.

I found an extension named “Continue” that actually let me use my local install of Ollama as my provider. It was slow as heck on my small dGPU on my laptop.

So code generation is my use case. What do you want to do with it?

I also want to work out how I can use it to help me format the folder and file names of my Plex library more or less automatically.
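That Plex idea mostly comes down to prompt construction: hand the model a messy file name and ask for one in Plex’s “Show Name - S01E01 - Episode Title” convention. A hypothetical sketch of just the prompt-building half (the actual call would go to whatever local server you end up running, as would any safeguards before actually renaming files):

```python
def build_rename_prompt(messy_name: str) -> str:
    """Build a prompt asking an LLM for a Plex-convention file name.

    Illustrative only: the exact wording, and whether the model's answer is
    trustworthy enough to rename files unattended, are open questions.
    """
    return (
        "Rename this video file to Plex's naming convention "
        "(Show Name - S01E01 - Episode Title). Reply with only the new "
        f"file name.\nFile: {messy_name}"
    )
```

Reviewing the proposed names before applying them (rather than renaming automatically) would be a sensible first step, since small local models do hallucinate episode titles.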

The two things that are currently coming to mind are to use it as an element of a local voice assistant, along the lines of what NetworkChuck did here: https://www.youtube.com/watch?v=XvbVePuP7NY
And to use it for TTRPGs. Potentially as an AI DM, but much more likely as an assistant for the human DM (combating writer’s block, coming up with flavor text, etc.). Ideally it would keep everything that’s happened in the campaign in its context window, but that last part is probably a pipe dream.


I’d suggest using LM Studio and hooking it up to MCP tools via Docker Desktop. That way your local LLM can do things like search the web, potentially use tools, etc.

This is pretty trivial on the Mac; on Windows and Linux YMMV, as I haven’t tried it there yet.