On the most recent live broadcast, Wendell mentioned using a fully local AI as the voice interface for Home Assistant. I’ve been playing around with this myself for some time using Home LLM (github.com/acon96/home-llm) and various Llama models (including some fine-tuned specifically for Home Assistant), but my results have not been encouraging.
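For reference, here’s roughly what the pipeline boils down to once the speech-to-text step has produced a transcript. This is a minimal sketch, not how Home LLM does it internally: it assumes Ollama is serving a local Llama model, that HA’s REST API is reachable with a long-lived access token, and the entity IDs are made up for illustration.

```python
# Hedged sketch: map a transcribed voice command to a Home Assistant
# service call using a locally hosted model. Assumptions:
#   - Ollama is running locally and serving "llama3.1" (swap in your model)
#   - HA is at homeassistant.local:8123 with a long-lived access token
#   - The entity IDs in the prompt are hypothetical
import json
import os

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
HA_URL = "http://homeassistant.local:8123"
HA_TOKEN = os.environ["HA_TOKEN"]  # create one under your HA user profile

SYSTEM_PROMPT = """You control a smart home. Reply ONLY with JSON:
{"domain": "...", "service": "...", "entity_id": "..."}
Known entities: light.living_room, light.kitchen, switch.fan, cover.garage_door."""

def command_to_service_call(utterance: str) -> dict:
    """Ask the local model to translate an utterance into a service call."""
    resp = requests.post(OLLAMA_URL, json={
        "model": "llama3.1",
        "stream": False,
        "format": "json",  # ask Ollama to constrain the reply to valid JSON
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": utterance},
        ],
    }, timeout=60)
    resp.raise_for_status()
    return json.loads(resp.json()["message"]["content"])

def execute(call: dict) -> None:
    """Fire the proposed call against HA's REST API."""
    url = f"{HA_URL}/api/services/{call['domain']}/{call['service']}"
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"entity_id": call["entity_id"]},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    call = command_to_service_call("turn on the living room light")
    print("model proposed:", call)  # worth inspecting before trusting blindly
    execute(call)
```

In my testing the failure is almost always in the middle step: the transcript is fine, but the JSON names the wrong entity or a service that doesn’t exist, which is exactly what the HA-specific fine-tunes are supposed to fix.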
A fully-local natural-language voice interface is the holy grail of smart home engineering for me. Anyone have tips on setting this up, or know somewhere to read more?
I guess you mean that your prompts don’t result in the action you specified? That’s where I’m at as well.
With the new Samsung phones, their Gemini/Bixby assistant seems to be the first one that does what we’d want in HA as well. But give it a few weeks in people’s hands and we’ll see how good it actually is.
AI seems to be great at speech-to-text and text-to-speech, but not yet at understanding the text. Look at Apple: they disabled their notification summaries feature because the AI misunderstood things. When the big companies finally find a model that works, hopefully we’ll get one as well. I doubt it’ll be the other way around.