Yes, the technique you're describing is fine-tuning: you take a pretrained model and continue training it on new example input/output pairs.
This is a great hands-on guide that lines up with your use case.
This reply suggests Unsloth, a fine-tuning library that uses QLoRA:
https://www.reddit.com/r/LocalLLaMA/comments/18ysntg/comment/kgd7jps/
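To make "example input/output pairs" concrete, most fine-tuning workflows want your data as a JSONL file of prompt/response records. A minimal sketch (the `instruction`/`output` field names and the example Q&A content are just placeholders I made up as a common convention, not a requirement of any particular library -- check what format your trainer expects):

```python
import json

# Hypothetical Q&A pairs from your own app's docs -- replace with real data.
examples = [
    {"instruction": "How do I create a new Sales Invoice?",
     "output": "Go to Accounting > Sales Invoice > New, pick the customer, add items, then Submit."},
    {"instruction": "Where do I configure email accounts?",
     "output": "Open Settings > Email Account and create a new record with your IMAP/SMTP details."},
]

# Write one JSON object per line (the JSONL format most trainers accept).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A few hundred good pairs like this usually matters more than which library you pick.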
Alternatively, if Frappe already has detailed documentation and all you need is a chatbot that retrieves and explains that documentation, then you might consider the retrieval-augmented generation (RAG) technique.
There's a guide for this in the Hugging Face documentation.
Also, there's another thread with a similar use case.
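The core of RAG is: index your docs, retrieve the chunks most relevant to the question, and paste them into the prompt so the model answers from them instead of from memory. A toy sketch using plain word overlap as the "retriever" (a real setup would use an embedding model and a vector store; the doc names and contents here are invented for illustration):

```python
# Toy document store -- in practice, chunks of your Frappe docs.
docs = {
    "invoices.md": "To create a Sales Invoice, open Accounting and click New.",
    "email.md": "Email accounts are configured under Settings > Email Account.",
}

def retrieve(question, docs, k=1):
    # Score each doc by how many words it shares with the question.
    q_words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(question, docs):
    # Stuff the retrieved context into the prompt sent to the model.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How do I create a sales invoice?", docs)
```

The nice part for your case: updating the bot is just updating the docs, with no retraining needed.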