Adventures in coding using AI


Overall it is very interesting that Mistral/CodeLlama gets the structure correct but generates code that does not compile. I wonder whether FORTRAN/MATLAB code was part of the training dataset. If not, it might be a good use case for fine-tuning.

I found that OpenHermes on my 2070 worked well for simple Python and Linux commands.
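
For reference, here is a minimal sketch of the kind of thing I ask it, calling a local Ollama server from Python. It assumes Ollama is running on its default port (11434) and that the openhermes model has already been pulled; the prompt is just an example.

```python
import json
import urllib.request

# Minimal sketch: ask a local Ollama model for a simple Linux command.
# Assumes `ollama serve` is running and `ollama pull openhermes` was done.
payload = {
    "model": "openhermes",
    "prompt": "Write a Linux command to list the 10 largest files in the current directory.",
    "stream": False,  # return a single JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the model's answer as plain text
```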

Recently I came across twinny, a VS Code/VSCodium extension that gives me a GitHub Copilot-like interface and uses Ollama as the LLM backend.

I like to use Jupyter notebooks to test and write code, so any extension that replicates GitHub Copilot Chat in notebooks would be ideal. In the meantime, a rough sketch of what I do in a notebook cell is shown below.
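
This is only a stopgap, not a real Copilot Chat replacement: a small helper that sends a chat-style conversation to a local Ollama server from a notebook cell. The model name and the example question are placeholders.

```python
import json
import urllib.request

def ask(messages, model="openhermes"):
    """Send a chat-style conversation to a local Ollama server and return the reply."""
    payload = {"model": model, "messages": messages, "stream": False}
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Keep a running conversation in the notebook, Copilot Chat style.
history = [{"role": "user", "content": "Explain what this pandas error means: KeyError: 'date'"}]
reply = ask(history)
history.append({"role": "assistant", "content": reply})
print(reply)
```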