Waiting for a local LLM is the new coffee time for devs. It used to be compiling.
I was just looking at setting up ollama today. Is this de wey?
Meh. Depends on the machine. I'm running an M3 MBP and it's fast enough. ChatGPT is probably better, but I don't use it.
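If anyone wants to kick the tyres, here's a rough sketch of querying a local Ollama server from Python. It assumes the default setup: `ollama serve` listening on localhost:11434 and a model like llama3 already pulled; adjust names to whatever you're running.

```python
# Rough sketch: query a locally running Ollama server over its REST API.
# Assumes the default port (11434) and that a model such as llama3 has
# already been pulled with `ollama pull llama3`.
import json
import urllib.request


def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask_local_llm("Explain tail-call optimisation in one paragraph."))
```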
I tried local (Ollama/Llama) with Code plugins, but I'm finding cloud services like Replit much better. Whole-codebase context and much faster. The Replit agent is nuts.
Was thinking exactly the same today whilst waiting on my AI slave.