 Heard positive things about https://ollama.ai/library/phind-codellama 
Is there any advantage to having someone run that for you instead of running it locally for this use case?
I don't have enough RAM, and even when I try similar models small enough for my limited RAM, they're slow
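For anyone weighing remote vs. local here: Ollama exposes an HTTP API on port 11434, so pointing a client at a machine with more RAM is just a matter of changing the host. Below is a minimal sketch in Python using the standard `/api/generate` endpoint; `OLLAMA_HOST` is a placeholder you'd swap for whatever box actually runs the model.

```python
import json
import urllib.request

# Placeholder: use localhost for a local Ollama server, or the address
# of a remote machine with enough RAM if someone hosts it for you.
OLLAMA_HOST = "http://localhost:11434"

def generate(prompt: str, model: str = "phind-codellama") -> str:
    """One-shot, non-streaming request to Ollama's /api/generate endpoint."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Write a binary search function in Python."))
```

Either way the model has to fit in the server's RAM; a remote host just moves that requirement off your machine.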