What's the best LLM to run on your local machine without needing a heavy GPU?

Llama et al. seem to require a chunky GPU, but surely we're at the stage (3 years on) where we have some CPU-friendly local LLMs?
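
For context, the kind of thing I'm hoping is feasible is running a small quantized model entirely on CPU. A minimal sketch of what I mean, using the llama-cpp-python bindings (the model path is just a placeholder, not a specific model recommendation):

```python
# Sketch: run a small quantized GGUF model on CPU only.
# Assumes `pip install llama-cpp-python` and a GGUF file already downloaded;
# the path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-small-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,      # context window
    n_gpu_layers=0,  # keep everything on the CPU
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

Is something like that actually usable at a tolerable tokens/sec on an ordinary laptop, and if so, which models?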