Oddbean
Try Venice.ai and select the llama3.1 model. It's a great option when you want a big model that you can't run locally.

Otherwise, a local llama3.1 (8B, or 70B if you have the RAM) is solid.