Test the new Llama 3.2 1B/3B on any device, locally, with the web app.

I have multiple apps that use this WebGPU implementation. Decent small LLM models were the only constraint, and now we finally have one.

https://chat.webllm.ai
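
For anyone who'd rather embed the same runtime in their own app than use the hosted chat, here's a minimal sketch using the @mlc-ai/web-llm package; the exact Llama 3.2 model ID is an assumption, so check WebLLM's prebuilt model list for the current name.

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Assumed model ID -- substitute an entry from WebLLM's prebuilt model list.
const MODEL_ID = "Llama-3.2-1B-Instruct-q4f16_1-MLC";

async function main() {
  // Downloads and caches the weights, then compiles WebGPU kernels in the browser.
  const engine = await CreateMLCEngine(MODEL_ID, {
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat completion, running entirely on the local GPU.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```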

https://image.nostr.build/ecb588bd79fe9de4b61a8ea8bf6bfb8463bcd9fd8bce38350455a1b0fb8566bb.jpg 
 Would be fun to try having it power some game NPCs. 