Honestly, we don't really have any good solutions right now. It's all super centralized, with virtually no privacy.

There is hope that Chromium-based browsers might start supporting local LLMs.

Article on Chrome's experimental window.ai API: https://dev.to/grahamthedev/windowai-running-ai-locally-from-devtools-202j
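
For a rough idea of what that experimental API looked like at the time of the article: it lives behind flags in Chrome Canary and the names have already shifted between releases, so treat this as a sketch, not a stable interface.

```ts
// Sketch of the experimental window.ai prompt API as demonstrated in
// the linked article. Requires Chrome Canary with the built-in AI
// flags enabled; exact method names have changed between releases.
async function askLocalModel(question: string): Promise<string> {
  // createTextSession() spins up the on-device model (Gemini Nano).
  const session = await (window as any).ai.createTextSession();
  // prompt() runs inference entirely locally, no network round trip.
  return session.prompt(question);
}

askLocalModel("Summarize why local inference matters for privacy.")
  .then(console.log);
```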

Meanwhile, I have built a web app using WebLLM and WebGPU that lets you run an LLM locally in your browser without performance compromises (use a PC for larger models).

https://nostr-local-ai.vercel.app/
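
Under the hood, WebLLM exposes an OpenAI-style chat API and compiles model kernels to WebGPU. A minimal sketch of how an app like this drives it (the model ID here is just an example; pick any entry from WebLLM's prebuilt model list):

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Example model ID; any entry from WebLLM's prebuilt model list works.
// Smaller quantized models run on laptops; larger ones want a desktop GPU.
const MODEL_ID = "Llama-3-8B-Instruct-q4f16_1-MLC";

async function main() {
  // Downloads the weights (cached after the first run) and compiles
  // the WebGPU kernels, reporting progress along the way.
  const engine = await CreateMLCEngine(MODEL_ID, {
    initProgressCallback: (p) => console.log(p.text),
  });

  // OpenAI-style chat completion, executed entirely in the browser.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Why run LLMs locally?" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```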

Right now, I am just waiting for the right technology.