
Notes by ff1d01fe

@957492b3 @27e3ee9e yes, my understanding is that the models that run locally on consumer hardware have been trained on specific domains or shrunk with compression techniques, and are thus smaller. But, tbh, I am on the AI/ML learning curve (I’m a UXer) and ingesting a lot of the tech and terminology for the first time.
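
To make the “smaller” part concrete, here’s the back-of-envelope math as I understand it, using quantization as the compression technique. The numbers are illustrative, not tied to any particular model:

```python
# Rough memory math for why compressed models fit on consumer hardware.
# Illustrative only: a hypothetical 7B-parameter model at different precisions.
params = 7e9

bytes_per_param = {
    "fp16": 2.0,   # common full-precision baseline for released weights
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization
}

for fmt, b in bytes_per_param.items():
    print(f"{fmt}: ~{params * b / 2**30:.1f} GiB of weights")

# Prints roughly: fp16 ~13.0 GiB, int8 ~6.5 GiB, int4 ~3.3 GiB.
# Only the quantized versions fit comfortably in a typical laptop's RAM.
```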

Also interesting: 

https://github.com/KillianLucas/open-interpreter/

https://openinterpreter.com 
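
For a taste, this is roughly how its README shows driving it from Python. Treat it as a sketch, since the project is young and the API is moving fast; in particular I’m assuming the interpreter.local flag still works this way:

```python
# Sketch of Open Interpreter's Python API as shown in its README;
# the interface may have changed across versions.
import interpreter  # pip install open-interpreter

interpreter.local = True  # assumption: routes to a local model (e.g. Code Llama)
interpreter.chat("How many files are in my Downloads folder?")
```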
@957492b3 @27e3ee9e

The OSS LLM world is thriving! And lots of people are working on getting these models running locally without melting the machine :)

Check out https://bootcamp.uxdesign.cc/a-complete-guide-to-running-local-llm-models-3225e4913620
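
One common route in guides like that one is llama.cpp via its Python bindings. A minimal sketch, assuming you’ve already downloaded a quantized GGUF model file (the path below is a placeholder):

```python
# Minimal local inference with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder for whichever quantized GGUF file you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
    n_ctx=2048,  # context window size
)

out = llm("Q: What does it take to run an LLM locally? A:",
          max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```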

I don’t quite know what the term would be other than “local LLM”. However, for ML models with very specific purposes running on little boards, look up TinyML. Thriving community there as well.
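
On the TinyML side, the usual first step is shrinking a trained model down for a microcontroller. A sketch with TensorFlow Lite, assuming you have a trained model saved in "model_dir":

```python
# Typical TinyML workflow step: convert and quantize a trained model with
# TensorFlow Lite so it fits on a little board. "model_dir" is a placeholder.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# For microcontrollers, the .tflite file is then typically embedded as a
# C array and executed with TensorFlow Lite Micro.
```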
 It seems obvious to me that silicon will continue to improve to be faster with much more storage.... 
 @957492b3 @9589bde7 

People are running LLMs locally. You might be interested to look at 

https://python.langchain.com/docs/guides/local_llms
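
That guide walks through a few backends; the pattern looks roughly like this with the Ollama wrapper (assuming an Ollama server is already running a model locally):

```python
# Sketch of the LangChain local-LLM pattern using the Ollama wrapper;
# assumes `ollama run llama2` (or similar) is serving a model locally.
from langchain.llms import Ollama  # pip install langchain

llm = Ollama(model="llama2")
print(llm("Why run an LLM locally instead of via an API?"))
```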

Useful Sensors (founded by some of the Google TensorFlow folks) are working on specialised boxes: running OSS LLMs on low-cost, low-power hardware with strong data privacy.

https://usefulsensors.com/

Founder/CEO Pete Warden recently wrote about the economics of it, framing it as an inevitable shift from training to inference:

https://petewarden.com/2023/09/10/why-nvidias-ai-supremacy-is-only-temporary/

#localLLM #LLM #UX 

h/t @27e3ee9e