 @63063633 @9589bde7 @27e3ee9e 

This is amazing, thank you! I expected OSS versions, but not so quickly. Do we have the terminology to discuss the differences between these models? For example, I can imagine a small LLM running locally, but something like Bard or GPT-4 would likely be significantly larger and more compute-intensive. 
 @957492b3  @27e3ee9e 

The OSS LLM world is thriving! And lots of people are working on running these locally without melting the machine :)

Check out https://bootcamp.uxdesign.cc/a-complete-guide-to-running-local-llm-models-3225e4913620

I don’t quite know what the term would be other than “local LLM” - however, for ML models with very specific purposes running on little boards, look up TinyML. Thriving community there as well.
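
To give a concrete sense of what running one of these locally looks like, here is a minimal sketch using the llama-cpp-python bindings. The model path is a placeholder - any quantized model file the library supports will do:

```python
# Minimal local-LLM sketch using llama-cpp-python.
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point this at whatever quantized model file
# you've downloaded (a 7B model is typically a few GB on disk).
llm = Llama(model_path="./models/7b-chat.Q4_K_M.gguf")

output = llm(
    "Q: What is TinyML? A:",  # plain completion-style prompt
    max_tokens=128,           # cap the response length
    stop=["Q:"],              # stop before the model invents a new question
)
print(output["choices"][0]["text"])
```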
@63063633 @27e3ee9e I'm familiar with TinyML; it can run on RPis, but those models are significantly smaller than the industrial ones running in the cloud.

I realize there may be no clear way of measuring this yet, but even using model size as a proxy, I've heard that GPT-4 is in the terabytes.

Of course, it's not even clear we WANT something as comprehensive as that for many local tasks, but I would expect even a reasonable language model to be pretty large.
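
Some back-of-the-envelope arithmetic makes the scale gap vivid. The parameter counts below are illustrative, and the GPT-4 figure is only a rumor, but the bytes-per-parameter math is simple:

```python
# Rough memory footprint: parameters x bytes per parameter.
# Parameter counts are illustrative; the GPT-4 figure is an
# unconfirmed rumor, not an official number.
def footprint_gb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e9

for name, params in [("7B local model", 7e9), ("rumored GPT-4", 1.8e12)]:
    fp16 = footprint_gb(params, 2.0)  # 16-bit weights
    q4 = footprint_gb(params, 0.5)    # 4-bit quantized weights
    print(f"{name}: ~{fp16:,.0f} GB at fp16, ~{q4:,.0f} GB at 4-bit")
```

A quantized 7B model fits on a laptop; anything rumored to be in the trillions of parameters lands in the terabytes at full precision, which matches what I've heard.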
@957492b3 @27e3ee9e Yes, my understanding is that the models that run locally on consumer hardware have been trained on specific domains or compressed, and are thus smaller. But, tbh, I am on the AI/ML learning curve (I’m a UXer) and ingesting a lot of the tech and terminology for the first time.
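
From what I've read, "compression" here usually means quantization - storing the weights at lower precision, e.g. 4 bits instead of 16. A sketch of what loading a 4-bit model looks like with the transformers + bitsandbytes stack (the model name is just an example; any causal LM on the Hub should work):

```python
# Sketch: loading a model with 4-bit quantized weights.
# pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "openlm-research/open_llama_7b"  # example model, swap as needed

quant = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_compute_dtype=torch.float16,   # compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",  # place layers on whatever devices are available
)
```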

Also interesting: 

https://github.com/KillianLucas/open-interpreter/

https://openinterpreter.com 
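
Basic usage, going by the project's README, looks roughly like this - though the API has been changing quickly across versions, so treat it as a sketch:

```python
# Sketch of Open Interpreter's basic usage (API varies by version).
# pip install open-interpreter
import interpreter

# Asks the model to write and run code on your machine;
# it prompts for confirmation before executing anything.
interpreter.chat("How many files are in my current directory?")
```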
 @63063633 @27e3ee9e 
This is very exciting. I've been looking to volunteer my time to an OSS project, and something in this area sounds like a great fit, as technology like this always has UX issues.