@957492b3@27e3ee9e yes, my understanding is that the models that run locally on consumer hardware have been trained on specific domains or shrunk with compression techniques (e.g. quantization), so they’re smaller. But, tbh, I’m still on the AI/ML learning curve (I’m a UXer) and ingesting a lot of the tech and terminology for the first time.
Also interesting:
https://github.com/KillianLucas/open-interpreter/
https://openinterpreter.com
@957492b3@27e3ee9e the OSS LLM world is thriving! And lots of people are working on getting these running locally without melting the machine :)
Check out https://bootcamp.uxdesign.cc/a-complete-guide-to-running-local-llm-models-3225e4913620
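For a flavor of what “running a local LLM” looks like in practice, here’s a minimal sketch using the llama-cpp-python bindings (just one of several options the guide above covers; the model filename below is a placeholder, not a specific recommendation):

```python
# Minimal sketch: run a quantized LLM locally with llama-cpp-python.
# Assumes the bindings are installed (pip install llama-cpp-python)
# and a quantized GGUF model file has been downloaded; the path is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder local file
    n_ctx=2048,  # context window size
)

# Ask a question; max_tokens caps the length of the reply.
output = llm("Q: What is TinyML? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```

The quantized (e.g. 4-bit) model file is what makes this feasible on consumer hardware: it trades a little accuracy for a much smaller memory footprint.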
I don’t quite know what the term would be other than “local LLM”. As for ML models with very specific purposes running on little boards, look up TinyML; there’s a thriving community there as well.