nostr:npub1j46f9va5qdrmamcwmncew7g8mwekgma386dhws6k04fsnlkqpcpsj23gm7 nostr:npub1yl37a83lkqwwzdd6qj7clagncs543pvs2uwkzjyzmzkgwhnu6dhq6ey6aq  yes, my understanding is that the models that run locally on consumer hardware have been trained on specific domains or compressed, and are thus smaller. But, tbh, I am on the AI/ML learning curve (I'm a UXer) and ingesting a lot of the tech and terminology for the first time.
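For context, here's a minimal sketch of what running one of those compressed (quantized) models locally can look like with llama-cpp-python; the model filename is just a placeholder for whatever GGUF file you've downloaded:

# minimal local-inference sketch (pip install llama-cpp-python)
# assumes a quantized GGUF model is already downloaded; path below is a placeholder
from llama_cpp import Llama

llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Why are quantized models smaller? A:", max_tokens=64)
print(out["choices"][0]["text"])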

Also interesting: 

https://github.com/KillianLucas/open-interpreter/

https://openinterpreter.com