@957492b3 This piece by @12f6653a: https://www.theregister.com/2023/09/13/personal_ai_smartphone_future/
@7906d04d @12f6653a thanks! The article discussed putting smaller versions of LLMs on a phone; that's a first step. However, getting one tuned both to device tasks and to the user's life feels like the next big step. Having it running on a home server has even more potential.
@957492b3 @7906d04d Tuning is moving even faster than the LLMs are these days. LoRA/QLoRA means it will happen on-device, but that also means the device has to be collecting the data to use for fine-tuning. Which is a whole other way of thinking about what the device is doing.
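To make the on-device LoRA idea a bit more concrete, here's a minimal sketch of what that fine-tuning loop could look like, assuming a Hugging Face stack (`transformers` + `peft` + `datasets`); the model name and the "locally collected" text are placeholders, not anything from the thread:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "placeholder/small-llm"  # hypothetical: any small causal LM that fits on-device
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # many LLM tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of the full weights,
# which is what makes fine-tuning on a phone or home server plausible at all.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # attention projections; names are model-dependent
))
model.print_trainable_parameters()  # typically well under 1% of the base weights

# "The device has to be collecting the data": a toy stand-in for locally logged text.
corpus = Dataset.from_dict({"text": ["remind me to call Sam after work",
                                     "open the camera and switch to video"]})
tokenized = corpus.map(lambda b: tokenizer(b["text"], truncation=True, max_length=64),
                       batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("adapter-out")  # saves only the adapter weights, not the base model
```

The point of the sketch: the artifact you produce is a few megabytes of adapter weights sitting next to a frozen base model, so the collection-and-tuning loop never has to leave the device.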
@12f6653a @7906d04d I would think there are two types of fine-tuning: domain tuning (giving it the right general skills/knowledge for my device) and personal tuning (giving it the right skills/knowledge for *me*). Obviously they're interrelated. My point being that a phone can do so many things a chat interface can't, so tuning it to dial the phone, open apps, and generally do 'phone things' seems like a domain goal. Tuning it to my friends, word choice, and preferences, however, is very much a personal one.
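One way to read that split in LoRA terms: two separate adapters over the same frozen base model, one shipped with the phone for 'phone things' and one trained on-device from the user's own data. A sketch, again assuming the `peft` API; the adapter paths are hypothetical:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("placeholder/small-llm")  # hypothetical model

# Domain adapter: dialing, opening apps, device actions. Could ship with the phone.
model = PeftModel.from_pretrained(base, "adapters/phone-domain", adapter_name="domain")

# Personal adapter: contacts, word choice, preferences. Trained on-device, never leaves it.
model.load_adapter("adapters/me-personal", adapter_name="personal")

# Combine both low-rank updates into a single active adapter for inference
# (linear combination assumes the two adapters share the same rank).
model.add_weighted_adapter(adapters=["domain", "personal"],
                           weights=[1.0, 1.0],
                           adapter_name="domain+personal",
                           combination_type="linear")
model.set_adapter("domain+personal")
```

Keeping the two as separate adapters also means the domain half can be updated by the vendor while the personal half stays private to the user.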