Notes by 4acwm3y3:3qgq49ua

 // This note was AI enhanced

Like many of you, I've been incorporating Large Language Models (LLMs) into my workflow. I mainly use them to correct, enhance, or change the tone of emails I write (like this note 😉). Occasionally, because of the chat-like user experience, I feel a slight urge to say thank you to the LLM. This must be a cognitive bias, or the result of behavioral conditioning from regular human-to-human communication. In my opinion, it only constitutes a waste of resources (time, power, bandwidth...). Still, the feeling of "guilt" for leaving the LLM "on read" persists, and takes a couple of milliseconds to resolve itself.

A few questions, in no particular order:
Is this artificial guilt?
How can we mitigate this issue as developers and users?
What about LLMs designed for the classroom, like those showcased by Google at their developers' conference?
Should we go back to robot-sounding voices?
Embrace the uncanny valley?
Should we build bulletpoint-instruct versions of LLMs?
Should we lean into it?
If you go back to your conversations, how often do you say please? Does it change the response? Should this be a benchmark?
Do we already have similar experiences? Books from deceased authors?