@9709757e @953ba58b “Hallucinating” is the wrong way to think about it. The LLMs generate text. They have no access to facts or reality. They generate text based on the corpus of text on which they were trained. Often the result is plausible, and sometimes it corresponds to the truth, but that’s only a matter of probabilities. It can’t ever be relied on.
 @58fbd252 @9709757e "Hallucination" has become a term of art,. Personally, I'm fine with it as long as we recognize -- as the term obscures -- that chat AI *always* hallucinates, for the reason you say. It's just that more often than not, its hallucinations turn out to be true. 
@953ba58b @9709757e I know it’s a term of art, and I hate it. Exactly for the reason you say: it obscures the fact that the LLMs are always detached from reality, by design. The term suggests that this is abnormal or exceptional behavior.