Current AIs can't *think*?

As in: take information as a foundation and then apply logic to that information to arrive at answers to questions.

As far as I know, current AIs are merely capable of producing text that would have a relatively high probability of being uttered by a human, based on the training data. Maybe that is somewhere on a shared spectrum with thinking/reasoning, but it's far from the same thing.
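To make the "high probability based on training data" idea concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in some training text, then generates by sampling the next word in proportion to those counts. Real LLMs are neural networks over subword tokens, not count tables, but the training objective is analogous (predict the next token); the toy text and all names here are made up for illustration.

```python
import random
from collections import defaultdict, Counter

# Hypothetical toy corpus, just for illustration.
training_text = "the cat sat on the mat the cat ate the fish"

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it
    followed `prev` in the training data."""
    counter = follows[prev]
    if not counter:
        return None  # no continuation ever observed
    choices, weights = zip(*counter.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation: statistically plausible text,
# with no logic or world model behind it.
word = "the"
out = [word]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```

The output looks locally fluent because every transition was seen in training, which is exactly why "plausible text" and "reasoned answer" can be hard to tell apart from the outside.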

Or did I miss something, and are there AIs that differ significantly from the LLM approach?