Yeah, a breakthrough or a re-thinking. I'm no AI expert, but based on my understanding of LLMs, this is not what will get us to AGI. It's just n-dimensional regression, find-the-pattern-in-the-data. And don't get me wrong, it works pretty well, but we keep anthropomorphizing these models, claiming they "know" or "hallucinate" or "remember". They do none of those things, much like my line of best fit through a chart does none of those things. An LLM is a large equation for predicting the next word when given other words as input. We need a way to add actual understanding to a model, but how do we do that? No fucking clue.
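
For what it's worth, here's roughly what I mean by "a large equation for predicting the next word", scaled down to a toy bigram model in Python (made-up corpus, purely to illustrate the idea):

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a corpus,
# then predict the next word by sampling from those counts.
# No knowing, no remembering -- just a fitted mapping from
# context to next-word odds.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(predict_next("the"))  # "cat" 2/4 of the time, "mat"/"fish" 1/4 each
```

A real LLM swaps the count table for a few hundred billion learned parameters and a much longer context window, but the interface is the same: words in, probability distribution over the next word out.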