I won't speak for nostr:nprofile1qqs2tkttddq55y5ht07gpun8y8yq2vrs3lhqq79mj58vgx90fyaknpgprdmhxue69uhhyetvv9ujuam9wd6x2unwvf6xxtnrdakj7av6hxm but I'm of the same opinion, because I don't really find AI useful for me. Code-generation AI is trash; I'm much more efficient without it. Asking it questions to research a topic is also a waste of time, because you don't know whether it provided factual information, omitted something key, or just flat-out fabricated the response. I end up needing to fact-check it anyway. I do like it for exploring creative ideas, for example asking for ideas to start a project and using it as a launch pad from there, or for summarizing an email or something I've written (I don't trust summaries generated for text others have written). But I rarely ever do those things.
I couldn't agree more. Hallucination is truly the biggest problem with these models: you can never be sure whether the response they provide is correct or completely made up.

Even for coding, I prefer snippets over AI, especially when working on a serious app.

And with the current architecture, it really seems like things will keep getting more and more centralized.

I think some sort of breakthrough is required.  
Yeah, a breakthrough or a re-thinking. I'm no AI expert, but based on my understanding of LLMs, this is not what will get us to AGI. It's just n-dimensional regression, finding the pattern in the data. And don't get me wrong, it works pretty well, but we keep anthropomorphizing these models, claiming they "know" or "hallucinate" or "remember". They do none of those things, much like my line of best fit through a chart does none of those things. An LLM is a large equation for predicting the next word when given other words as input. We need to be able to add understanding to a model, but how do we do that? No fucking clue.
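
To make that "predict the next word from patterns in the data" point concrete, here's a toy sketch in Python of the idea at its absolute simplest: a bigram counter (the corpus here is made up for illustration). A real LLM is a vastly bigger learned function, but the principle is the same: pattern extraction from data, not understanding.

```python
from collections import Counter, defaultdict

# Made-up toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent follower; no "knowledge", just counts.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat", the most common pattern in the data
```

The model "predicts" that "cat" follows "the" only because that pairing shows up most often in its training data, which is exactly the line-of-best-fit point: it fits the data, it doesn't understand it.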