@4ebb1885 I'm not sure where we go from here - but in general, I feel like we should be defaulting toward caution in a way that we very much are not right now.
One technical point that I do want to push back on - I don't see how LLMs evolve from their current state of statistical bins into a "model of the world", which they currently lack. Adding more parameters and ingesting an increasingly polluted input set may simply not give any better results than today. /4
@640e31bb I agree with everything you just said
I'm not convinced LLMs can "model the world" if we keep making them bigger, and I'm very worried that we won't find a way to teach people how not to fall victim to science-fiction thinking about what these things are capable of
That's why I talk about LLMs and not "AI" - I'm trying to emphasize that this is mainly about a subset of machine learning (spicy autocomplete), not about Data/Jarvis/Skynet