How can you tell whether the AI is hallucinating in these matters? It might be okay for summarizing texts, but for everything else, not so much.
That's part of the bizarre future, isn't it: human subject-matter experts have gained such an unreliable reputation that people would rather take the risk of a potentially hallucinating AI than trust someone 🙂
Would love to find an expert in AI/physics who is okay putting up with every stupid and random question I have at odd hours of the day
I can pretend to be one for you and make all the answers up
I'm sure he also knows I can give an unconventional answer that will involve electromagnetic forces and quantum effects