nostr:npub17lgy0rj5a2nwpnyc4hup6ufpfz7wz6dzcgd3crm6fm2yd34dcz0qlk9uux nostr:npub1wf44gvmu4g6x0gwwjgrnlw0f8dxmvx7h929k057wwv8hwa8clq6snr94wn nostr:npub1nyqeg55nq5eudx30py8fgff82ensxt9j063w6chkzu4leyfjygwsr3vvvs Frankly, we could keep posting forever about what LLMs get wrong: they can't do a word count or simple calculations, they make things up, and so on. But they also do a lot of things extremely well, and they've added real value for some people. I'm more interested in those stories, without losing sight of this technology's shortcomings: how people use LLMs in their work, how they separate hallucinations from valuable output, how they verify results, etc.