Working with these LLM models just makes you more humble. Even with billions of parameters, they still fail at very simple tasks and lack true reasoning. Our human brain, on the other hand, trained on far less data, can accomplish exponentially more than these billion-parameter models. It's not even close. We still have a long way to go. https://image.nostr.build/0e4994253605b27d8ca077540c7b5156c2a9a64ef37182fce27889fc5c28ce5c.jpg
That's pretty remarkable...😳 BTW, how many "r"s in "remarkable?"😜🤣😂
Just asked my dad how many r's "raspberry" has; he said 1, then 2. Then when I said 3 he said oh. Then I read him the last bit and he didn't think about strawberry. To be fair, I have a visual cue and he is blind. Just saying that our brains are just as fallible as AI.
Venice.ai got it right. https://image.nostr.build/be3ddb1232e61a2fcbb58d70a2a595b7db7c7fb55605527f7ac8b9b158c03b30.jpg I asked it a second time and it got it wrong 🙃 https://image.nostr.build/78ab607f904c51f4a54e76c9af544903b702d9b70b769f9a4e8dcd059faa6c67.jpg
The strawberry test has become iconic. Many models are apparently "hard-coded" to avoid failing it so they don't appear flawed, but it highlights fundamental weaknesses of the LLM architecture. Playing chess reveals the same flaws.
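A minimal sketch of why the test is hard for a language model (the token split and IDs below are made up for illustration; real tokenizers vary): counting letters is trivial when you actually see characters, but the model only sees subword token IDs.

```python
# Counting characters is trivial when the characters are visible:
print("strawberry".count("r"))  # 3

# An LLM never sees characters. A subword tokenizer maps text to integer IDs
# first, so the model might receive something like this (hypothetical split):
tokens = ["straw", "berry"]      # what a tokenizer might produce
token_ids = [31242, 19772]       # made-up IDs; the model only sees these
# The letter "r" is not an explicit feature of [31242, 19772], so the model
# must have memorized spelling facts about its tokens to answer correctly.
```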
The fundamental difference is that these models can't REASON; they are simply predicting one word after another through matrix multiplication. We possess something far more interesting: an intelligence that is one of a kind. Not to mention consciousness, which is something entirely different that we don't even partially understand, let alone replicate. Nature is the greatest builder, it's not even close. nostr:nevent1qqs9zv8f9xlzx6f04wxdvqhy4aynjksfmrhm74l3qcxh42evd5p3pnqpzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtczyrr0wpmlz6va2r8e92t990ltl7kqtlrgg2u7uwgs38v4nw9dt4y06qcyqqqqqqge9k0sh
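A toy sketch of what "predicting one word after another through matrix multiplication" means; the vocabulary, sizes, and random weights here are placeholders, not any real model's:

```python
import numpy as np

# Next-token prediction reduced to its core: hidden state x weight matrix,
# then pick the highest-scoring word. No reasoning step anywhere.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]
d_model = 8

W_out = rng.normal(size=(d_model, len(vocab)))   # hidden state -> vocab logits

def next_token(hidden_state: np.ndarray) -> str:
    logits = hidden_state @ W_out                # one matrix multiply
    return vocab[int(np.argmax(logits))]         # greedy pick

state = rng.normal(size=d_model)                 # stand-in for the model's hidden state
print(next_token(state))
```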
It’s a problem with the transformer-based architecture. They are basically feed-forward networks where the “neurons” all face the same way: input layer -> inner layers -> output layer. They need a new design for more complex reasoning and eventual consciousness. Maybe liquid neural networks? Not sure what it’s going to be.
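For illustration, a minimal feed-forward stack with arbitrary sizes and random weights, showing the one-way flow described above (nothing here is a real transformer, just the "all facing the same way" idea):

```python
import numpy as np

# Activations only flow forward: input layer -> inner layers -> output layer.
# No recurrence, no persistent state between calls.
rng = np.random.default_rng(1)
layer_sizes = [16, 32, 32, 8]        # input -> two inner layers -> output
weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    for W in weights:
        x = np.maximum(x @ W, 0.0)   # each layer only "faces" the next one
    return x                          # nothing is fed back; the pass is one-way

print(forward(rng.normal(size=16)).shape)   # (8,)
```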
Yeah they are still pretty stupid but fascinating at the same time.