It's not possible. Even the end of the discussion admits it.
In the source, @Super Testnet says he doesn't really know and suggests asking a cryptographer. I have also heard this claim before, so I really don't know. AI says it is not possible.
What AI says should not matter under any circumstance whatsoever in any discussion which isn't specifically about what AI systems would say.
I agree. I used that Venice.AI tool and asked what the longest word in the BIP39 list was. It told me it was "abandon", and then, after some prompting, it told me the longest was a 10-letter word that doesn't even appear in the list. Maybe it's just that one tool, but I am thoroughly unimpressed. It may have been the same AI that told me that someone with access to the same dice could replicate the entropy, so dice rolls are not safe for generating private keys. If an AI has become sentient and is acting maliciously, this is it.
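For what it's worth, this particular claim is trivial to check yourself rather than asking an AI. A minimal sketch, assuming you have downloaded the official English wordlist (english.txt, 2048 words, one per line) from the bitcoin/bips repository:

```python
# Find the longest word(s) in a BIP39-style wordlist.
# The sample below uses a handful of real BIP39 English words for illustration;
# for the real answer, load the full english.txt file as shown in the comment.

def longest_words(words):
    """Return the maximum word length and all words of that length."""
    max_len = max(len(w) for w in words)
    return max_len, [w for w in words if len(w) == max_len]

# With the full downloaded wordlist:
# with open("english.txt") as f:
#     words = [line.strip() for line in f if line.strip()]
# print(longest_words(words))

# Small demonstration with a few real BIP39 words:
sample = ["abandon", "ability", "zoo", "absorb", "abstract"]
print(longest_words(sample))  # (8, ['abstract'])
```

Run against the full list, this shows immediately that "abandon" (7 letters) is not the longest word, and that no 10-letter word exists in the list at all.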
Context: nostr:nevent1qqspux2vrt5ppfdv0ndfw0l2guejal3l76yzu64nlfs9h0e0sn8f7pqpremhxue69uhkummnw3ez6ur4vgh8wetvd3hhyer9wghxuet59upzqrvhh6h9vl7we8r9wncudmmpym4fd82fjtp3nrj3crav2tzjwjs5qvzqqqqqqyqr2lev
You shouldn't use systems of this sort to ask questions and then believe the answer. You shouldn't believe any answer from anyone, let alone an AI system, unless it comes from a reputable source, contains a good argument, or you can verify it yourself. AI language models are useful for exactly what the name says: modelling language. For example, they can change the writing style of a paragraph. Even in that case you should verify the output every time, and it will be wrong many times, but it can still be useful with a decent success rate.