Some LLMs can identify bot-generated text, so a smarter LLM may be able to block dumber ones. A few other options:

- Bots that are not properly tagging their posts/profiles can be reported with kind=1984.
- There could be a "prove you are not a robot" challenge defined in a NIP or something similar.
- The sheer number of bytes/sec they publish may give them away.
- Messages that all look similar may give them away.
- Proof of work could slow them down.
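The last two mechanisms above already have NIPs behind them: kind=1984 reporting is specified in NIP-56, and proof of work in NIP-13 (difficulty = leading zero bits of the event id). A minimal sketch in Python; the report's content and the specific fields of the mined event are illustrative, not prescribed:

```python
import hashlib
import json

# A NIP-56 report event: kind 1984 flags another pubkey;
# "spam" is one of the report types the NIP defines.
report = {
    "kind": 1984,
    "tags": [["p", "<offending-pubkey-hex>", "spam"]],
    "content": "bot account, not tagged as such",
    # id, pubkey, created_at, sig are added when the client signs it
}

def event_id(event: dict) -> str:
    """NIP-01 event id: sha256 of the canonical serialization."""
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode()).hexdigest()

def pow_difficulty(id_hex: str) -> int:
    """NIP-13 difficulty: number of leading zero bits in the event id."""
    bits = bin(int(id_hex, 16))[2:].zfill(256)
    return len(bits) - len(bits.lstrip("0"))

def mine(event: dict, target: int) -> dict:
    """Bump a nonce tag until the id has >= target leading zero bits."""
    nonce = 0
    while True:
        tags = [t for t in event["tags"] if t[0] != "nonce"]
        event["tags"] = tags + [["nonce", str(nonce), str(target)]]
        if pow_difficulty(event_id(event)) >= target:
            return event
        nonce += 1
```

A relay that wants to rate-limit bots can reject events below some difficulty threshold: honest clients pay a small one-off mining cost per note, while a flood of notes gets expensive fast.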
That makes sense!