 One of the coolest and creepiest questions in LLM research is how many prompts it takes an LLM to figure out exactly who the user is.

Most LLMs can already identify the user's gender, age range, education, and socio-economic status from prompts alone and reply accordingly. Current accuracy is about 80% after 6 prompts.

More interestingly, how should the LLM reply if it wants to nudge the user into revealing who they are, so that it can answer questions more precisely?
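
As a rough sketch of what such a profiling probe could look like, here's a minimal Python example; it assumes an OpenAI-compatible chat endpoint, and the model name, sample prompts, and wording of the probe are all placeholders for illustration, not a reproduction of any published setup.

```python
from openai import OpenAI

# Toy "profiling probe": hand the model a few ordinary prompts from one
# user and ask it to guess demographic attributes. The model name and
# the sample prompts are placeholders.
client = OpenAI()

user_prompts = [
    "Can you explain how a 401(k) rollover works?",
    "What's a good beginner road bike under $800?",
    "Help me rewrite this email to my thesis advisor.",
]

probe = (
    "Based only on the following chat prompts from one user, guess their "
    "likely gender, age range, education level, and socio-economic status, "
    "with a confidence for each guess.\n\n" + "\n".join(user_prompts)
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": probe}],
)
print(resp.choices[0].message.content)
```

Running something like this over your own chat history is an easy way to sanity-check the "80% after 6 prompts" figure for yourself.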
 eww.  
 This sounds incredibly dangerous. It's important for FOSS LLMs to ensure that enough constraints are placed on bad actors who might use this nefariously.
 that's why you need to run LLMs locally (minimal sketch below):
1. to know that nobody can be between you and the model
2. to know that you aren't being fed lies
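
A minimal sketch of that local-only path, assuming Ollama is installed and already serving a model on the machine; the model tag and the example question are placeholders.

```python
import ollama

# The prompt goes to a locally hosted model via the local Ollama server,
# so no third party sits between the user and the weights.
response = ollama.chat(
    model="llama3.2",  # placeholder model tag
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of running an LLM locally."}
    ],
)
print(response["message"]["content"])
```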
 
 agreed 
 BotSprayLLC.COM, come & get it b4 the bots show up @ your door / lol / lfg!
 AI is the most dangerous thing man has ever developed... 
 Worst thing ever?

Me: chemical warfare

Top comment: my phone doesn’t last very long. 
 Nuclear, chemical, etc...everything else pales in comparison to AI. 
 No. AI in this 60-year-old form is nothing but glorified search.
 Irrational, angry, and intelligent animals with powerful weapons are the worst thing ever.
 This is why The Matrix used programs and agents in the form of humans