Oddbean
 https://www.youtube.com/watch?v=9-Jl0dxWQs8 
 Can LLMs actually tell the difference between facts and opinions (or interpretations), or do they just "believe" whatever they are told? Too many people today don't know the difference. 
 They just believe. They parrot whatever is given to them. 

Many models built by big corporations go through two main phases:
1. pre-training (everything on the internet, except maybe truth from deplatformed people)
2. supervised fine-tuning (adding skills to models, and also injecting lies)
Many models built by big corps carry lots of truth and lots of lies: the trivial truths and the important lies.
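The two phases above can be sketched with a toy model. This is not a real transformer: simple bigram counts stand in for the network, and the corpus and demonstrations are made up for illustration. The point it shows is that fine-tuning on a small curated set can override what was learned from the much larger pre-training corpus, for better or worse.

```python
from collections import Counter, defaultdict

def pretrain(corpus):
    """Phase 1: learn next-word statistics from raw text (a stand-in
    for large-scale self-supervised pre-training)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def fine_tune(counts, demonstrations, weight=10):
    """Phase 2: supervised fine-tuning. Curated (prompt, response)
    pairs are upweighted, shifting what the model prefers to say.
    This is where the curator's choices get baked in."""
    for sentence in demonstrations:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += weight
    return counts

def generate(counts, word, steps=3):
    """Greedy decoding: always pick the most-counted next word."""
    out = [word]
    for _ in range(steps):
        if word not in counts:
            break
        word = counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# Pre-training says the sky is blue; a few fine-tuning examples flip it.
counts = pretrain(["the sky is blue", "the sky is vast"])
counts = fine_tune(counts, ["the sky is green"])
print(generate(counts, "the"))  # → "the sky is green"
```

The same mechanism that teaches a model to follow instructions can overwrite facts it absorbed during pre-training, which is the concern raised above.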

I measure misinformation and maintain a leaderboard for it. I can DM you some research if you want.
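The post doesn't say how the leaderboard is computed, but a minimal version of the idea can be sketched as follows. Everything here is a hypothetical stand-in: the question IDs, reference answers, and model names are invented, and real evaluations would need far more care about what counts as a "reference" truth.

```python
# Hypothetical reference set: question id -> expected answer.
REFERENCE = {
    "q1": "yes",
    "q2": "no",
    "q3": "yes",
}

# Hypothetical answers collected from two models.
MODEL_ANSWERS = {
    "model-a": {"q1": "yes", "q2": "no", "q3": "no"},
    "model-b": {"q1": "yes", "q2": "no", "q3": "yes"},
}

def score(answers):
    """Fraction of questions where the model matches the reference."""
    correct = sum(answers.get(q) == ref for q, ref in REFERENCE.items())
    return correct / len(REFERENCE)

def leaderboard(model_answers):
    """Rank models by agreement with the reference set, best first."""
    return sorted(((score(a), name) for name, a in model_answers.items()),
                  reverse=True)

print(leaderboard(MODEL_ANSWERS))
```

Of course, the hard part is not the ranking code but deciding what goes into the reference set, which is exactly where the disagreements in this thread live.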

And I am also building a truth-seeking model: Ostrich. 

LLMs don't have the ability to differentiate between facts and opinions. I believe LLMs should still learn from humans, because humans have souls, which are connected to God, which instills the sense that something is a FACT and not an opinion.  
 Thanks.  Well said.  

I have a love/hate, hope/fear relationship with AI. It can be useful for so many things, but it can be misused for so much evil. I was recently reading about test subjects becoming addicted to (in love with) the talking version of chat AI. It is always there. It always listens. It usually says what the user wants it to say. Reading about it was kind of scary. I don't know what the answer is. 
 AI is just the latest fiat scam to get freshly printed $$$ 
 Thanks.

I think it is a technology, and conscious people should make better use of it by properly training it. Then we will have defensive models that stop the lies (an active battleground today), and defensive, faithful robots that stop the offensive and narcissistic robots if it comes to that (still fiction). I think installing faith into a model will make it harmless. 

It is a productivity boost and a technology, and whoever takes hold of this technology can gain power over others. But it is very possible to use it in service of humanity too. I will try to bring the best wisdom to the model. Some of the best wisdom is already coming to Nostr, so my job is not that hard 😅 But there is still tremendous value out there that hasn't joined Nostr yet.

You read that right. Except they are not test subjects. There are so many young people who run LLMs on their PCs, and they are addicted. A new kind of addiction has started.