nostr:npub1d62z0nl8twfw37nrdr3cfrr66pq8a3nclmmkqp6prrtqgjjen85spvtshf the way I think of it is: if you ask a generative LLM to write a five-paragraph essay about how Tom Sawyer is actually an anarcho-communist, you're going to get five paragraphs that look like they're doing that. 

it won't actually argue the point. and it definitely won't argue the point with supporting research you dug up from books of literary analysis at the library. 

(why did they make us do this in English class, anyway?)

It might invent a citation that appears to support the argument, but the citation itself won't be real. 

It's emitting text that looks like what you want. 

This seems to make more sense with graphical output: a picture that looks like Garfield but as an oil painting of a real orange cat just is a picture of Garfield as an oil painting of a real cat. for images, looking like what you asked for is the same as being what you asked for. 

A fake argument with nonexistent citations? that's just HARM. that's unintentional misinformation.