@Vitor Pamplona this is a really confusing way to show the comments under notes. At first I thought it was a quote. Just FYI https://void.cat/nostr/0962627ba3406af4c27c1ab3a352982e2b53f588eb00d5bb8be2f34a795d5517.png 
 What should it be instead?  
 Def not the quote-like reply.  
 The devs of the biggest clients should come up with a design and make it the standard.  
 Well, keep thinking. We are all open source. The community needs help to think through, redesign and implement changes like these.  
 On Damus it was much clearer. Also, you overcomplicated the relay settings. No way normies will be using your client 
 Well, your 200k users can't all be devs :) 

To the best of my knowledge, Damus doesn't show the post people are replying to in that screen. That's definitely not better.  
 Okay, how about clicking on a note and then showing all of the comments? Your client's visual interface is built from a dev's point of view 
 Isn't that what it does already? You can click on the note to see the thread.  
 yeah, I go back and forth on this. do we do it like twitter and just put inline replies in threads? I had that on damus web and it cluttered the timeline a bit, but it definitely adds more context. I don’t mind the quote block thing that you do 🤷‍♂️ 
 I have a feeling that we will end up with an LLM-based summary. Something like:

@1bf4f5f2 compared reply designs
@Vitor Pamplona mentioned that having the quote is better than not having it
@jb55 replies: <post>
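
A minimal sketch of how a client might build the prompt for a summary like that from an already-loaded thread. `Post` and `build_summary_prompt` are illustrative names, not real notedeck or Amethyst APIs:

```rust
// Illustrative sketch only: `Post` and `build_summary_prompt` are
// made-up names, not actual client APIs.
struct Post {
    author: String,  // display name or npub prefix, e.g. "@1bf4f5f2"
    content: String,
}

/// Turn an already-loaded thread into a one-line-per-reply summary prompt.
fn build_summary_prompt(thread: &[Post]) -> String {
    let mut prompt =
        String::from("Summarize each reply below in one line, as '@author did X':\n\n");
    for post in thread {
        prompt.push_str(&format!("{}: {}\n", post.author, post.content));
    }
    prompt
}
```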
 
 I already put together a demo of this! I plan on adding it to notedeck. I just need a bunch of supporting code, like downloading and running the model. LLMs excel at summarization, even dumb ones, so it makes sense. 
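
For flavor, one way "running the model" could look is shelling out to a local llama.cpp build. This is a hedged sketch, not the actual demo: the binary name, flags, and model path are assumptions:

```rust
use std::process::Command;

// Sketch only: assumes a llama.cpp build is on PATH. The binary name,
// flags, and model path are assumptions, not the demo's actual code.
fn summarize_locally(prompt: &str) -> std::io::Result<String> {
    let output = Command::new("llama-cli")
        .args(["-m", "models/small-summarizer.gguf", "-p", prompt])
        .output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}
```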
 Can it run on the phone directly? If the phone already has the thread loaded, then it could easily generate that summary. I tested Gemini on Android and it works, but it gets quite expensive and it's not private at all. 

Future is bright. 
 Yes, the model I tested was phone-sized! There are some “smallish” models (~2 GB) whose parameters can be sampled with the memory available on phones (in theory). I noticed summarization quality was quite good even on very small models. Notedeck is desktop, though, so I will likely try there first. 
 Nice! I wonder how well phones will handle these models. Android pushes apps to stay under 512 MB. But maybe they can offer a single, larger model that is pre-loaded at all times for all apps, similar to how they handle translation models these days. 

But yeah, I think testing on the desktop will be the way to go for now. Phone libraries will change so much that I am not sure it's worth trying something like that on phones right now.  
 This is the future, fren. Damus will be my all-around web buddy if a local LLM pre-chews some of the hard-to-swallow bits for me. I'll even buy the new Pixel (CalyxOS) and figure out how to disable BLE just for this. Make sure you leave room for custom LLM endpoints though, since some of us run a private remote LLM to the phone and would like to continue. Just a thought. Great news though. Keep grinding 
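
One common shape for such custom endpoints is a user-configured URL speaking an OpenAI-compatible chat completions API. A rough sketch, assuming that protocol; the endpoint path, model name, and crate choices (reqwest with the json feature, serde_json) are assumptions:

```rust
use serde_json::json;

// Sketch: assumes the user's private server exposes an OpenAI-compatible
// /v1/chat/completions endpoint. Model name and path are illustrative.
async fn summarize_remote(endpoint: &str, prompt: &str) -> Result<String, reqwest::Error> {
    let resp: serde_json::Value = reqwest::Client::new()
        .post(format!("{endpoint}/v1/chat/completions"))
        .json(&json!({
            "model": "local-model",
            "messages": [{ "role": "user", "content": prompt }]
        }))
        .send()
        .await?
        .json()
        .await?;
    Ok(resp["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default()
        .to_owned())
}
```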
 I am migrating to GrapheneOS and therefore using both Damus and Amethyst, and it’s a really hard call how to design this. Both have positives and negatives. It’s not as easy as one might think. I’ve been pondering for days on what the best thread method is.