Of course kind 30023 should never have been shown. Editing a document is mostly an internal affair; tracking real-time changes is a quite specific need that should require a dedicated tool.
 Replaceable events are stupid, that is the root of the main problem here. That aside, I agree with nostr:nprofile1qythwumn8ghj7cnfw33k76twv4ezuum0vd5kzmp0qyt8wumn8ghj7etyv4hzumn0wd68ytnvv9hxgtcprpmhxue69uhkv6tvw3jhytnwdaehgu3wwa5kuef0qqs8hhhhhc3dmrje73squpz255ape7t448w86f7ltqemca7m0p99spgk7dgad  that a way to announce or advertise a blog post would be better than picking the posts up directly. 
 yes, there's this amazing technology called "diffs" that would be more appropriate 
I don't understand why you hate them so much. What is your proposal? To do "delta" events? I don't think that would work at all unless you assume you're always getting all the events in order, which you definitely can't assume in Nostr. What do you think?

Not to mention the bandwidth and storage issues that come with this, and the client-side cost of constantly processing a big chain of diffs live.

Of course replaceable events are not theoretically perfect, but they work pretty well as long as you don't overuse them. 
 Yeah, it's just that they break the event sourcing paradigm. If you stick with that, everything else can be solved using some optimization or other. Reactions are far worse bandwidth-wise than diffs or edit events. It would have been better to do some sort of edit event, but that ship has probably sailed. 
nostr:nprofile1qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qgmwaehxw309aex2mrp0yh8wetnw3jhymnzw33jucm0d5hsz9thwden5te0wfjkccte9ejxzmt4wvhxjme0qqsrhuxx8l9ex335q7he0f09aej04zpazpl0ne2cgukyawd24mayt8gfnma0u I re-broadcasted my note. To actually answer your question, probably the best solution would be a full in-place "update" event as a separate kind with an `e` tag pointing to the original "create" note. This way you don't have to trace a chain of diffs, just look at the timestamp, and you get verb semantics. This would only be a problem if a blog post had a million revisions, like if a client spammed a live draft as revisions. 5-10 revisions is a lot for a blog post, and easy to process.
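A minimal Go sketch of that create/update idea (the kind numbers and field names here are hypothetical, not from any NIP; a real client would use a library like go-nostr):

```go
// Sketch: an "update" event e-tags the original "create" event; the current
// content is whichever update is newest, falling back to the create itself.
package main

import "fmt"

type Event struct {
	ID        string
	PubKey    string
	CreatedAt int64
	Kind      int
	Tags      [][]string
	Content   string
}

// latest returns the content of the newest update that e-tags the create
// event, or the create's own content if no update is found.
func latest(create Event, updates []Event) string {
	best := create
	for _, u := range updates {
		for _, t := range u.Tags {
			if len(t) >= 2 && t[0] == "e" && t[1] == create.ID && u.CreatedAt > best.CreatedAt {
				best = u
			}
		}
	}
	return best.Content
}

func main() {
	create := Event{ID: "abc", Kind: 31234, CreatedAt: 100, Content: "v1"}   // hypothetical "create" kind
	update := Event{ID: "def", Kind: 31235, CreatedAt: 200, Content: "v2",   // hypothetical "update" kind
		Tags: [][]string{{"e", "abc"}}}
	fmt.Println(latest(create, []Event{update})) // prints "v2"
}
```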

I do still think there's a place for "annotations" that clients can display in a privileged position (the use case being  updates at the bottom of a blog post, corrections, etc). Diffs are way more complex, and dependent on each other, but also probably unnecessary for blog posts. 
I like this idea. I previously worked on graph systems, and this is how we handled revisions: changes were stored in new nodes with edges back to the original. This model works well with event sourcing.
Imagine you're browsing a feed and your client is fetching metadata for all those people on the fly. You can either fetch one event for each, or fetch 20 events for each and reduce them to a metadata object by applying diffs locally. Do you really think the second is the best solution in the real world?

Also, since you're just fetching random people's profiles you cannot ever be sure (or how could you be?) that their sequence of diffs is complete. You would actually need a blockchain to be sure. 
We already can't make it work with replaceable events; imagine having to fetch an unreliable stream of events from an unspecified location.


https://image.nostr.build/f7c47b6b2e14db8ef0416ff2285d4987dae2c01ba305f267fb4b23e908ec8eb7.jpg 
 I wouldn't use diffs, I would just grab the most recent event and use that. You might occasionally miss the correct one, but with caching you'll eventually find it and hang on to it. DVMs can do a lot of the heavy lifting with bigger caches for stuff like this to reduce computation and bandwidth client-side. In my mind, DVMs are just nostr clients that run on a server. 
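A sketch of that newest-wins caching approach (the Event type and cache key are simplified for illustration; real code would also key on the d-tag for parameterized replaceable events):

```go
// Keep only the newest event seen per author+kind; older or duplicate copies
// are ignored, so the cache converges on the correct latest version over time.
package main

import "fmt"

type Event struct {
	ID        string
	PubKey    string
	Kind      int
	CreatedAt int64
	Content   string
}

type Cache struct{ byKey map[string]Event }

func NewCache() *Cache { return &Cache{byKey: make(map[string]Event)} }

func (c *Cache) Put(ev Event) {
	key := fmt.Sprintf("%s:%d", ev.PubKey, ev.Kind)
	if old, ok := c.byKey[key]; !ok || ev.CreatedAt > old.CreatedAt {
		c.byKey[key] = ev // newer event wins; everything else is dropped
	}
}

func (c *Cache) Get(pubkey string, kind int) (Event, bool) {
	ev, ok := c.byKey[fmt.Sprintf("%s:%d", pubkey, kind)]
	return ev, ok
}

func main() {
	c := NewCache()
	c.Put(Event{ID: "1", PubKey: "npub", Kind: 30023, CreatedAt: 100})
	c.Put(Event{ID: "2", PubKey: "npub", Kind: 30023, CreatedAt: 200})
	latest, _ := c.Get("npub", 30023)
	fmt.Println(latest.ID) // "2": the most recent event is kept
}
```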
 How is doing "update" events any different than bare replaceable events? You still get a single event in the end that replaces the previous ones, right? 
Yes, but you have an event id that serves as the handle, rather than an `a` tag. Replies would tag the update event, which tags the create event. Since the updates don't replace the create, the create is still accessible, so you can pull all events, or just events for a given revision. No data is being deleted, so clients don't have to guess. Right now, replies that only carry the `e` tag of a given revision get lost when the post is updated.
NIP-28 used this approach, which I believe I had a big part in developing at the time. Now to me it looks ugly, dirty, disgusting, a very bad idea. I still don't get why referencing an initial event is better than using the "d" tag. Both are arbitrary strings ultimately.

About not losing history, again, that's the same point from before: it has costs.

Also if these multiple versions were treated the same way normal events are today it would break the relay query language: if you wanted to fetch statuses from multiple people, for example, you would end up getting multiple old statuses for the same person and none from others who hadn't updated in a while -- and so on.
 And then again it's not very clear what we're getting from this.

For example, in the "update" event approach the same problem with contact lists remains: one client can overwrite an update event from another client and people lose part of their contact lists.

In the case of "delta" events then you must ensure that you have the full history, which means you must know the exact relays to where a person is publishing their deltas -- but if you are diligent enough to know that and you have successfully written more complex software able to handle that, then why can't you do the same for replaceable events today and fetch the damn last-updated contact list from a relay that you know will always have the last version before replacing it?

I think your suggestion of having replaceable events + delta events (I don't remember the details) could have been a better approach actually, as it would preserve the best aspects of all worlds, but I'm not sure about the implementation complexity of it. 
 Your point about queries getting duplicates which crowd out some desired results is a good one. You could technically send one filter per pubkey with limit 1.
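Sketch of that workaround, building NIP-01 REQ filters by hand (a real client would use a library like go-nostr, but the JSON shape is the same; kind 30315 is the NIP-38 status kind used as an example):

```go
// One filter per pubkey with limit 1, so the relay returns each author's
// newest matching event rather than the globally newest N events.
package main

import (
	"encoding/json"
	"fmt"
)

type Filter struct {
	Authors []string `json:"authors,omitempty"`
	Kinds   []int    `json:"kinds,omitempty"`
	Limit   int      `json:"limit,omitempty"`
}

func perAuthorFilters(pubkeys []string, kind int) []Filter {
	filters := make([]Filter, 0, len(pubkeys))
	for _, pk := range pubkeys {
		filters = append(filters, Filter{Authors: []string{pk}, Kinds: []int{kind}, Limit: 1})
	}
	return filters
}

func main() {
	// A NIP-01 REQ message is ["REQ", <subscription id>, <filter>, <filter>, ...]
	req := []interface{}{"REQ", "statuses"}
	for _, f := range perAuthorFilters([]string{"pubkey1", "pubkey2"}, 30315) {
		req = append(req, f)
	}
	b, _ := json.Marshal(req)
	fmt.Println(string(b))
}
```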

Lists should not have create/update like blog posts, they should instead have set/add/remove, which combines diffs and replaces in a conflict-free way.
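A sketch of what set/add/remove could look like (the verbs and the idea of carrying them as separate events are hypothetical here; the point is that applying the operations in a deterministic order converges to one list even when several devices write concurrently, as long as all the operations eventually arrive):

```go
// Set/add/remove list operations folded into a final list: "set" replaces the
// whole list, "add" and "remove" modify it, applied in created_at order.
package main

import (
	"fmt"
	"sort"
)

type ListOp struct {
	CreatedAt int64
	Verb      string   // "set", "add", or "remove" (hypothetical verbs)
	Items     []string // e.g. followed pubkeys
}

func apply(ops []ListOp) map[string]bool {
	sort.Slice(ops, func(i, j int) bool { return ops[i].CreatedAt < ops[j].CreatedAt })
	list := map[string]bool{}
	for _, op := range ops {
		switch op.Verb {
		case "set":
			list = map[string]bool{}
			for _, it := range op.Items {
				list[it] = true
			}
		case "add":
			for _, it := range op.Items {
				list[it] = true
			}
		case "remove":
			for _, it := range op.Items {
				delete(list, it)
			}
		}
	}
	return list
}

func main() {
	ops := []ListOp{
		{CreatedAt: 1, Verb: "set", Items: []string{"alice", "bob"}},
		{CreatedAt: 2, Verb: "add", Items: []string{"carol"}},  // from device A
		{CreatedAt: 3, Verb: "remove", Items: []string{"bob"}}, // from device B
	}
	fmt.Println(apply(ops)) // map[alice:true carol:true]
}
```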

From the perspective of event sourcing, projections should be a different layer from events. We have all this weird awkwardness because we have only one layer. I'm planning to work on some basic layer 2s via DVMs in the next month or so. 
 This is just a test

nostr:nevent1qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qgmwaehxw309aex2mrp0yh8wetnw3jhymnzw33jucm0d5hsz9thwden5te0wfjkccte9ejxzmt4wvhxjme0qy28wumn8ghj7un9d3shjctzd3jjummjvuhszymhwden5te0wp6hyurvv4cxzeewv4ej7qguwaehxw309a3ksunfwd68q6tvdshxummnw3erztnrdakj7qgnwaehxw309ahkvenrdpskjm3wwp6kytcpr9mhxue69uhhyetvv9ujuumwdae8gtnnda3kjctv9uqzqv339nqqsvqqg3ytqnwevaqx2gx3yyej9hr2sjaus82xd089uy3d8ppn9a 
Mutable events don't mix well with event sourcing models. I agree with you, projections or materialized views are another layer. I wonder if the caching relay approach Primal is using is effectively this second layer.
It's one way to do it, but sort of centralized, because it assumes you're running a cache for the whole network. I can't imagine it would be easy to compose multiple caches with coverage over different parts of the network. What's needed is a better interface that lets you query particular relays ad hoc, or compose the results from multiple caches.
I don't think it requires the cache being global, although that's the approach Primal went with. I'm imagining a second process running near your relay that's responsible for projecting the current state, e.g. the latest revision of a note. This second layer could expose the same API as a standard nostr relay to make clients just work. You could also suck this second-layer logic into your relay implementation, depending on your scale/reliability/ops requirements.
In a large-scale distributed system analogy, new events would hit a queue. One consumer group would write the raw events to storage; another consumer group would update the projected state in the second layer when applicable.
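A sketch of that split, with a channel standing in for the queue (the "update e-tags the original" convention and the projection key are illustrative assumptions, not a spec):

```go
// Two "consumer groups" over one event stream: one archives raw events
// untouched, the other maintains a projection of the latest revision per note.
package main

import "fmt"

type Event struct {
	ID        string
	CreatedAt int64
	Content   string
	Tags      [][]string
}

// originalID returns the e-tagged original event id, or the event's own id.
func originalID(ev Event) string {
	for _, t := range ev.Tags {
		if len(t) >= 2 && t[0] == "e" {
			return t[1]
		}
	}
	return ev.ID
}

func main() {
	events := make(chan Event, 16)
	raw := []Event{}                 // consumer 1: append-only archive
	projection := map[string]Event{} // consumer 2: latest revision per note

	go func() {
		events <- Event{ID: "a", CreatedAt: 1, Content: "v1"}
		events <- Event{ID: "b", CreatedAt: 2, Content: "v2", Tags: [][]string{{"e", "a"}}}
		close(events)
	}()

	for ev := range events {
		raw = append(raw, ev) // archive every event as-is
		key := originalID(ev)
		if cur, ok := projection[key]; !ok || ev.CreatedAt > cur.CreatedAt {
			projection[key] = ev // project only the newest revision
		}
	}
	fmt.Println(len(raw), projection["a"].Content) // 2 v2
}
```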
 Yeah, I think I agree, the query interface just needs to be carefully designed so that multiple caches can be queried simultaneously and reconciled. See the discussion around COUNT. 
 Nothing prevents relays from storing all revisions of a replaceable event today. Would you want to do it if you were running a relay? I wouldn't, unless the user was being charged for each post individually. Would the average user be happy to pay for storing every single edit they made in a post or on their profile or every time they switched to listening to a new song and that updated their NIP-38 status? 
I think a history of statuses over time would be cool. I didn't even realize that was a replaceable event. A history of profile edits would be less useful, but how often do you update your profile? The ratio of kind 1s to kind 0s would probably be around 100.

Replaceable events are "good" for infrequently updated things, because otherwise you run into collisions from multiple devices updating the same list or what have you, which means the volume isn't significant. But if the volume is significant, replaceable events start to break.
 I agree, but that is very different from "replaceable events are completely stupid", which was your take yesterday. 
 It's still my take 
I'd argue that the best option is for updating a note to be done by deleting it and sending it again as new. Otherwise, clients might say "I already have this note, no need to re-download it".
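A sketch of that flow: per NIP-09, a deletion request is a kind 5 event whose `e` tags reference the events to delete, and the edited content then goes out as a brand-new event (signing and publishing are omitted here):

```go
// Edit-by-replacement sketch: publish a kind 5 deletion request for the old
// note, then publish the edited content as a completely new event.
package main

import (
	"fmt"
	"time"
)

type Event struct {
	Kind      int
	CreatedAt int64
	Tags      [][]string
	Content   string
}

func editNote(oldID, newContent string) (deletion Event, replacement Event) {
	now := time.Now().Unix()
	deletion = Event{
		Kind:      5, // NIP-09 deletion request
		CreatedAt: now,
		Tags:      [][]string{{"e", oldID}},
		Content:   "superseded by an edited version",
	}
	replacement = Event{
		Kind:      1, // a fresh note with a fresh id
		CreatedAt: now,
		Content:   newContent,
	}
	return deletion, replacement
}

func main() {
	del, repl := editNote("oldid", "fixed typo")
	fmt.Println(del.Kind, repl.Content) // 5 "fixed typo"
	// Clients that honour the deletion drop "oldid" and fetch the new note,
	// so "I already have this note" caching no longer hides the edit.
}
```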
 Personally, I don't dislike replaceable events, they seem a fairly logical transposition of "revisions".
I suppose replaceable events are less fragile than a diff chain; all it takes is one missing event in a chain and the update is aborted, or am I missing something?
Instead with replaceable events we can lose old updates (accidentally or through relay pruning) and still have the final version.
However my perspective for now is too theoretical, I need to deal with more edge cases. 
i think that replacing is a good thing, but versions of events sounds like an even better solution

diffs are a good intermediate for minimising data, but they need to be checkpointed, and being events they have to be signed by the authors, so any protocol involving diffs also needs to consider checkpoints...

hey, there's no reason why checkpoints can't literally be collections of the whole original and its modification chain; that can be a meta type that wraps them into a bundle, which doesn't require any signatures but isn't itself an event
we already tie replies together, why should this be any different for updates of an event... they are a type of reply from the author, and when you request the original, that should pull the revision history up to the most recent; at some point the relays would say "ok, the composite document is way smaller than the bundle of updates, can we have a checkpoint pls", and as the old stuff ages just keep checkpoints with edits on them, same same
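A rough sketch of that bundle idea, where a relay collapses an original plus its edit chain into a checkpoint; since no diff format is specified in this thread, the composition step here is just newest-wins, purely for illustration:

```go
// Checkpoint sketch: bundle an original event with its edit chain, and
// collapse old history into the composed document once the chain grows.
package main

import (
	"fmt"
	"sort"
)

type Event struct {
	ID        string
	CreatedAt int64
	Content   string
}

// Checkpoint is a relay-side wrapper, not a signed event.
type Checkpoint struct {
	OriginalID string
	Composed   string  // document with all edits applied
	Edits      []Event // only the most recent edits kept verbatim
}

func collapse(original Event, edits []Event, keep int) Checkpoint {
	sort.Slice(edits, func(i, j int) bool { return edits[i].CreatedAt < edits[j].CreatedAt })
	composed := original.Content
	if len(edits) > 0 {
		composed = edits[len(edits)-1].Content // newest-wins stand-in for diff application
	}
	if keep > len(edits) {
		keep = len(edits)
	}
	return Checkpoint{OriginalID: original.ID, Composed: composed, Edits: edits[len(edits)-keep:]}
}

func main() {
	orig := Event{ID: "a", CreatedAt: 1, Content: "v1"}
	edits := []Event{{ID: "b", CreatedAt: 2, Content: "v2"}, {ID: "c", CreatedAt: 3, Content: "v3"}}
	cp := collapse(orig, edits, 1)
	fmt.Println(cp.Composed, len(cp.Edits)) // v3 1
}
```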
 Events should be verbs, not nouns.

nostr:nevent1qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qgmwaehxw309aex2mrp0yh8wetnw3jhymnzw33jucm0d5hszymhwden5te0danxvcmgv95kutnsw43z7qg4waehxw309aex2mrp0yhxgctdw4eju6t09uq3gamnwvaz7tmjv4kxz7tpvfkx2tn0wfnj7qgnwaehxw309ac82unsd3jhqct89ejhxtcpr3mhxue69uhkx6rjd9ehgurfd3kzumn0wd68yvfwvdhk6tcpr9mhxue69uhhyetvv9ujuumwdae8gtnnda3kjctv9uqzpfl84eat783hhc0ejtjahgfq6vjlzw8h8jcmxsetw6al8vpwydjx2f7llx 
Is this really your answer to my honest question, just bravado?
I sent a real answer but it got lost. Reactions are more bandwidth-intensive than revisions, so I don't think that applies. Event sourcing requires replaying lots of events, but that can be optimized without adulterating the data model (DVMs, maybe).
 Thank you for your answer, but it was ineffective since I am also against reactions. 
 I liked this note 
 I liked this note. 
 I liked this note 
 I liked this note more. 
 Really? Or are you just trying to make a point? 
I second that; that's how we design event-based systems in computer science, like "event sourcing" databases, git commits ("commit" is a verb), or commutative CRDTs (CmRDTs).

"Verbs not nouns" isn't bravado, it's a good design philosophy to have, a rule of thumb to check whether you're doing it right. 
 I think you didn't think about the problem for even a minute and you're just repeating some arcane academic knowledge that doesn't apply. 
it's also the same principle that suggests OOP is nonsense because it's based on nouns instead of verbs, and why functions as values make so much sense... and then interfaces, as opposed to object hierarchies, which are far more flexible due to composition; again, every element there is an action, not a thing
 Spot on. That's why functional programming is such a huge paradigm shift.

Since I found out that apps like @simplex are written in Haskell I wouldn't touch anything else. 
i'm a #golang maxi myself, no other language has coroutines in the base syntax, and no other language is as readable or compiles as fast

you can program in a functional style in go, but it can make performance bad because you dump so much garbage memory when you do everything pass-by-value, return-by-value, with no mutability

functional doesn't really even say anything about concurrency, and when you mix concurrent and functional together it's a terrible mess of memory waste and hellish STW GC pauses

on the other hand, remembering to lock and unlock mutexes is a pain in the ass too

the thing that really makes #golang shine is that you can use almost every style of architecture and execution except object-oriented; instead it has interfaces and anonymous struct members, which allow for concise composition of composite data types and accessors to mutate them

go is much maligned because it demands a kind of discipline that most other languages don't teach you; for example, as i work through forking fiatjaf and mattn's codebases, they excessively use "generics" and interfaces, which they clearly don't understand, and write code that ends up being slower and more long-winded than a statically typed version

i can tell that fiatjaf, at least, has a javascript/python background because he seems to loathe dealing with types properly; almost none of his interface-using code has any type assertion safety, which is partly why i have to do the work to fork it. i have had code like this bomb out on edge cases many times before and i'm just not interested in delivering trash like that

in Go you can't just say "unpack this interface into this type" and pray that it's ok, because if the assertion fails it stops the whole runtime
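For anyone following along, this is the difference in question, in plain standard Go (nothing specific to any of the codebases mentioned):

```go
// Unchecked vs checked type assertions: the bare form panics (stopping the
// program) when the dynamic type doesn't match; the comma-ok form does not.
package main

import "fmt"

func main() {
	var x interface{} = "hello"

	// Checked ("comma ok") assertion: safe on a mismatch.
	if n, ok := x.(int); ok {
		fmt.Println("got an int:", n)
	} else {
		fmt.Println("not an int, carrying on")
	}

	// Unchecked assertion: this panics at runtime because x holds a string.
	n := x.(int)
	fmt.Println(n) // never reached
}
```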
 Haskell has a copying garbage collector for that reason. You can't use the wrong tool for the job and then complain it doesn't work well. 
 that doesn't change how much memory it takes to compile and the relative amount of memory it needs to do the same task

i personally believe that as a programmer if you don't have a grasp of the architecture of the hardware then you can't write good code for it

i don't like the architecture of the intel/amd type of processor; the motorola is a much more generous design, with more (and more logically designed) fast short-term memory storage elements

if your code doesn't respect the limitations of the hardware it will develop mysterious problems that seem perfect in the language but are not fit for the hardware

to me that is the centre of the Go philosophy: humans learn how to write the code for the machine, which is the polar opposite of the C++ philosophy, where the machine interprets the code, usually very imperfectly, and thus slowly, to finally get decent performance

functional is nice, but it would only work really well on a simplified architecture with very large register files, like the hardware actually is now but not how it actually uses them; it doesn't let you actively choose where your data is stored, and thus if you blow its internal memory budgets in one cache you are punished with a big memory copy or access and lower performance

i'm sure there are many ways to improve computer programming, but any that disrespect the hardware's limitations are punished by slow compilation and higher runtime memory utilization, which leads to less performance and more chances of bugs slipping through due to a long edit/test cycle

no single model fits the hardware, because the hardware is highly heterogeneous in its design in order to (probably) fit all these divergent philosophies of program organisation
If you are talking about fiatjaf/relayer, which I maintain, relayer changes its behaviour depending on whether your implementation provides a method or not. If you want that functionality, just add the method. This is not a feature provided by generics.
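That pattern looks roughly like this; a generic illustration of detecting optional methods via interface assertions, with made-up type names rather than relayer's actual API:

```go
// Optional-capability pattern: the framework checks at runtime whether the
// user's implementation also satisfies a smaller optional interface, and
// changes behaviour accordingly. No generics involved.
package main

import "fmt"

// Relay is the required interface (names are illustrative only).
type Relay interface {
	Name() string
}

// Counter is an optional extra capability.
type Counter interface {
	CountEvents() int
}

type basicRelay struct{}

func (basicRelay) Name() string { return "basic" }

type countingRelay struct{}

func (countingRelay) Name() string     { return "counting" }
func (countingRelay) CountEvents() int { return 42 }

func describe(r Relay) {
	if c, ok := r.(Counter); ok { // method present: use the extra behaviour
		fmt.Println(r.Name(), "supports counting:", c.CountEvents())
		return
	}
	fmt.Println(r.Name(), "has no counting support")
}

func main() {
	describe(basicRelay{})
	describe(countingRelay{})
}
```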
 no, i've been working with algia, no need for me to use several and i prefer the completeness of khatru and its relative modularity 
i would just like to tell you that, at least in the algia codebase, you don't break things into small enough pieces; some of the functions are extremely long-winded and repetitive, with several slightly different variations that do the exact same thing, and it's not easy to follow or change
I understand that, and I will have to refactor it as you said. I just don't have enough time for it at the moment.
 FYI, patches are welcome. 
of course, and i would, but i'm also retargeting it to a refactored version of go-nostr, so it isn't really possible; i have several parts of it fully working now, but others i don't understand so well or need yet, so... anyhow... i appreciate your manners, i hope you don't mind my simpleness
 Replaceable events are hard to deal with. But I don't know of an alternative that isn't worse. 
hm. a long-form event announcement link. that makes a lot of sense.