 Yes, but you have an event id that serves as the handle, rather than an `a` tag. Replies would tag the update event, which tags the create event. Since the updates don't replace the create, the create is still accessible, so you can pull all events, or just events for a given revision. No data is being deleted, so clients don't have to guess. Right now, replies that only tag the `e` tag of a given revision get lost when the post is updated. 
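The create/update chain described above can be sketched in a few lines. This is a minimal, hypothetical model, not any existing NIP: the kind numbers (`1` for creates, `1010` for updates) and the use of a plain `e` tag to point back at the create event are assumptions for illustration.

```python
# Hypothetical sketch of the create/update model: updates tag the
# create event instead of replacing it, so nothing is ever deleted.
CREATE_KIND = 1      # assumed kind for the original post
UPDATE_KIND = 1010   # assumed kind for revisions

def revisions(events, create_id):
    """Return the create event plus every update that tags it,
    oldest first. The full history stays queryable."""
    chain = [e for e in events
             if e["id"] == create_id
             or (e["kind"] == UPDATE_KIND
                 and ["e", create_id] in e["tags"])]
    return sorted(chain, key=lambda e: e["created_at"])

def latest(events, create_id):
    """The newest revision -- what a client would render by default."""
    return revisions(events, create_id)[-1]
```

Because the create event's id is the stable handle, a client can pull the whole chain or any single revision without guessing what was replaced.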
 NIP-28 used this approach, which I believe I had a big part in developing there at the time. Now to me it looks ugly, dirty, disgusting, a very bad idea. I still don't get why referencing an initial event is better than using the "d" tag. Both are arbitrary strings ultimately.

About not losing history, again, that's the same point from before: it has costs.

Also, if these multiple versions were treated the same way normal events are today, it would break the relay query language: if you wanted to fetch the statuses of several people, for example, you would end up getting multiple old statuses for the same person and none from others who hadn't updated in a while -- and so on. 
 And then again it's not very clear what we're getting from this.

For example, in the "update" event approach the same contact-list problem remains: one client can overwrite an update event from another client and people lose part of their contact lists.

In the case of "delta" events you must ensure that you have the full history, which means you must know the exact relays where a person is publishing their deltas -- but if you are diligent enough to know that, and you have successfully written the more complex software able to handle it, then why can't you do the same for replaceable events today and fetch the damn last-updated contact list from a relay you know will always have the latest version before replacing it?
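The read-before-write discipline suggested here can be sketched generically. `fetch_latest` and `publish` are hypothetical stand-ins for whatever relay I/O a client uses; the point is only the ordering: fetch the newest version from a relay known to keep it, modify it, then publish.

```python
def safe_replace(fetch_latest, publish, pubkey, kind, mutate):
    """Read-modify-write for replaceable events: fetch the current
    version first so an update from another client isn't silently
    overwritten, then apply the change and publish the result."""
    current = fetch_latest(pubkey, kind) or {"pubkey": pubkey,
                                             "kind": kind,
                                             "tags": [],
                                             "created_at": 0}
    updated = mutate(current)   # e.g. append a contact, never reset
    publish(updated)
    return updated
```

The same diligence the delta approach demands (knowing the right relays) is all this needs, with none of the history-reassembly complexity.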

I think your suggestion of having replaceable events + delta events (I don't remember the details) could have been a better approach actually, as it would preserve the best aspects of all worlds, but I'm not sure about the implementation complexity of it. 
 Your point about queries getting duplicates which crowd out some desired results is a good one. You could technically send one filter per pubkey with limit 1.
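The per-pubkey workaround is easy to show concretely. NIP-01's `REQ` message accepts multiple filters, so a client can send one `{authors: [pk], limit: 1}` filter per pubkey; each author then contributes exactly their newest matching event instead of old revisions crowding out quieter authors. The subscription id and kind below are arbitrary.

```python
import json

def latest_per_author_req(pubkeys, kind, sub_id="latest"):
    """Build a NIP-01 REQ with one limit-1 filter per pubkey, so the
    relay returns the single newest event of `kind` for each author."""
    filters = [{"kinds": [kind], "authors": [pk], "limit": 1}
               for pk in pubkeys]
    return json.dumps(["REQ", sub_id, *filters])
```

The cost, as noted, is one filter per followed pubkey, which gets heavy for large contact lists.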

Lists should not have create/update like blog posts, they should instead have set/add/remove, which combines diffs and replaces in a conflict-free way.
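A minimal sketch of the set/add/remove idea, assuming each operation carries a timestamp: `set` replaces the whole list, while `add`/`remove` are per-item deltas. Folding the operations in timestamp order means two clients adding different contacts concurrently both win, instead of one overwriting the other's whole list.

```python
def project_list(ops):
    """Fold set/add/remove operations into the current list.
    Concurrent adds from different clients merge conflict-free."""
    items = set()
    for op in sorted(ops, key=lambda o: o["created_at"]):
        if op["op"] == "set":
            items = set(op["items"])        # full replace
        elif op["op"] == "add":
            items.add(op["item"])           # per-item delta
        elif op["op"] == "remove":
            items.discard(op["item"])       # no-op if absent
    return sorted(items)
```

The operation names and event shape here are illustrative, not from any existing NIP.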

From the perspective of event sourcing, projections should be a different layer from events. We have all this weird awkwardness because we have only one layer. I'm planning to work on some basic layer 2s via DVMs in the next month or so. 
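The two-layer split can be sketched as a fold over an immutable event log: the log is never mutated, and a projection (here, "newest event per author and handle") is just a derived view that can be rebuilt at any time. The `handle` field is a hypothetical stand-in for whatever keys a revision chain.

```python
def project_latest(events):
    """One projection over an append-only event log: the newest
    event per (pubkey, handle). The log stays intact; only this
    derived view changes as events arrive."""
    view = {}
    for ev in sorted(events, key=lambda e: e["created_at"]):
        view[(ev["pubkey"], ev["handle"])] = ev
    return view
```

Different consumers can maintain different projections over the same log, which is the awkwardness-removing property of the second layer.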

nostr:nevent1qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qgmwaehxw309aex2mrp0yh8wetnw3jhymnzw33jucm0d5hsz9thwden5te0wfjkccte9ejxzmt4wvhxjme0qy28wumn8ghj7un9d3shjctzd3jjummjvuhszymhwden5te0wp6hyurvv4cxzeewv4ej7qguwaehxw309a3ksunfwd68q6tvdshxummnw3erztnrdakj7qgnwaehxw309ahkvenrdpskjm3wwp6kytcpr9mhxue69uhhyetvv9ujuumwdae8gtnnda3kjctv9uqzqv339nqqsvqqg3ytqnwevaqx2gx3yyej9hr2sjaus82xd089uy3d8ppn9a 
 mutable events don’t mix well with event sourcing models. I agree with you, projections or materialized views are another layer. I wonder if the caching relay approach primal is using is effectively this second layer. 
 It's one way to do it, but sort of centralized because it assumes you're running a cache for the whole network. I can't imagine it would be easy to compose multiple with coverage over different parts of the network. A better interface that allows you to ad hoc query particular relays, or compose the results from multiple caches is what's needed. 
 I don’t think it requires the cache being global, although that’s the approach Primal went with. I’m imagining a second process running near your relay that’s responsible for projecting the current state, e.g. the latest revision of a note. This second layer could expose the same API as a standard nostr relay to make clients just work. You could also suck this second-layer logic into your relay implementation depending on your scale/reliability/ops requirements. 
 In a large-scale distributed-system analogy, new events would hit a queue. One consumer group would write the raw events to storage; another consumer group would update the projected state in the second layer when applicable. 
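A toy, single-process version of that design, with the queue collapsed into a plain loop: each incoming event is delivered to two "consumer groups", one appending to immutable raw storage and the other maintaining the projected current state per author. Everything here is illustrative structure, not a real queue integration.

```python
def run_pipeline(events):
    """Deliver each event to two consumers: an append-only raw log
    and a latest-per-author projection, mirroring the two consumer
    groups reading from one queue."""
    raw_store = []    # consumer group 1: immutable event log
    projection = {}   # consumer group 2: current state per pubkey

    for ev in events:                 # stand-in for the queue
        raw_store.append(ev)          # always archive the raw event
        prev = projection.get(ev["pubkey"])
        if prev is None or ev["created_at"] > prev["created_at"]:
            projection[ev["pubkey"]] = ev   # update only when newer
    return raw_store, projection
```

In a real deployment the two consumers would be independent processes with their own offsets, so the projection can lag or be rebuilt from the raw log without losing anything.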
 Yeah, I think I agree, the query interface just needs to be carefully designed so that multiple caches can be queried simultaneously and reconciled. See the discussion around COUNT.
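One possible shape for that reconciliation, assuming each cache returns plain event lists: deduplicate by event id across caches, then keep the newest event per author, so several caches with partial network coverage compose into one answer instead of requiring a single global cache.

```python
def reconcile(*result_sets):
    """Merge query results from several caches: dedupe by event id,
    then keep only the newest event per author."""
    by_id = {}
    for results in result_sets:
        for ev in results:
            by_id[ev["id"]] = ev          # identical ids collapse
    newest = {}
    for ev in by_id.values():
        cur = newest.get(ev["pubkey"])
        if cur is None or ev["created_at"] > cur["created_at"]:
            newest[ev["pubkey"]] = ev
    return list(newest.values())
```

Counting is the harder case, as the COUNT discussion suggests: exact counts can't be merged from overlapping caches without deduplication, which is why that interface needs careful design.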