bah... changing the event store query function to return the events serially messes with the model of how two-tier (layer 2) event stores work

for now, ok, going to change it to assume that all events in the L2 are known about, with only their index and event ID stored in the L1... i think that's ok, but it is extra overhead to consider in the garbage collector... it kinda implies that the low/high water marks need to sit lower to leave space for the size of the L2 index stubs
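
roughly the shape i mean for the stub (field names are made up, this is just the idea):

```go
package store

// what the L1 would keep per pruned event: the full 32-byte event
// ID as the L2 lookup key, plus truncated tag values so filters
// can still match against the index
type EventStub struct {
	Serial  uint64    // position in the local index
	ID      [32]byte  // full event ID, what the L2 is asked for
	TagKeys [][8]byte // tag values truncated to 8 of their 32 bytes
}

// derive the 8-byte index key from a 32-byte tag value
func tagKey(val [32]byte) (k [8]byte) {
	copy(k[:], val[:8])
	return
}
```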

but i guess it's not that huge, i mean... ok, tags can take up a lot of space... but it's small compared to the full event, and the tag values are truncated to 8 bytes out of 32
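
back of the envelope (every constant here is an assumption, not a measured number):

```go
package store

// total bytes pinned in L1 by stubs for pruned events
func stubFootprint(pruned, avgTagsPerEvent int64) int64 {
	const serialBytes, idBytes, tagKeyBytes = 8, 32, 8
	return pruned * (serialBytes + idBytes + avgTagsPerEvent*tagKeyBytes)
}

// the GC marks just drop by whatever the pinned stubs occupy
func adjustedMarks(hiWater, loWater, footprint int64) (hi, lo int64) {
	return hiWater - footprint, loWater - footprint
}
```

so e.g. a million pruned events at ~10 tags each is about 120 MB of pinned stubs... real, but manageable next to the full events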

ah, engineering complex things is complex 
yeah, simple... just have to make the assumption that the L1 has the index and stub of every event it might find in the L2... probably have to think later about how to deal with one-hit wonders causing lag in client requests, but it's a one-off thing... if a pruned event is fetched, the user just has to wait a little longer for the L2 to retrieve it
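
the slow path would look roughly like this, with hypothetical store interfaces:

```go
package store

import "context"

type Event struct {
	ID  [32]byte
	Raw []byte
}

// minimal stand-in for an event store: get-by-ID plus save
type Store interface {
	GetByID(ctx context.Context, id [32]byte) (*Event, error)
	Save(ctx context.Context, ev *Event) error
}

// an L1 miss on a pruned event falls through to the L2, and the
// revived event is written back so the next request hits it locally
func fetchEvent(ctx context.Context, l1, l2 Store, id [32]byte) (*Event, error) {
	if ev, err := l1.GetByID(ctx, id); err == nil {
		return ev, nil // hot path: still resident in L1
	}
	ev, err := l2.GetByID(ctx, id) // the extra wait happens here
	if err != nil {
		return nil, err
	}
	_ = l1.Save(ctx, ev) // best-effort re-warm of the L1
	return ev, nil
}
```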

on the other hand, this also means i could make an IPFS or Blossom cluster work as an event store too, since with this change i'm only ever searching the L2 for specific event IDs
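
a Blossom-style L2 would then be as dumb as an HTTP GET by hash... sketch below; the /&lt;sha256&gt; endpoint shape is from the BUD-01 spec as i understand it, and i'm handwaving the question of blob hash vs event ID here:

```go
package store

import (
	"context"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
)

// content-addressed L2 over a Blossom-style blob server
type BlossomL2 struct {
	Base string // e.g. "https://blossom.example.com" (made up)
}

func (b *BlossomL2) GetByID(ctx context.Context, id [32]byte) ([]byte, error) {
	url := b.Base + "/" + hex.EncodeToString(id[:])
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("blossom fetch: %s", resp.Status)
	}
	return io.ReadAll(resp.Body) // raw event, still needs verifying
}
```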

gonna need to think on it some more...

it could spawn a background query to the L2 for the same filter anyhow, and then add the events to the local store so the next query hits them, even if the indexes fell out of the headroom space the GC allows for pruned events
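
sketch of that, reusing the hypothetical Event type from above:

```go
package store

import "context"

// stand-in for a nostr filter, just enough to make the point
type Filter struct {
	Kinds   []int
	Authors [][32]byte
	Limit   int
}

type QueryStore interface {
	Query(ctx context.Context, f Filter) ([]*Event, error)
	Save(ctx context.Context, ev *Event) error
}

// answer from the L1 immediately, then backfill from the L2 in
// the background so a repeat of the same filter hits locally
func queryWithBackfill(ctx context.Context, l1, l2 QueryStore, f Filter) ([]*Event, error) {
	evs, err := l1.Query(ctx, f)
	if err != nil {
		return nil, err
	}
	go func() {
		bg := context.Background() // backfill outlives the request
		more, err := l2.Query(bg, f)
		if err != nil {
			return
		}
		for _, ev := range more {
			_ = l1.Save(bg, ev) // best-effort re-warm
		}
	}()
	return evs, nil
}
```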

i think that is better... still not perfect, but L2 queries are always going to cost some extra delay anyway, and possibly the user will refresh and by then the event will be freshly replaced in the L1
ah yes, and i forgot... if events are found while the query still has a standing subscription (i.e. the limit was not reached and the query was not CLOSEd), the client will receive the later-found events from the L2's background sync query... this will happen very often, and it means the layer 1 relay-cache still delivers the data of the big shared event store on essentially the same timeline as if the query results were always returned hot, except the revived events won't necessarily arrive in chronological order relative to the ones the L1 already found...
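
the delivery side is then just the normal subscription channel (again hypothetical shapes):

```go
package store

// a standing subscription is just a channel keyed by a sub id;
// hot L1 results and revived L2 results go down the same channel,
// with no ordering guarantee between them
type Subscription struct {
	ID     string
	Filter Filter
	Events chan *Event   // what the client reads, hot or revived
	Done   chan struct{} // closed on CLOSE or when the limit is hit
}

// push delivers an event unless the subscription has ended
func (s *Subscription) push(ev *Event) bool {
	select {
	case s.Events <- ev:
		return true
	case <-s.Done:
		return false // client already CLOSEd; drop it
	}
}
```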

it works, anyway. the subscription model makes it workable; ultimately the concurrent channel-return model makes assumptions about ordering that don't need to hold for the pub/sub model... the clients don't really understand any of this, they just get pushed events tied to a filter with a subscription id, they don't see the work or care about anything else... it is a channel model for the client anyway