gotcha, thanks for the explanation. i'll keep an eye out for this release cause i'll be stoked to fix this somehow 😎
BTW, I don't think we are ever going to fix this completely until we get something like https://github.com/nostr-protocol/nips/pull/1434 with that number in it. Amethyst is already chunking stuff, but there are always new things we forget to chunk, or the chunking becomes too complex. It's a cat-and-mouse game.
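For context, the chunking is roughly along these lines (a minimal sketch, not the actual Amethyst code; the `Filter` class, the chunk size, and the kinds are illustrative):

```kotlin
// Minimal sketch of author chunking to stay under a relay's per-filter size limit.
// `Filter` is a simplified stand-in, not Amethyst's real filter class.
data class Filter(
    val kinds: List<Int>,
    val authors: List<String>,
    val limit: Int? = null,
)

fun chunkedHomeFilters(
    follows: List<String>,
    kinds: List<Int> = listOf(1, 6),  // illustrative kinds
    maxAuthorsPerFilter: Int = 1000,  // illustrative; the safe size depends on the relay
): List<Filter> =
    follows
        .chunked(maxAuthorsPerFilter)
        .map { chunk -> Filter(kinds = kinds, authors = chunk, limit = 500) }
```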
not sure if you considered this, but running COUNT queries over windows of time that are short enough should let you estimate good limit boundaries for those periods. it reminds me that i currently still just run a full query that decodes the events to answer a COUNT... when in fact the search first builds a set of indexes, searches them, and returns the list of derived keys for the actual events; the size of that list would be the answer. you could even do bisection searches: at first you only query half of your window on each side, then break it down from there to find the smallest window size that works, and then you can paginate based on that (ie, it will never exceed the limit once you have it, unless somehow a lot of backdated events get injected). pagination is always a two-step process even with more "sophisticated" database query systems anyway
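A sketch of that bisection idea, assuming the relay supports NIP-45 COUNT; `countEvents` is a hypothetical callback that issues a COUNT for the given window and returns the number:

```kotlin
// Shrink the window ending at `until` until its COUNT fits under `maxLimit`,
// halving the remaining span each time. The resulting window can be fetched in
// one page; the next page repeats the search with the returned start as `until`.
suspend fun findPageWindow(
    since: Long,   // oldest timestamp we care about (unix seconds)
    until: Long,   // newest timestamp (unix seconds)
    maxLimit: Int, // the relay's effective limit per query
    countEvents: suspend (since: Long, until: Long) -> Int, // hypothetical NIP-45 COUNT call
): Pair<Long, Long> {
    var windowStart = since
    while (until - windowStart > 1 && countEvents(windowStart, until) > maxLimit) {
        // Too many events in the window: keep only the newer half and re-count.
        windowStart += (until - windowStart) / 2
    }
    return windowStart to until
}
```

As noted above, the count is only an estimate: backdated events injected later can still push a previously-measured window over the limit.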
It might be that the only solution is to get a new Tor exit node and split the query into separate connections... I don't know... My issue right now is that because of the filter limit (12 or so per sub) + the subscription limit (10 or so per connection), we bundled everything the app does into 10 subs, and those subs have gotten so large that they are now breaking the REQ size limit...
I would gladly use more subs; at least that's a setting that can be changed in strfry.. I currently have it set to 80 per connection.
I guess the question to relay operators is: what's heavier to process?

1. Lots of filters in one sub
2. Lots of subs, each with one filter
3. Lots of connections, each with one sub and one filter

Because that's the dance clients do ALL THE TIME.
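For concreteness, the three shapes as raw REQ frames (a sketch with made-up sub ids and placeholder filters):

```kotlin
// Build a raw ["REQ", <sub id>, <filter>, ...] frame from pre-serialized filters.
fun reqFrame(subId: String, vararg filters: String): String =
    """["REQ","$subId",${filters.joinToString(",")}]"""

fun main() {
    val byAuthors = """{"kinds":[1],"authors":["<hex pubkeys...>"]}"""
    val byHashtag = """{"kinds":[1],"#t":["nostr"]}"""
    val byGeohash = """{"kinds":[1],"#g":["u0y"]}"""

    // 1. Lots of filters in one sub: a single frame on one connection.
    println(reqFrame("home", byAuthors, byHashtag, byGeohash))

    // 2. Lots of subs, one filter each: several frames on the same connection.
    listOf(byAuthors, byHashtag, byGeohash).forEachIndexed { i, f ->
        println(reqFrame("home-$i", f))
    }

    // 3. Lots of connections: same frames as (2), each sent over its own websocket.
}
```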
From a client perspective, we would choose 100s of subs with a few filters each in one connection.
one filter per sub... do you really need more than 80? khatru and relayer don't even have a limit for this (probably because Go can handle that shit anyway)
I have never broken things down into many subs, but if I wanted to maximize bandwidth use and use EOSEs to the full extent they can offer, I bet it would be close to 120 subs, simply because we are already at the limit with 12 filters * 10 subs.
Not that this is an average number, though. This is Amethyst, and we load everything all the time. So... not really your average Nostr app.
yeah, amethyst is special. it's my new way to discover more bugs in my code https://cdn.satellite.earth/7da12f5e84f9d7b7b33eb124fe460f3ca3290794b4d9514fdbe925922f5b54d8.png
Should extract a testing library? :)
how would the amethyst logic be different for 80 vs 20 subs? i guess you want to know if subs will scale and how far (as do I). the only way to know is to raise the settings and test. leave it at 20 for relays that don't advertise the NIP-11 setting?

- wine: 50
- nostr1: 80
- eden: 20
- nos: unavailable
- damus: unavailable
- primal: unavailable

..yeah, they're mostly unavailable.. 😭
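For what it's worth, reading that NIP-11 value with a fallback for relays that don't advertise it is pretty cheap. A minimal sketch (assuming `org.json` for parsing; no caching, timeouts, or retries):

```kotlin
import java.net.HttpURLConnection
import java.net.URL
import org.json.JSONObject

// Fetch the relay's NIP-11 document (HTTP GET with Accept: application/nostr+json)
// and read limitation.max_subscriptions, falling back when it's not advertised.
fun maxSubsFor(relayUrl: String, fallback: Int = 20): Int {
    val httpUrl = relayUrl
        .replaceFirst("wss://", "https://")
        .replaceFirst("ws://", "http://")
    return try {
        val conn = URL(httpUrl).openConnection() as HttpURLConnection
        conn.setRequestProperty("Accept", "application/nostr+json")
        val doc = JSONObject(conn.inputStream.bufferedReader().readText())
        doc.optJSONObject("limitation")?.optInt("max_subscriptions", fallback) ?: fallback
    } catch (e: Exception) {
        fallback // unreachable relay or no NIP-11 document: stick with the conservative default
    }
}
```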
Better code quality, basically. With each subscription managed separately, we could time them better (send the most important stuff first so the relay starts processing, then send the rest), and we could store the EOSE in the sub object itself instead of keeping one EOSE for all queries involving a user and then having to find the minimum EOSE when more than one user ends up in the same call, etc.
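A rough sketch of what per-sub EOSE bookkeeping could look like (names are illustrative, not Amethyst's actual classes):

```kotlin
// Each subscription tracks its own EOSE time per relay, so a re-subscribe can
// use `since = last EOSE` for that exact sub instead of the minimum EOSE across
// every user that was bundled into one big sub.
class Subscription(
    val id: String,
    val filters: List<String>, // pre-serialized filters, for simplicity
) {
    private val eoseByRelay = mutableMapOf<String, Long>()

    fun markEose(relayUrl: String, atSeconds: Long = System.currentTimeMillis() / 1000) {
        eoseByRelay[relayUrl] = atSeconds
    }

    // The `since` to use the next time this sub is re-sent to this relay, if any.
    fun sinceFor(relayUrl: String): Long? = eoseByRelay[relayUrl]
}
```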
i'm limited out again; it was working for a day or so with 925 follows.. now the error is back. i'll ask doug and try to find the performance (or other) implications of the two options: 1) increase strfry's hardcoded buffer limit, 2) increase subs per connection (already done, but nobody tries it except probably scrapers).
It would be nice to raise the buffer limit in the default strfry settings by 10x. I feel like lots of people are running into this issue with lots of different relays out there.
yep, i suspect it makes nostr feel artificially ded. still not sure why i'm hitting the ceiling, or what the sweet spot is that i'd need to decrease my follows to for it to work. does amethyst fire off REQ queries for ALL lists, even when they're not selected in the UI, by chance? i'll let you know what i find out about buffer vs subs..
It fires only for the feed selected up top. If it is "All Follows", that means all pubkeys + t tags + g tags + communities + chats you might have "joined", i.e. everything in your kind 3. Here is the code that assembles the Home feed REQ: https://github.com/vitorpamplona/amethyst/blob/main/amethyst/src/main/java/com/vitorpamplona/amethyst/service/NostrHomeDataSource.kt
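In very rough terms, a simplified sketch of what that assembly covers (the real logic is in NostrHomeDataSource.kt; the kinds, tag names, and types here are illustrative only):

```kotlin
// Simplified sketch: the "All Follows" home feed turns everything the kind 3
// references into one filter per source type.
data class HomeFeedSources(
    val followedPubkeys: List<String>,   // "p" entries from kind 3
    val followedHashtags: List<String>,  // "t" tags
    val followedGeohashes: List<String>, // "g" tags
    val communities: List<String>,       // addresses of joined communities
    val chats: List<String>,             // joined chat/channel ids
)

fun homeFeedFilters(src: HomeFeedSources): List<Map<String, Any>> =
    listOfNotNull(
        src.followedPubkeys.takeIf { it.isNotEmpty() }?.let { mapOf("kinds" to listOf(1, 6), "authors" to it) },
        src.followedHashtags.takeIf { it.isNotEmpty() }?.let { mapOf("kinds" to listOf(1), "#t" to it) },
        src.followedGeohashes.takeIf { it.isNotEmpty() }?.let { mapOf("kinds" to listOf(1), "#g" to it) },
        src.communities.takeIf { it.isNotEmpty() }?.let { mapOf("kinds" to listOf(1), "#a" to it) },
        src.chats.takeIf { it.isNotEmpty() }?.let { mapOf("kinds" to listOf(42), "#e" to it) },
    )
```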
i know, this would be nice and will happen eventually.. but no love for NIP-11? we barely use it 😂 i'm looking at some now; there is a max_filters field and apparently i set mine to 256. it's informational only, and i know the relay actually handles ~1000, so i'll adjust 😎 but why does no one use this?
That's my issue with NIP-11... It's all informational, so most relays have it wrong, and we can't rely on it to make decisions. We need to turn these limits into core protocol features to get them widely used, and to push relay operators (by sending them fewer queries) when their advertised limits are out of sync.
so if relays change one little tag and any client actually uses it.. that's how these things get adopted, right? it's already in the NIP-11 spec.. the thing is, even once i add a dynamic-limits NIP, it will still be the same informational values, just transmitted differently.
Yeah, if relays just copy-paste the same values, then it's the same thing. But my hope was to make clients always comply with the advertised limits for their own benefit. Then, if a relay's limits are wrong, the client will use that relay less, things will disappear for users, a lot of people will get pissed off, and hopefully the relay's limits get fixed.