 It might be that the only solution is to get a new Tor exit node and split the query into separate connections... 

I don't know... My issue right now is that because of the filter limits (12 or so per sub) + subscription limits (10 or so per connection), we bundled everything the app does into 10 subs, and those subs are getting so large that they're now breaking the REQ size limit...
I would gladly use more subs; at least that's a setting in strfry that can be changed. I currently have it set to 80 per connection.
 I guess the question to relay operators is: what's heavier to process? 

1. Lots of filters in one sub.
2. Lots of subs, each with one filter.
3. Lots of connections, each with one sub and one filter.

Because that's the dance clients do ALL THE TIME. 
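To make those three shapes concrete, here is roughly what each looks like as NIP-01 REQ frames (sub ids and filters below are made up for illustration):

```
// 1. Lots of filters in one sub: one REQ, the filters are OR'd,
//    and the whole query shares a single EOSE
["REQ", "everything", {"kinds":[1],"authors":["<pubkeys>"]}, {"kinds":[7],"#e":["<ids>"]}, {"kinds":[3],"authors":["<pubkeys>"]}]

// 2. Lots of subs, one filter each: same websocket, but each sub
//    gets its own id and its own EOSE
["REQ", "feed",      {"kinds":[1],"authors":["<pubkeys>"]}]
["REQ", "reactions", {"kinds":[7],"#e":["<ids>"]}]
["REQ", "contacts",  {"kinds":[3],"authors":["<pubkeys>"]}]

// 3. One sub, one filter per connection: the frames from (2), but
//    each sent over its own separate websocket
```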
 From a client perspective, we would choose 100s of subs with a few filters each in one connection. 
 one filter per sub... do you really need more than 80? khatru and relayer don't even have a limit for this (probably because Go can handle that shit anyway, supposedly) 
I have never broken it down into many subs, but if I wanted to maximize bandwidth use and use EOSEs to the full extent they can offer, I bet it would be close to 120 subs, simply because we are already at the limit with 12 filters * 10 subs.
 Not that this is an average number. This is for Amethyst and we load everything all the time. 

So... Not really your average Nostr app. 
 yeah, amethyst is special

it's my new way to discover more bugs in my code

https://cdn.satellite.earth/7da12f5e84f9d7b7b33eb124fe460f3ca3290794b4d9514fdbe925922f5b54d8.png 
Should you extract a testing library? :)
nah, it's enough to just add it to my relay list and open it up, and then when everything explodes, hit the task manager and kill it
how would the amethyst logic be different for 80 vs 20 subs? i guess you want to know whether subs will scale and how far (as do I). the only way to know is to raise the settings and test. leave it at 20 for the relays that don't advertise the NIP-11 setting?

wine: 50
nostr1: 80
eden: 20
nos: unavail
damus: unavail
primal: unavail

.. ya, they're mostly unavailable.. 😭
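For reference, the NIP-11 info document is how relays advertise these limits: fetch the relay's URL over HTTP with an `Accept: application/nostr+json` header and look under `limitation`. A made-up example response (the field names are from NIP-11, the values are illustrative):

```
{
  "name": "example-relay",
  "limitation": {
    "max_subscriptions": 20,
    "max_filters": 100,
    "max_message_length": 131072
  }
}
```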
Better code quality, basically. With each subscription managed separately, we could time them better (send the most important stuff first, let the relay start processing, then send the rest), and we could store the EOSE in the sub object itself instead of keeping one EOSE for all queries involving a user and then having to find the minimum EOSE when more than one user is bundled into the same call, etc.
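A minimal Kotlin sketch of that bookkeeping difference (hypothetical types, not Amethyst's actual classes): with one query per sub, the EOSE lives on the sub itself; with merged subs, EOSE has to be tracked per user and the minimum taken before reusing it as a `since`:

```kotlin
// Hypothetical sketch, not Amethyst's real code.

// One query per sub: the sub remembers its own EOSE, and the next
// REQ for the same query can use it directly as `since`.
class Subscription(val id: String) {
    var eoseAt: Long? = null // set when the relay sends ["EOSE", id]
}

// Merged sub: many users share one sub, so EOSE must be tracked per
// user and the minimum taken when they are re-bundled together,
// otherwise the user with the oldest EOSE could miss events.
class MergedSubscription(val id: String) {
    private val eoseByUser = mutableMapOf<String, Long>()

    fun markEose(user: String, timestamp: Long) {
        eoseByUser[user] = timestamp
    }

    // `since` for a new REQ covering these users: only valid if every
    // user has seen an EOSE, and then it must be the minimum of them.
    fun sinceFor(users: List<String>): Long? =
        users.mapNotNull { eoseByUser[it] }
            .takeIf { it.size == users.size }
            ?.minOrNull()
}
```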
i'm limited out again; it was working for a day or so with 925 follows.. now the error is back.

i'll ask Doug and try to find out the performance (or other) implications of the two options: 1) increase strfry's hardcoded buffer limit, 2) increase subs (already done, but nobody tries it except probably scrapers).
It would be nice to raise the buffer limit in the default strfry config by 10x. I feel like lots of people are running into this issue with lots of different relays out there.
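For anyone wanting to experiment, both knobs appear in strfry.conf under the `relay` block (option names as in recent strfry releases; check your version's defaults, and note that the "hardcoded buffer limit" mentioned above may be a separate compile-time constant):

```
relay {
    # max size of accepted incoming websocket frames; this is the
    # limit the oversized REQs are hitting
    maxWebsocketPayloadSize = 131072

    # max number of concurrent REQ subscriptions per connection
    maxSubsPerConnection = 20
}
```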
yep, i suspect it makes nostr feel artificially dead. still unsure why i'm hitting the ceiling, and where the sweet spot is that i need to decrease my follows to for it to work.

does amethyst fire off REQ queries for ALL lists, even when they're not selected in the UI, by chance?

i'll let you know what i find out about the buffer vs subs..
It will fire for the feed selected up top. If it is "All Follows", that means all the pubkeys + t tags + g tags + communities + chats you might have "joined" that are in your kind 3.

Here is the code that assembles the Home feed REQ: https://github.com/vitorpamplona/amethyst/blob/main/amethyst/src/main/java/com/vitorpamplona/amethyst/service/NostrHomeDataSource.kt
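For a rough idea of the shape that code produces (an illustrative REQ, not copied from the file, with the kinds simplified): since conditions inside a single NIP-01 filter are ANDed, the pubkeys, t tags, g tags, and communities from the kind 3 each need their own filter in the REQ:

```
["REQ", "HomeFeed",
  {"kinds": [1], "authors": ["<followed pubkeys>"]},
  {"kinds": [1], "#t": ["<followed hashtags>"]},
  {"kinds": [1], "#g": ["<followed geohashes>"]},
  {"kinds": [1], "#a": ["<joined community addresses>"]}
]
```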