ah, this is hilarious.. kinda. why would someone make a REQ like this over and over? I suppose it returns all events from all authors since a certain time and then keeps requesting newer.. but why the authors part? Is this some kind of attempted DoS on strfry, to make it do extra work vs. just making a query with since alone.. or is there some legit reason you'd do this, like relays not allowing bare since queries? (besides keeping nostr weird)
```
[Ingester 1 ]INFO| [19] dumpInReq: ["REQ","g7pr6p60ypdmzpd3fzbjrwc4i61650",{"since":1721690332,"authors":["00","01","02","03","04","05","06","07","08","09","0a","0b","0c","0d","0e","0f","10","11","12","13","14","15","16","17","18","19","1a","1b","1c","1d","1e","1f","20","21","22","23","24","25","26","27","28","29","2a","2b","2c","2d","2e","2f","30","31","32","33","34","35","36","37","38","39","3a","3b","3c","3d","3e","3f","40","41","42","43","44","45","46","47","48","49","4a","4b","4c","4d","4e","4f","50","51","52","53","54","55","56","57","58","59","5a","5b","5c","5d","5e","5f","60","61","62","63","64","65","66","67","68","69","6a","6b","6c","6d","6e","6f","70","71","72","73","74","75","76","77","78","79","7a","7b","7c","7d","7e","7f","80","81","82","83","84","85","86","87","88","89","8a","8b","8c","8d","8e","8f","90","91","92","93","94","95","96","97","98","99","9a","9b","9c","9d","9e","9f","a0","a1","a2","a3","a4","a5","a6","a7","a8","a9","aa","ab","ac","ad","ae","af","b0","b1","b2","b3","b4","b5","b6","b7","b8","b9","ba","bb","bc","bd","be","bf","c0","c1","c2","c3","c4","c5","c6","c7","c8","c9","ca","cb","cc","cd","ce","cf","d0","d1","d2","d3","d4","d5","d6","d7","d8","d9","da","db","dc","dd","de","df","e0","e1","e2","e3","e4","e5","e6","e7","e8","e9","ea","eb","ec","ed","ee","ef","f0","f1","f2","f3","f4","f5","f6","f7","f8","f9","fa","fb","fc","fd","fe","ff"]}]
Jul 22 19:20:53 strfrybig strfry[3085492]: 2024-07-22 19:20:53.730 ( 198.765s) [ReqWorker 1 ]INFO| [19] REQ='g7pr6p60ypdmzpd3fzbjrwc4i61650' scan=Pubkey indexOnly=1 time=105us saveRestores=0 recsFound=1 work=259
```
🤔
Weird way of requesting all events given not all relays support prefixes.
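(Aside: the authors list in that REQ appears to be every two-hex-character prefix from "00" through "ff", 256 entries, which under the old NIP-01 prefix matching would match any pubkey, so the filter is effectively "all events since the timestamp". A quick illustrative sketch of what the crawler is probably generating; the sub id is made up:)

```python
import json

# Every 2-hex-char prefix from "00" to "ff". Under (now-removed)
# NIP-01 prefix matching, this matches every possible pubkey, so the
# filter degenerates to "all events since <timestamp>".
prefixes = [f"{i:02x}" for i in range(256)]

req = ["REQ", "example-sub-id", {"since": 1721690332, "authors": prefixes}]
print(len(prefixes), prefixes[0], prefixes[-1])  # 256 00 ff
```

On relays that dropped prefix support, each entry is treated as a full (and nonexistent) pubkey, so the query matches little or nothing, which lines up with recsFound=1 in the log above.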
yeah i mean, i don't think it would get all events anyway, so it does just seem like a bad actor.
It's been out there since forever, someone built a very reliable but dumb crawler
i had a feeling, ya. at least it seems like it's doing something, vs. just blatantly trying to cripple the relay like some of the others (repeated REQs to keep the cache blown out)
You are offering relay hosting, so I guess you might be well informed overall ...
Is there a relay already where I can provide metered read access? My dream is a relay where I open the websocket with some e-cash attached, and that socket then serves queries until the funds run out, no questions asked. This should be the most DoS-secure way of doing things.
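(The "e-cash attached to the socket" idea could look something like this: each connection starts with a balance, every query draws it down, and the socket closes when funds hit zero. This is purely a hypothetical sketch, the class and the per-REQ pricing are made up, not any existing relay's API:)

```python
# Hypothetical: a per-connection budget that is debited per query and
# closes the socket, no questions asked, once funds run out.

class MeteredSocket:
    def __init__(self, ecash_msats: int, cost_per_req_msats: int = 10):
        self.balance = ecash_msats
        self.cost = cost_per_req_msats
        self.open = True

    def handle_req(self, req) -> bool:
        """Serve a query if funds remain; otherwise close the socket."""
        if not self.open or self.balance < self.cost:
            self.open = False
            return False
        self.balance -= self.cost
        return True

sock = MeteredSocket(ecash_msats=35, cost_per_req_msats=10)
served = sum(sock.handle_req(None) for _ in range(5))
print(served, sock.open)  # 3 False  (3 queries served, then closed)
```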
With that existing, I would be curious to get involved in providing a big relay.
Curious, what would you meter? Would you base it on events? Bandwidth?
probably # of reqs/subscriptions
Anything that costs real resources. And make it dynamic. So:
* RAM seconds
* CPU seconds
* bandwidth
* HDD MB hours
* ...
With surge pricing during DoS attacks.
The idea is to make the relay pay for itself so the operator can scale it up without fear of increased cost.
My idea of an ideal client-server interaction is that the server commits to a certain price for the next hour and the client sends ecash for the expected use, maybe 5min at a time. And the client should know the user's preferences, so if the price surges, the user gets asked to confirm this or to delay the use of this relay accordingly, with the client handling delays gracefully.
for strfry specifically, it can "log" the performance metrics, but that doesn't easily translate into measurements from external systems unless you were to ingest the logs and turn them into metrics. That said, you could ingest process metrics, disk metrics, etc., and use those. Right now the focus is to use the data currently accessible via the interceptor-proxy, such as #REQs/second, #subscriptions, and #events; use that to surface how much a particular login/pubkey is 'using' vs. other users; and display that to the relay operator so they can make a decision about allowing or blocking. Once this is done, then I can think about cost tiers etc.
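(Tallying those proxy-visible signals per pubkey could look roughly like this; a hypothetical sketch only, the message routing and field names are invented, not the interceptor-proxy's actual interface:)

```python
# Hypothetical: count REQs, open subscriptions, and posted events per
# authed pubkey so the operator can compare relative load.
from collections import defaultdict

usage = defaultdict(lambda: {"reqs": 0, "subs": set(), "events": 0})

def on_message(pubkey: str, msg: list):
    """Feed each client->relay frame through here at the proxy."""
    if msg[0] == "REQ":
        usage[pubkey]["reqs"] += 1
        usage[pubkey]["subs"].add(msg[1])
    elif msg[0] == "EVENT":
        usage[pubkey]["events"] += 1
    elif msg[0] == "CLOSE":
        usage[pubkey]["subs"].discard(msg[1])

on_message("npub_a", ["REQ", "s1", {"kinds": [1]}])
on_message("npub_a", ["REQ", "s2", {"kinds": [0]}])
on_message("npub_a", ["CLOSE", "s1"])
on_message("npub_b", ["EVENT", {"kind": 1}])
print(usage["npub_a"]["reqs"], len(usage["npub_a"]["subs"]))  # 2 1
```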
an interesting idea, yeah.. i will keep this in mind as I'm filling out the NIP-42 auth stuff.. still very early on that, but I've got auth working with strfry now, and one-time-pay to read, so in theory it could be done.