It's over, billions must simdjson...
billions? hah, that presumes adoption already 😀 to get there, there are probably more worthwhile things for both relay and client authors to focus on than duplicating protocol work
Can nginx and browsers automagically gzip the json?
yea, the websockets protocol has built-in support for transparent per-message compression, using deflate (the same algorithm gzip uses)
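for the curious: permessage-deflate (RFC 7692) is raw DEFLATE, and the trailing `00 00 ff ff` bytes that end each flushed block are stripped before the payload goes on the wire (the receiver appends them back before inflating). a minimal sketch of that round trip with Python's stdlib zlib, using a made-up event payload:

```python
import json
import zlib

# hypothetical Nostr-style event message, just for illustration
event = json.dumps(["EVENT", {"kind": 1, "content": "hello nostr", "tags": []}])

# permessage-deflate uses raw DEFLATE: negative wbits = no zlib header
comp = zlib.compressobj(wbits=-zlib.MAX_WBITS)
frame = comp.compress(event.encode()) + comp.flush(zlib.Z_SYNC_FLUSH)

# RFC 7692: strip the trailing 00 00 ff ff of the flushed block
wire = frame[:-4]

# the receiver re-appends those four bytes before inflating
decomp = zlib.decompressobj(wbits=-zlib.MAX_WBITS)
restored = decomp.decompress(wire + b"\x00\x00\xff\xff")
assert restored.decode() == event
```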
And one message could be an array containing the result of some query, i.e. all posts by people I follow in the last day. That's probably not much slower than going from json to binary and back. But haven't benchmarked. Would be good if there was a relay health tool that checks if this is configured.
i'm not sure - AFAIK, the protocol sends only one event per message. deflate is very good at catching repeated text within a message, so compressed JSON will get pretty close to a naively implemented binary protocol, and for large data it's also very fast. with lots of small messages it will be less effective, because the same text repeated over and over won't be compressed - this would ideally need some kind of shared dictionary, like HTTP2's header compression
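the batching-vs-small-messages effect is easy to demonstrate. a rough sketch with Python's zlib, using synthetic events (the field names mimic Nostr events, but the data is made up): compressing each message separately re-encodes the repeated JSON keys every time, while one batched message lets deflate exploit the repetition across events.

```python
import json
import zlib

# 100 synthetic events sharing structure but differing in content
events = [
    json.dumps({
        "kind": 1,
        "pubkey": f"{i:064x}",
        "created_at": 1700000000 + i,
        "content": f"note number {i}",
        "tags": [],
    })
    for i in range(100)
]

# one compressor per message: no context shared between messages
individual = sum(len(zlib.compress(e.encode())) for e in events)

# one big batch: deflate sees the repeated keys across all events
batched = len(zlib.compress("\n".join(events).encode()))

print(f"individual: {individual} bytes, batched: {batched} bytes")
```

on typical runs the batched form is several times smaller, which is the intuition behind the result-batches idea.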
This would be a good NIP: result batches to allow a better compression ratio
that made sense to me, but apparently, websocket can already do this on the transport level: nostr:nevent1qvzqqqqqqypzqlxr9zsgmke2lhuln0nhhml5eq6gnluhjuscyltz3f2z7v4zglqwqqsqz35fqcqu7qmplf6z7wmyxs7fv84rezvj24kevz5h54ntdhwsj6cyz2c99 so now it's less clear what the win would be
Considering the messages are coming from different sources, I doubt there's much common text between them. It should be easy to do a quick analysis to figure this out.
yes, that would be something to figure out first. HTTP2 benefited from decades of HTTP use and a virtually ossified protocol, making it possible to design a hyper-optimized protocol for specifically that use-case. just throwing something together with protobuf is not quite that, and has the risk that another binary-optimized protocol will be needed later - and of course people will want support for all three... 😀 (HTTP3 didn't change much compared to HTTP2; the main change is switching the transport to QUIC, and pretty much all other changes are related to that, but still-)
strfry already does stream / rolling window compression for clients that support it.
TIL this is a thing at all - it's even defined in the same RFC (https://datatracker.ietf.org/doc/html/rfc7692) as per-message websocket compression; it's a matter of reusing the LZ77 context between messages
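the effect of that context reuse ("context takeover" in RFC 7692 terms) can be sketched with Python's zlib: keeping one compressor across messages lets each new message back-reference text from earlier ones, whereas a fresh compressor per message (the `no_context_takeover` mode) starts cold every time. synthetic events again, data made up for illustration:

```python
import json
import zlib

events = [
    json.dumps({"kind": 1, "content": f"short note {i}", "tags": []}).encode()
    for i in range(200)
]

# fresh LZ77 context per message (like permessage-deflate no_context_takeover)
no_ctx = 0
for e in events:
    c = zlib.compressobj(wbits=-zlib.MAX_WBITS)
    no_ctx += len(c.compress(e) + c.flush(zlib.Z_SYNC_FLUSH))

# one shared context across all messages (what rolling-window compression buys)
c = zlib.compressobj(wbits=-zlib.MAX_WBITS)
shared = sum(len(c.compress(e) + c.flush(zlib.Z_SYNC_FLUSH)) for e in events)

print(f"fresh context: {no_ctx} bytes, shared context: {shared} bytes")
```

with the shared window, every message after the first compresses down to little more than its unique content, which is why small per-event messages stop being a problem for relays that negotiate this.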