 I mean it won’t work in a browser anyway? Noise definitely the way to go if you’re building something without compat, and this won’t get compat anyway 
 True, it’s not any more compatible if browsers don’t adopt it — perhaps just easier to implement?

I imagine it will be easier for new app developers to integrate basic TLS with key support than to set up Noise or libp2p (so their apps connect to @HORNETS nostr relays).

Still going to offer libp2p QUIC support for apps that want to go the extra mile (TLS is built-in without CAs). 
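
For illustration, a minimal sketch of what that setup looks like with go-libp2p (assuming a recent release; the listen address and port are placeholders, and the host generates its own key pair by default):

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
)

func main() {
	// QUIC carries TLS 1.3 using a self-signed certificate bound to the
	// host's own key pair, so no certificate authority is involved.
	h, err := libp2p.New(
		libp2p.Transport(libp2pquic.NewTransport),
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/udp/4001/quic-v1"), // placeholder address
	)
	if err != nil {
		panic(err)
	}
	defer h.Close()

	fmt.Println("peer ID:", h.ID())
	fmt.Println("listening on:", h.Addrs())
}
```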
 BTW raw public keys in TLS were defined in https://datatracker.ietf.org/doc/html/rfc7250 (but I think that was TLS 1.2 days), and TLS 1.3 https://datatracker.ietf.org/doc/html/rfc8446 mentions them. 
 TLS is a really bad protocol if you're doing something greenfield. Please don't ever use it unless you're stuck in the web browser world. 
 Libp2p QUIC with npub is faster than websockets with DHKE noise.

In the case of Nostr, Libp2p QUIC provides better security against MITM attacks… if you know the relay’s npub and can establish an encrypted connection with it. Npub is used as the Libp2p ID.
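
Roughly what dialing with a pinned key looks like in go-libp2p (a sketch; the multiaddr format is real, but the address and peer ID in the comment are placeholders):

```go
import (
	"context"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	multiaddr "github.com/multiformats/go-multiaddr"
)

// dialPinned connects to a relay at a known address. The trailing /p2p/
// component pins the relay's public key: if the QUIC/TLS handshake doesn't
// prove ownership of that key, the dial fails, which is what stops a MITM.
func dialPinned(ctx context.Context, h host.Host, raw string) error {
	// Placeholder shape: "/ip4/203.0.113.7/udp/4001/quic-v1/p2p/12D3KooW..."
	addr, err := multiaddr.NewMultiaddr(raw)
	if err != nil {
		return err
	}
	info, err := peer.AddrInfoFromP2pAddr(addr)
	if err != nil {
		return err
	}
	return h.Connect(ctx, *info)
}
```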

 If you don't know the key of the node you're connecting to, then Noise is indeed the way to go (ephemeral key generation), since you can't use a known npub to stop MITMs. CAs were created precisely to stop MITMs; this gives us our own way of doing it, provided you have the relay's key from a trusted source beforehand. 
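
For that unknown-key case, a sketch of the initiator side of a Noise XX handshake using the flynn/noise Go library (the library choice is mine for illustration, not something from the thread):

```go
import (
	"crypto/rand"

	"github.com/flynn/noise"
)

// newXXInitiator prepares the initiator side of a Noise XX handshake. In XX,
// both sides generate static keys and exchange them inside the handshake
// itself, which fits when neither peer knows the other's key beforehand
// (at the cost of no built-in MITM resistance).
func newXXInitiator() (*noise.HandshakeState, error) {
	cs := noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashBLAKE2s)
	static, err := cs.GenerateKeypair(rand.Reader)
	if err != nil {
		return nil, err
	}
	// The three XX messages are then driven with WriteMessage/ReadMessage
	// until the handshake completes and yields the transport cipher states.
	return noise.NewHandshakeState(noise.Config{
		CipherSuite:   cs,
		Pattern:       noise.HandshakeXX,
		Initiator:     true,
		StaticKeypair: static,
	})
}
```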
 I’m incredibly, incredibly skeptical that with the amount of data we’re talking about you can even measure the difference in performance on a LAN, let alone the internet. 
 The point about MITMs is a bit more important than the speed. :-)

Sure, it might be a tiny difference in speed…

QUIC is known to need fewer round trips than normal TLS, which means it's definitely faster than websockets+noise — benchmarking isn't necessary. https://i.nostr.build/oi5HFTZLzFdvoJoP.gif  
 AFAIR QUIC has the same number of round trips as normal TLS if you set the TCP options right. Basically it shaves off RTs because it begins the TLS handshake in the SYN. You can do that with TCP, too, doubly so if you aren’t using a TLS library that sets socket options for you. The claim in your diagram that you need 0 full RTs to do QUIC setup is nonsense, that’s just if you’ve spoken to the server before and it has cached keys, but the 0 RTT TLS stuff isn’t being implemented in generic HTTP stacks because of replay issues. 
 You could theoretically tailor TCP + TFO + Noise to achieve 1 RTT, but that sounds like a headache to implement. If any pre-made libraries are available with that setup, drop a link! 🔗
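
For reference, the TFO half of that is small on Linux; a sketch in Go (assumes kernel 4.11+ with TCP Fast Open enabled; the Noise handshake would then run over the returned connection, its first message riding in the SYN payload):

```go
import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// tfoDial opens a TCP connection with client-side TCP Fast Open, so the
// first payload written (e.g. the first Noise handshake message) can ride
// in the SYN. Linux-only: requires kernel 4.11+ and tcp_fastopen enabled.
func tfoDial(ctx context.Context, addr string) (net.Conn, error) {
	d := net.Dialer{
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			err := c.Control(func(fd uintptr) {
				serr = unix.SetsockoptInt(int(fd), unix.IPPROTO_TCP,
					unix.TCP_FASTOPEN_CONNECT, 1)
			})
			if err != nil {
				return err
			}
			return serr
		},
	}
	return d.DialContext(ctx, "tcp", addr)
}
```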

While it’s true that QUIC’s 0-RTT mode isn’t widely used due to replay attack risks, libp2p QUIC achieves 1 RTT with encryption, which is still faster than typical WebSockets over TLS (3 RTT).
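
Roughly, the round trips being counted there, assuming fresh connections with no TFO and no 0-RTT resumption:

```
WebSocket over TLS 1.3:
  TCP handshake       1 RTT
  TLS 1.3 handshake   1 RTT
  WebSocket upgrade   1 RTT
  total               3 RTT before the first frame

QUIC v1:
  combined transport + TLS 1.3 handshake = 1 RTT before the first stream data
```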

What’s neat is that libp2p exchanges peer IDs during the QUIC handshake, meaning MITM attacks are mitigated if you’ve already retrieved the relay’s key from a trusted profile.

Why do you dislike QUIC/TLS so much if it’s free of CAs? How does it compare to TCP+TFO+Noise? 
 TLS's issues aren't just CAs being a mess; it's also an anachronistic protocol that just isn't how you'd design something today. 1.3 is better, sure, but it carries tons of legacy garbage and most clients still have fallbacks and logic for it.

I also dislike QUIC for being a lazy, poor version of TCP. Middleboxes suck sometimes but sometimes do useful things (eg on a plane TCP is terminated before it goes to the sat, improving latency and throughput compared to UDP things with retransmissions, middleboxes can use MSS clamping to avoid fragmentation, etc). QUIC largely failed to consider these things and just said “screw all middleboxes” (compare to eg tcpinc which got similar encryption properties without being lazy). QUIC exists to rebuild TCP in user-space cause kernels sometimes suck, but for those of us with an operating system newer than five years old that’s not a problem we have. Worse, sometimes your OS has specific useful features (eg MP-TCP) that you don’t want twenty apps to have to rewrite. FFS this is literally the point of having an OS! The only promise QUIC made that isn’t as trivial in TCP is FEC, but they gave up on it cause…I dunno why. 
 Note that QUIC is useful on the web for helping to avoid the multi-connection (and associated initial small window sizes)/head-of-line-blocking tradeoff. But if you aren't fetching a bunch of resources from the same server across different streams, where you can use each resource individually on its own, this doesn't apply (and it requires integration work to make it apply). 
 We are indeed fetching a bunch of resources from the same server across different streams.

Libp2p allows us to multiplex, so we can open as many bidirectional streams as we want over a single connection; it's awesome. We use it for Airlock (the permission system for the decentralized GitHub).
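
A sketch of what that looks like with go-libp2p streams (the /airlock/1.0.0 protocol ID is made up for illustration):

```go
import (
	"context"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
)

const airlockProto = protocol.ID("/airlock/1.0.0") // hypothetical protocol ID

// serve registers a handler; each inbound stream is an independent
// bidirectional byte pipe multiplexed over the one QUIC connection.
func serve(h host.Host) {
	h.SetStreamHandler(airlockProto, func(s network.Stream) {
		defer s.Close()
		// read the request from s, write the response back...
	})
}

// openStream opens one more stream to an already-connected peer; no extra
// connection setup or handshake round trips are needed.
func openStream(ctx context.Context, h host.Host, p peer.ID) (network.Stream, error) {
	return h.NewStream(ctx, p, airlockProto)
}
```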

I understand your point about the OS handling TCP instead of each app handling networking individually, which does make a lot of sense. I wish there were a plug-and-play TCP+TFO+Noise library that could handle multiplexing! Would be a nice addition to include in libp2p. 
 I mean if you drop the TFO requirement it’s easy - just open many connections. But just fetching many resources isn’t sufficient to want QUIC - you have to be doing so immediately after opening the connection(s), the resources have to be non-trivial in size (think 10s of packets, so the text of a note generally doesn’t qualify) and have a need for not blocking one on another, which is generally not the case on a mobile app - server can send the first three things you need to paint the full window first and then more later. 
 It’s a desktop app for decentralized GitHub on Nostr. The amount of data is non-trivial in size (sometimes). Repos can be large. This is why we’re using merkle tree chunking for large files as well. I want the reduced RTT. 
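
A minimal sketch of the chunk-and-hash idea (the 256 KiB chunk size and plain SHA-256 pairing are assumptions for illustration, not HORNETS' actual format):

```go
import "crypto/sha256"

const chunkSize = 256 * 1024 // 256 KiB; the real chunk size is an assumption here

// merkleRoot splits a blob into fixed-size chunks, hashes each, and folds
// the hashes pairwise into a single root. Any chunk can later be verified
// against the root with a log-sized proof instead of refetching the file.
func merkleRoot(data []byte) [32]byte {
	var level [][32]byte
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		level = append(level, sha256.Sum256(data[off:end]))
	}
	if len(level) == 0 {
		level = append(level, sha256.Sum256(nil)) // empty input: hash of nothing
	}
	for len(level) > 1 {
		var next [][32]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i]) // odd node is carried up unchanged
				continue
			}
			pair := append(level[i][:], level[i+1][:]...)
			next = append(next, sha256.Sum256(pair))
		}
		level = next
	}
	return level[0]
}
```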
 It’s just a head-of-line-blocking question, though…I imagine mostly you’re not downloading lots of CSS/JS/images, which is the big head-of-line issue HTTP clients have - they can render the page partially in response to each new item they get off the wire.

I assume you don’t, really, though? You presumably get events back from the server in time-order and populate the page down in time-order, so mostly one event stalling the next won’t make all that much difference in UX?