 I was aware of this for IPFS, but was curious to see if libp2p was at least somewhat useful 
 "Somewhat useful" is about right.  But then they extrapolate to a "new internet" etc.  Same people.  The thing I found with this, and so many solutions like it, is that really strange things happen to your network after a while.  It could be a few days, a week, or a month.  The first time, I thought my router was broken.  But strange things just happen: DNS stops working, or everything in the house goes slow.  There are so many of these little "paper cuts".  I'm skeptical they will all be fixed one day.  After all, LANs are set up to stop this.  Which is why WebSockets are great: they get through every firewall.  That's the reason Nostr works. 
 they have three different handshakes (TLS, QUIC, Noise) and multiple transports. sigh... someone should build a version of this that is a bit more opinionated. TURN/STUN is also still really hard; they have never been that reliable. I tried once with libdatachannel with no luck. 
 You can choose your handshake settings. If you use QUIC, it skips Noise, since QUIC already encrypts every connection with TLS 1.3; but if you're using WebSockets with Noise enabled, then it will use Noise. 
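That selection logic can be sketched as a tiny function. This is a conceptual illustration only, not the real libp2p API; the transport names and return values are made up for the example:

```python
def security_layer(transport: str, noise_enabled: bool) -> str:
    """Pick the channel-security handshake for a libp2p-style connection.

    Conceptual sketch only -- the identifiers here are hypothetical,
    not real libp2p configuration values.
    """
    if transport == "quic":
        # QUIC already encrypts every connection with TLS 1.3,
        # so no extra handshake is layered on top.
        return "builtin-tls13"
    if transport == "websocket" and noise_enabled:
        # Plain stream transports need their own security handshake.
        return "noise"
    # Assumed fallback for illustration.
    return "tls"

print(security_layer("quic", noise_enabled=True))       # builtin-tls13
print(security_layer("websocket", noise_enabled=True))  # noise
```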
 What you’re complaining about is literally its biggest feature…

libp2p gives you the option to select almost any transport and handshake from a config. It’s modular and flexible. Once it’s set up, you have a world of options at your fingertips. 🌎 
 Would be nice to have a version that removed the pubsub options & cleaned up things a bit though. 
 If you used Libp2p and thought "this is garbage":

1. You are definitely right.
2. Check out Iroh; guaranteed it will "just work".

disclosure: Iroh uses Pkarr to find nodes by their ID because they don't have a DHT themselves, but their hole punching is impeccable and they are artisans who mind their craft very well.

nostr:note1g72frpre57ld5dcfxhlr4cpulqcl6dhkvjxr657ly843amek3m8spza2uk 
 Just installed iroh.  Looks alright.

How does it use #pkarr? 
 It doesn't have a DHT, so if you want to connect to a peer there are two options:
1. originally it only used a "ticket" that another peer gives you telling you what relay (think STUN server but better) they are using.

2. use Pkarr to connect to a peer by their ed25519 key instead of having to learn their relay directly from them. So if they switch to another relay, you won't lose the connection.  
 Iroh dev here.

The core of iroh is p2p QUIC with dial by node id and very good NAT hole punching, so you almost always get direct connections.

We need a global mechanism to publish some information (a relay URL and optional direct addresses) for an ed25519 public key.

We have multiple mechanisms, one of them pkarr.

So far it works really, really well. Both speed and reliability are comparable to DNS, but we don’t have to run infrastructure. It is just nice in terms of operations, even if you don’t care about p2p for ideological reasons.

See https://docs.rs/iroh-net/latest/iroh_net/discovery/index.html 
 The Iroh dev is on Nostr. If you are considering using Libp2p, or any other hole-punching tech, you should depend on them instead; you won't find better open-source engineering around this problem.

nostr:note1qtctn5htzmvgawt793dph79klpj5zcnrlyx7wvnns968nz9wcehqzcey7d 
 For the record, the reason Nostr has Negentropy is because Iroh team popularised the Range-based Set Reconciliation paper in the community.

nostr:note1qtctn5htzmvgawt793dph79klpj5zcnrlyx7wvnns968nz9wcehqzcey7d 
 does this check out, nostr:nprofile1qqszrq3cgvfe89vadjrp0gaa3xfs82txpx6y5ezwjuufzqu20h5xytgppemhxue69uhkummn9ekx7mp0qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qg7waehxw309ahx7um5wgkhqatz9emk2mrvdaexgetj9ehx2ap0f0a7qk? 
 he graciously mentioned my name in the acknowledgement section https://logperiodic.com/rbsr.html 
 I wouldn't have forwarded this paper, if I didn't learn about it from Iroh's team. 
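For readers unfamiliar with the paper, the core trick of range-based set reconciliation can be simulated locally in a few lines. This is a toy sketch with both sets on one machine; a real protocol like Negentropy exchanges fingerprints over the wire and uses a different encoding and hash construction:

```python
import hashlib

def fingerprint(items: list[str]) -> bytes:
    # Real protocols use incremental/homomorphic hashes; SHA-256 of the
    # concatenation is enough for this local simulation.
    h = hashlib.sha256()
    for item in items:
        h.update(item.encode() + b"\x00")
    return h.digest()

def in_range(ids: list[str], lo: str, hi: str) -> list[str]:
    return [i for i in ids if lo <= i < hi]

def reconcile(a: list[str], b: list[str], lo: str, hi: str, diff: set[str]) -> None:
    """Collect ids present on only one side within [lo, hi).

    If a range's fingerprints match, the whole range is skipped;
    otherwise the range is split and the halves are compared recursively.
    """
    ra, rb = in_range(a, lo, hi), in_range(b, lo, hi)
    if fingerprint(ra) == fingerprint(rb):
        return  # identical range: one fingerprint settles it
    if len(ra) <= 2 or len(rb) <= 2:
        diff |= set(ra) ^ set(rb)  # small range: just exchange the ids
        return
    merged = sorted(set(ra) | set(rb))
    mid = merged[len(merged) // 2]
    reconcile(a, b, lo, mid, diff)
    reconcile(a, b, mid, hi, diff)

alice = ["a1", "a2", "b1", "c3", "d4", "e5"]
bob   = ["a1", "b1", "b2", "c3", "e5", "f6"]
diff: set[str] = set()
reconcile(alice, bob, "", "\xff", diff)
print(sorted(diff))  # ['a2', 'b2', 'd4', 'f6']
```

Large identical ranges cost a single fingerprint comparison, which is why this scales so much better than sending full id lists.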
 sorry, I didn't know you were Aljoscha Meyer 
 I am not. I am the one who forwarded the paper when I saw it mentioned by Iroh team.

Aljoscha is a wizard. 
 It's enormously complex.  Nostr gained popularity through its simplicity. 
 I don't think it is conceptually complex at all. But implementations will have to be complex to be performant.

I don't mind complexity when it is justified. At some point Pubky might add Merkle trees, which are at least as complex as Negentropy, but they are the right thing to do if we want Homeservers to sign data, have version control, and have even more efficient sync than Git. 
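A minimal illustration of why a Merkle tree helps here (a toy sketch, not Pubky's actual design; the key names are made up): a single root hash lets two parties confirm their whole key-value stores match, or detect a change, without comparing every entry:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(kv: dict[str, bytes]) -> bytes:
    """Root hash over the sorted key/value leaves of a store."""
    leaves = [h(k.encode() + b"\x00" + v) for k, v in sorted(kv.items())]
    if not leaves:
        return h(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:  # duplicate the last leaf on odd-sized levels
            leaves.append(leaves[-1])
        leaves = [h(leaves[i] + leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]

store_a = {"pub/posts/1": b"hello", "pub/posts/2": b"world"}
store_b = dict(store_a)
print(merkle_root(store_a) == merkle_root(store_b))  # True: identical stores, one comparison
store_b["pub/posts/2"] = b"changed"
print(merkle_root(store_a) == merkle_root(store_b))  # False: any edit changes the root
```

Signing the root also gives the version-control property mentioned above: one signature covers the entire state of the store at that moment.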
 Both bad ideas.  Git is fine. 
 No, it is not fine. Git is very bad for key-value store sync. If you want to sync all files with a specific prefix or within a specific range, you are screwed.

That doesn't matter when you have 100 rarely changing files, but it does matter when you have tens of thousands and the list changes very frequently.
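The kind of query being described is trivial against a sorted key space: two binary searches bound the prefix, and everything between them is the answer. A generic sketch (the key names are made up):

```python
import bisect

def prefix_range(sorted_keys: list[str], prefix: str) -> list[str]:
    """All keys sharing a prefix, via two binary searches on a sorted list --
    the cheap range query a sorted key-value store gives you and Git doesn't."""
    lo = bisect.bisect_left(sorted_keys, prefix)
    hi = bisect.bisect_left(sorted_keys, prefix + "\xff")
    return sorted_keys[lo:hi]

keys = sorted(["pub/app1/a", "pub/app1/b", "pub/app2/x", "priv/notes"])
print(prefix_range(keys, "pub/app1/"))  # ['pub/app1/a', 'pub/app1/b']
```

With tens of thousands of frequently changing keys, this is O(log n) per lookup, whereas Git would have you walking trees or diffing commits to answer the same question.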
 It was Steve Jobs who said "increase the simplicity by 10%, and double the adoption".

I'm sure a very complex solution will work (bear in mind, solutions never seem complex in the mind of the developer who implements them), but adoption suffers exponentially.

Git is well adopted, and that is quite important. 
 As you can see, it is going to be an optional API, as evidenced by the fact that it doesn't exist yet.

If you don't want to help lower the cost of crawling Homeservers, you won't need it. But the developers working on our Indexer are already feeling the pain of not having a way to cheaply sync with the homeserver. This is going to get way worse once users are able to migrate between servers: suddenly indexers will find themselves downloading a user's entire data twice unless they have a cheap way to recognise that they already did that work before.

Wiring Git into the homeserver is not only insufficient for our needs; it might be more complex too. 
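One cheap way to avoid that duplicated work, sketched generically (an illustration, not the Pubky indexer's actual mechanism; the record paths are made up): compare per-record content hashes against what was already indexed, and fetch only what is new:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def to_fetch(remote: dict[str, str], indexed_hashes: set[str]) -> list[str]:
    """Given record path -> content hash advertised by the new server,
    return only the paths whose content the indexer has never seen."""
    return [path for path, d in remote.items() if d not in indexed_hashes]

# The indexer already holds these records from the user's old homeserver.
indexed = {digest(b"post one"), digest(b"post two")}

# After migration, the new homeserver advertises its records by hash.
remote = {"posts/1": digest(b"post one"), "posts/3": digest(b"post three")}

print(to_fetch(remote, indexed))  # ['posts/3']
```

The same idea is what the Merkle/Negentropy machinery buys at scale: recognising already-done work by content, so a migration does not force a full re-crawl.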
 I think you might be slightly dismissing the complexity here; we've been working on this for 10 years at MIT and with Solid.  These are hard problems with many edge cases.  But I don't underestimate your ability to solve hard problems.  If you get it working well, we will use it! 
 You're defeating your own argument here.  Your thesis is that BitTorrent Mainline is censorship-resistant because it verifiably has millions of nodes and a 15-year track record, while anything smaller is untested and experimental.

Git also has a 15-year track record and millions running the software.  Something new is untested and experimental.

As you keep pointing out, we'll never get a chance to bootstrap such networks again.   
 Git is not a network though. And I wouldn't bother building an alternative to Git if I never faced a need that can't be satisfied by Git.

So if the need never really came up, or if you then showed me how to satisfy it with just Git, I would do that.

But the argument about networks should not be abused for non-network stuff. For example, we gain nothing from interop with OAuth when we are only authorising our apps to our homeservers, so inventing something simpler and more fit to our needs than OAuth is the right decision, and it doesn't matter how old or well tested OAuth is.

Specific arguments are not universal dogma, they need context and judgment to apply to other situations.

Regardless, even if we build our own sync system, it doesn't affect anyone who would rather ignore it. Some of their flows might suffer (like continuous mirroring and backup), or might not, because they use Git and are satisfied with that; but as long as they stick to Pkarr, HTTP, and the simple PUT, GET, DELETE APIs, apps will just work.

We are dedicated to the simplest setup necessary to achieve our goals, but no simpler. Worshipping simplicity too far results in sacrificing important features, like proper key management in Nostr. By definition, managing Pubky keys is going to be more complex (spec-wise) while offering nicer UX, and that is the balance. 
 and of course, when things are new and experimental, my tone pushing them on people will be appropriately humble. much more humble than when I talk about Pkarr.

If I don't know that something is rock solid yet, I won't talk as if it is. 
 At the end of the day, Indexers will have a certain cost to run, and lowering that cost with complexity is a cost-benefit analysis: do you want running Indexers to be cheap(er) and more competitive, and how much complexity would you like to pay for that?

And whatever happens, we will never expose any of this complexity to users or light-client app devs, unless they go out of their way to leverage it.

We haven't started on any of this anyway, but it is important to have plans for what seem like inevitable needs.