Oddbean
▲ ▼
 Just installed iroh. Looks alright.

How does it use #pkarr? 
▲ ▼
 It doesn't have a DHT, so if you want to connect to a peer there are two options:
1. Originally it only used a "ticket" that another peer gives you, telling you what relay (think STUN server, but better) they are using.

2. Now it can use Pkarr to connect to a peer by their ed25519 key, instead of having to learn their relay directly from them. So if they switch to another relay, you won't lose the connection.
▲ ▼
 Iroh dev here.

The core of iroh is p2p QUIC with dial by node id and very good NAT hole punching, so you almost always get direct connections.

We need a global mechanism to publish some information (a relay URL and optional direct addresses) for an ed25519 public key.

We have multiple mechanisms, one of them pkarr.

So far it works really, really well. Both speed and reliability are comparable to DNS, but we don't have to run infrastructure. It is just nice in terms of operations, even if you don't care about p2p for ideological reasons.

See https://docs.rs/iroh-net/latest/iroh_net/discovery/index.html 
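As a toy illustration of that publish/resolve mechanism (everything here is invented for the sketch: the real system signs records with the node's ed25519 key and publishes them via pkarr/DNS, not an in-memory dict, and HMAC below is only a stand-in since it cannot be verified publicly):

```python
import hashlib
import hmac

class Registry:
    """Stand-in for a global lookup layer (pkarr, DNS, a DHT, ...)."""
    def __init__(self):
        self._records = {}

    def publish(self, node_id, record, sig):
        self._records[node_id] = (record, sig)

    def resolve(self, node_id):
        return self._records.get(node_id)

def sign(secret: bytes, record: dict) -> bytes:
    # HMAC stands in for an ed25519 signature over the record.
    payload = repr(sorted(record.items())).encode()
    return hmac.new(secret, payload, hashlib.sha256).digest()

registry = Registry()
secret = b"node-secret-key"
node_id = hashlib.sha256(secret).hexdigest()  # toy stand-in for a public key
record = {"relay_url": "https://relay.example", "addrs": ["192.0.2.1:4433"]}
registry.publish(node_id, record, sign(secret, record))

found, sig = registry.resolve(node_id)
assert found["relay_url"] == "https://relay.example"
```

The point of the shape: anyone holding only the node id can find the current relay, so peers keep working when a node changes relays.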
▲ ▼
 The iroh dev is on Nostr. If you are considering using libp2p, or any other hole-punching tech, you should depend on them; you won't find better open-source engineering around this problem.

nostr:note1qtctn5htzmvgawt793dph79klpj5zcnrlyx7wvnns968nz9wcehqzcey7d 
▲ ▼
 For the record, the reason Nostr has Negentropy is that the Iroh team popularised the Range-based Set Reconciliation paper in the community.

nostr:note1qtctn5htzmvgawt793dph79klpj5zcnrlyx7wvnns968nz9wcehqzcey7d 
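For readers who haven't seen the paper, the core idea fits in a few lines. This toy (Python, all names invented) compares XOR fingerprints over ranges and only recurses into ranges that differ, so identical subranges cost a single fingerprint exchange; it illustrates the concept behind Negentropy, not its actual wire format.

```python
import hashlib

def item_hash(item: int) -> int:
    return int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")

def fingerprint(items) -> int:
    # XOR of per-item hashes: order-independent and cheap to compare.
    fp = 0
    for it in items:
        fp ^= item_hash(it)
    return fp

def reconcile(a, b, lo, hi, need_a, need_b):
    """Find items in [lo, hi) held by only one side, recursing into halves."""
    ra = {x for x in a if lo <= x < hi}
    rb = {x for x in b if lo <= x < hi}
    if fingerprint(ra) == fingerprint(rb):
        return  # (almost certainly) identical ranges: done in one message
    if hi - lo <= 2 or max(len(ra), len(rb)) <= 2:
        need_a |= rb - ra  # items only b has
        need_b |= ra - rb  # items only a has
        return
    mid = (lo + hi) // 2
    reconcile(a, b, lo, mid, need_a, need_b)
    reconcile(a, b, mid, hi, need_a, need_b)

a = {1, 5, 9, 42, 100}
b = {1, 5, 9, 57, 100}
need_a, need_b = set(), set()
reconcile(a, b, 0, 128, need_a, need_b)
# a only needs to fetch 57, b only needs to fetch 42
```

In a real protocol the two sides exchange fingerprints over the wire instead of sharing memory, but the recursion pattern is the same.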
▲ ▼
 does this check out, nostr:nprofile1qqszrq3cgvfe89vadjrp0gaa3xfs82txpx6y5ezwjuufzqu20h5xytgppemhxue69uhkummn9ekx7mp0qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qg7waehxw309ahx7um5wgkhqatz9emk2mrvdaexgetj9ehx2ap0f0a7qk? 
▲ ▼
 he graciously mentioned my name in the acknowledgement section https://logperiodic.com/rbsr.html 
▲ ▼
 I wouldn't have forwarded this paper if I hadn't learned about it from the Iroh team.
▲ ▼
 Sorry, I didn't know you were Aljoscha Meyer.
▲ ▼
 I am not. I am the one who forwarded the paper when I saw it mentioned by Iroh team.

Aljoscha is a wizard. 
▲ ▼
 It's enormously complex. Nostr gained popularity through its simplicity.
▲ ▼
 I don't think it is conceptually complex at all. But implementations will have to be complex to be performant.

I don't mind complexity when it is justified. At some point Pubky might add Merkle trees, which are at least as complex as Negentropy, but that is the right thing to do if we want Homeservers to sign data, have version control, and have even more efficient sync than Git.
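A rough sketch of why Merkle trees help here (purely illustrative, not Pubky's actual design): equal root hashes prove the whole store matches, and a differing root narrows sync down to the changed subtree instead of re-transferring everything.

```python
import hashlib

def leaf(k, v):
    return hashlib.sha256(f"leaf:{k}={v}".encode()).digest()

def merkle_root(pairs):
    """pairs: list of (key, value) tuples, sorted by key."""
    nodes = [leaf(k, v) for k, v in pairs]
    if not nodes:
        return hashlib.sha256(b"empty").digest()
    while len(nodes) > 1:
        nxt = []
        for i in range(0, len(nodes), 2):
            # Pair up siblings; an odd node is carried up with an empty right side.
            pair = nodes[i] + (nodes[i + 1] if i + 1 < len(nodes) else b"")
            nxt.append(hashlib.sha256(b"node:" + pair).digest())
        nodes = nxt
    return nodes[0]

old = [("posts/1", "hi"), ("posts/2", "yo")]
new = [("posts/1", "hi"), ("posts/2", "edited")]
assert merkle_root(old) != merkle_root(new)  # one changed value changes the root
```

Signing only the root also gives you the "Homeservers sign data" property: one signature authenticates the whole store.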
▲ ▼
 Both bad ideas. Git is fine.
▲ ▼
No, it is not fine. Git is very bad for key-value store sync. If you want to sync all files with a specific prefix or within a specific range, you are screwed.

That doesn't matter when you have 100 rarely changing files, but it does matter when you have tens of thousands and the list changes very frequently.
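The prefix case above is exactly what a sorted key index makes cheap: a binary search plus a contiguous scan, with no need to walk unrelated paths. A minimal sketch (hypothetical key layout):

```python
import bisect

keys = sorted([
    "pub/feed/001", "pub/feed/002", "pub/files/a.txt",
    "pub/posts/001", "pub/posts/002", "pub/posts/003",
])

def prefix_range(keys, prefix):
    """All keys starting with prefix, in O(log n + k) over a sorted list."""
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_left(keys, prefix + "\xff")  # just past the prefix block
    return keys[lo:hi]

assert prefix_range(keys, "pub/posts/") == [
    "pub/posts/001", "pub/posts/002", "pub/posts/003"]
```

A Git tree can answer this too, but only by walking the tree objects; it has no flat, ordered key space you can range-query or reconcile directly.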
▲ ▼
 It was Steve Jobs who said "increase the simplicity by 10%, and double the adoption".

I'm sure a very complex solution will work (bear in mind solutions are never complex in the mind of the developer who implements them), but adoption suffers exponentially.

Git is well adopted and that is quite important. 
▲ ▼
 As you can see, it is going to be an optional API, as evidenced by the fact that it doesn't already exist.

If you don't want to help lower the cost of crawling Homeservers, you won't need it. But the developers working on our Indexer are already feeling the pain of not having a way to cheaply sync with the homeserver. This is going to get way worse once users are able to migrate between servers: suddenly, indexers will find themselves downloading a user's entire data twice if they don't have a cheap way to recognise that they already did that work before.

Wiring Git into the homeserver is not only insufficient for our needs; it might be more complex too.
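As a sketch of the kind of cheap recognition meant here (names hypothetical): an indexer that remembers content hashes can skip blobs it has already ingested, whichever server now hosts them.

```python
import hashlib

class Indexer:
    def __init__(self):
        self.seen = set()      # content hashes already indexed
        self.downloads = 0     # expensive work actually performed

    def ingest(self, blob: bytes) -> bool:
        digest = hashlib.sha256(blob).hexdigest()
        if digest in self.seen:
            return False       # already indexed elsewhere: skip
        self.seen.add(digest)
        self.downloads += 1
        return True

idx = Indexer()
user_data = [b"post one", b"post two"]
for blob in user_data:         # first crawl, old homeserver
    idx.ingest(blob)
for blob in user_data:         # re-crawl after migration: everything skipped
    idx.ingest(blob)
assert idx.downloads == 2      # the migration cost nothing extra
```

In practice you would want to skip the download itself, not just the indexing, which is where a sync protocol that exchanges hashes before bodies earns its keep.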
▲ ▼
 I think you might be slightly dismissing the complexity here; we've been working on this for 10 years at MIT and with Solid. These are hard problems with many edge cases. But I don't underestimate your ability to solve hard problems. If you get it working well, we will use it!
▲ ▼
You're defeating your own argument here. Your thesis is that BitTorrent Mainline is censorship-resistant because it verifiably has millions of nodes and a 15-year track record, and that something smaller is untested and experimental.

Git also has a 15-year track record and millions running the software. Something new is untested and experimental.

As you keep pointing out we'll never get a chance to bootstrap such networks again.   
▲ ▼
 Git is not a network though. And I wouldn't bother building an alternative to Git if I never faced a need that can't be satisfied by Git.

So if the need never really came up, or if you then showed me how to satisfy it with just Git, I would do that.

But the argument about networks should not be abused for non-network stuff. For example, we gain nothing from interop with OAuth when we are only authorising our apps to our homeservers, so inventing something simpler and more fitted to our needs than OAuth is the right decision, and it doesn't matter how old or well tested OAuth is.

Specific arguments are not universal dogma, they need context and judgment to apply to other situations.

Regardless, even if we built our own sync system, it doesn't affect anyone who would rather ignore it. Some of their flows might suffer (like continuous mirroring and backup), or they might not suffer because they use Git and are satisfied with that; but as long as they stick to Pkarr, HTTP, and the simple PUT, GET, DELETE APIs, apps will just work.

We are dedicated to the simplest setup necessary to achieve our goals, but no simpler. Worshipping simplicity too far results in sacrificing important features, like proper key management in Nostr. By definition, managing Pubky keys is going to be more complex (spec-wise) but will offer nicer UX, and that is the balance.
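For a sense of scale, the simple surface mentioned above fits in a few lines; here is a dict-backed toy (class and method names made up for illustration), with any sync machinery assumed to sit behind this interface without changing it:

```python
class Homeserver:
    """Toy PUT / GET / DELETE key-value surface."""
    def __init__(self):
        self._store = {}

    def put(self, path: str, body: bytes):
        self._store[path] = body

    def get(self, path: str):
        return self._store.get(path)

    def delete(self, path: str):
        self._store.pop(path, None)

hs = Homeserver()
hs.put("/pub/posts/1", b"hello")
assert hs.get("/pub/posts/1") == b"hello"
hs.delete("/pub/posts/1")
assert hs.get("/pub/posts/1") is None
```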
▲ ▼
 And of course, when things are new and experimental, my tone when pushing them on people will be appropriately humble, much more humble than when I talk about Pkarr.

If I don't know that something is rock solid yet, I won't talk as if it is. 
▲ ▼
At the end of the day, Indexers will have a certain cost to run, and lowering that cost with complexity is a cost-benefit analysis: do you want running Indexers to be cheap(er) and more competitive, and how much complexity are you willing to pay for that?

And whatever happens, we will never expose any of this complexity to users or to light-client app devs, unless they go out of their way to leverage it.

We haven't started on any of this anyway, but it is important to have plans for what seem like inevitable needs.