 I don't think it is conceptually complex at all. But implementations will have to be complex to be performant.

I don't mind complexity when it is justified. At some point Pubky might add Merkle trees, which are at least as complex as Negentropy, but that is the right thing to do if we want Homeservers to sign data, have version control, and sync even more efficiently than Git.
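To make the Merkle-tree idea concrete, here is a rough sketch (invented names, and it assumes both sides build the same tree shape over the key space); it is not Pubky's design. Equal subtree hashes are skipped entirely, so only entries that actually changed get transferred, and the root hash is the one value a Homeserver could sign.

```typescript
// Invented, minimal Merkle diff: not a Pubky API, just the shape of the idea.
import { createHash } from "node:crypto";

type Node =
  | { kind: "leaf"; key: string; hash: string }
  | { kind: "branch"; hash: string; left: Node; right: Node };

const sha = (s: string) => createHash("sha256").update(s).digest("hex");

const leaf = (key: string, value: string): Node => ({ kind: "leaf", key, hash: sha(key + value) });
const branch = (left: Node, right: Node): Node => ({
  kind: "branch",
  hash: sha(left.hash + right.hash),
  left,
  right,
});

// Collect the keys that differ between two trees built over the same key space.
function diff(a: Node, b: Node, out: string[] = []): string[] {
  if (a.hash === b.hash) return out;                 // identical subtree: skip it entirely
  if (a.kind === "leaf" && b.kind === "leaf") {
    out.push(a.key);                                 // this entry actually changed
  } else if (a.kind === "branch" && b.kind === "branch") {
    diff(a.left, b.left, out);                       // only descend where hashes differ
    diff(a.right, b.right, out);
  }
  return out;
}

// Two four-entry stores that differ only in "posts/2":
const mine = branch(
  branch(leaf("posts/1", "a"), leaf("posts/2", "b")),
  branch(leaf("posts/3", "c"), leaf("posts/4", "d")),
);
const theirs = branch(
  branch(leaf("posts/1", "a"), leaf("posts/2", "B")),
  branch(leaf("posts/3", "c"), leaf("posts/4", "d")),
);
console.log(diff(mine, theirs)); // ["posts/2"]
```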
 Both bad ideas.  Git is fine. 
No, it is not fine. Git is very bad for key-value store sync. If you want to sync all files with a specific prefix or within a specific range, you are screwed.

That doesn't matter when you have 100 rarely changing files, but it does matter when you have tens of thousands and the list changes very frequently.
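For instance, here is a made-up sketch (not a Pubky or Git API) of what that operation looks like over a sorted (key, hash) index; on a real sorted store the prefix listing is a single cheap range scan, and Git simply has no equivalent primitive over paths:

```typescript
// Invented sketch: "sync everything under a prefix" over a sorted (key, hash) index.
type Entry = { key: string; hash: string };

// On a sorted index this is one cheap range scan from `prefix` to its upper bound.
const listPrefix = (index: Entry[], prefix: string): Entry[] =>
  index.filter((e) => e.key.startsWith(prefix));

// Keys under `prefix` that a mirror is missing or holds stale copies of,
// i.e. exactly the set it still needs to download.
function staleKeys(server: Entry[], mirror: Entry[], prefix: string): string[] {
  const have = new Map(
    listPrefix(mirror, prefix).map((e): [string, string] => [e.key, e.hash]),
  );
  return listPrefix(server, prefix)
    .filter((e) => have.get(e.key) !== e.hash)
    .map((e) => e.key);
}
```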
It was Steve Jobs who said "increase the simplicity by 10%, and double the adoption".

I'm sure a very complex solution will work (bear in mind that solutions are never complex in the mind of the developer who implements them), but adoption suffers exponentially.

Git is well adopted and that is quite important. 
As you can see, it is going to be an optional API, as evidenced by the fact that it doesn't already exist.

If you don't want to help lower the cost of crawling Homeservers, you won't need it. But the developers working on our Indexer are already feeling the pain of not having a way to cheaply sync with the homeserver. This is going to get way worse once users are able to migrate between servers: suddenly indexers will find themselves downloading a user's entire data twice if they don't have a cheap way to recognise that they have already done that work.
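A rough sketch of the kind of cheap recognition I mean, assuming a hypothetical listing endpoint that returns (path, content hash) pairs; the /_index route and field names below are invented, not an existing homeserver API:

```typescript
// Hypothetical indexer-side sync: skip blobs we already indexed, even if the
// user migrated to a new homeserver. The /_index route and its shape are invented.
type Listed = { path: string; hash: string };

async function syncUser(baseUrl: string, user: string, seen: Map<string, string>) {
  // One cheap listing call instead of re-fetching every blob.
  const entries: Listed[] = await (await fetch(`${baseUrl}/${user}/_index`)).json();

  for (const { path, hash } of entries) {
    if (seen.get(path) === hash) continue;   // already did this work on the old server
    const blob = await (await fetch(`${baseUrl}/${user}/${path}`)).arrayBuffer();
    // ...hand `blob` to the indexer...
    seen.set(path, hash);
  }
}
```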

Wiring Git into the homeserver is not only insufficient for our needs, it might be more complex too.
I think you might be slightly dismissing the complexity here; we've been working on this for 10 years at MIT and with Solid. These are hard problems with many edge cases. But I don't underestimate your ability to solve hard problems. If you get it working well, will use!
You're defeating your own argument here. Your thesis is that BitTorrent Mainline is censorship-resistant because it verifiably has millions of nodes and a 15-year track record, and that something smaller is untested and experimental.

Git also has a 15-year track record and millions running the software. Something new is untested and experimental.

As you keep pointing out, we'll never get a chance to bootstrap such networks again.
Git is not a network though. And I wouldn't bother building an alternative to Git if I hadn't faced a need that can't be satisfied by Git.

So if the need never really came up, or if you then showed me how to satisfy it with just Git, I would do that.

But the network argument should not be abused for non-network stuff. For example, we gain nothing from interop with OAuth when we are only authorising our apps to our own homeservers, so inventing something simpler and better fitted to our needs than OAuth is the right decision, and it doesn't matter how old or tested OAuth is.

Specific arguments are not universal dogma; they need context and judgment to apply to other situations.

Regardless, even if we build our own sync system, it doesn't affect anyone who would rather ignore it. Some of their flows might suffer (like continuous mirroring and backup), or they might not suffer because they use Git and are satisfied with that, but as long as they stick to Pkarr, HTTP, and the simple PUT, GET, DELETE APIs, apps will just work.
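That simple path looks roughly like this (the URL layout is purely illustrative, not a documented homeserver route):

```typescript
// Plain HTTP verbs against a homeserver; the path layout here is only illustrative.
const base = "https://homeserver.example/pubkey/pub/example.app";

await fetch(`${base}/posts/1`, { method: "PUT", body: JSON.stringify({ text: "hello" }) });
const post = await (await fetch(`${base}/posts/1`)).json(); // GET
await fetch(`${base}/posts/1`, { method: "DELETE" });
```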

We are dedicated to the simplest setup necessary to achieve our goals, but not any simpler. Worshipping simplicity too much results in sacrificing important features, like proper key management in Nostr. By definition, managing Pubky keys is going to be more complex (spec-wise) while offering nicer UX, and that is the balance.
And of course, when things are new and experimental, my tone when pushing them on people will be appropriately humble, much more humble than when I talk about Pkarr.

If I don't know that something is rock solid yet, I won't talk as if it is. 
At the end of the day, Indexers will have a certain cost to run, and lowering that cost with complexity is a cost-benefit analysis: do you want running Indexers to be cheap(er) and more competitive, and how much would you like to pay for that?

And whatever happens, we will never expose any of this complexity to either users or light-client app devs, unless they go out of their way to leverage it.

We haven't started on any of this anyway, but it is important to have plans for what seem like inevitable needs.