 nm doesn’t seem like that at all. Some kind of decentralized web node thing? 
 Reminds me of zeronet a bit 
It's decentralized DNS using the BitTorrent network. 
 I gave up trying to understand how their stuff actually works. 
Looks like a copy of https://zeronet.io , which never really took off but was a neat idea 
I guess there was no killer application for it: people who really needed censorship-resistant websites also needed privacy, which BitTorrent doesn't offer.  
 You could put an onion address in every torrent file 
It is easy to think that we are using BitTorrent, since we are using its Mainline DHT, but we are not.

No p2p storage at all. Just a good old web server, but the censorship resistance comes from the fact that you can always point your public key to another hosting provider if you get censored or deplatformed.

In fact, because it is just DNS packets over Mainline, you can use SVCB records to point to mirrors of your data, so even if your main host is taken down, HTTP agents can be smart enough to fail over to reading from your configured mirrors. 
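
To make that concrete, here is a rough sketch of the failover behavior an HTTP agent could implement, assuming you have already parsed the primary host and any SVCB mirror targets out of the signed packet (the function and parameter names here are mine, not part of any Pkarr client):

```python
import requests

def fetch_with_failover(path: str, primary: str, mirrors: list[str]) -> requests.Response:
    """Try the primary host first, then each mirror, returning the first success."""
    last_error: Exception | None = None
    for host in [primary, *mirrors]:              # hosts in priority order
        try:
            resp = requests.get(f"https://{host}{path}", timeout=5)
            if resp.ok:
                return resp
        except requests.RequestException as err:  # host taken down or unreachable
            last_error = err
    raise last_error or RuntimeError("all configured hosts failed")
```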
 Very interesting. If not using BitTorrent's DHT, what is Melvin's talk about millions of nodes referring to? Web servers? DNS servers? 
 These are routing nodes; they have in-memory storage and they only store two things:
1. a routing table pointing to nodes closer to a target infohash (basically a hash table that is automatically sharded)
2. small packets of arbitrary data (and other non-arbitrary data that is less relevant to Pkarr) in an LRU cache.

Nodes churn all the time, and even when they don't churn, your data might be evicted if their cache is full.

So it is great for censorship-resistant routing, but not so much for data durability, let alone storage of large blobs. 
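
A toy sketch of that eviction behavior, to show why packets are not durable (the capacity and types here are illustrative, not Mainline's actual values):

```python
from collections import OrderedDict

class LruStore:
    """Bounded in-memory store, like the arbitrary-data cache on a DHT node."""
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self.items: OrderedDict[bytes, bytes] = OrderedDict()

    def put(self, key: bytes, value: bytes) -> None:
        self.items[key] = value
        self.items.move_to_end(key)              # mark as most recently used
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)       # silently evict the oldest packet

store = LruStore()
for i in range(5):
    store.put(f"target-{i}".encode(), b"signed dns packet")
print(list(store.items))                          # the two oldest entries are gone
```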
 Ok, so the DHT is BitTorrent's Mainline, but it's only used for small signed DNS packets. What are the chances BT clients filter these out as unwanted traffic? 
 BEP0044 is a standard; nodes are free not to support it. It is for arbitrary data, so our traffic is not any different from any other traffic; if we abuse the network we will get rate limited like any other spammer.

That being said, we are doing our best to avoid abusing the network by heavily caching packets with large TTL values.
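
Roughly like this (an illustrative sketch, not the actual Pkarr client cache):

```python
import time

cache: dict[str, tuple[float, bytes]] = {}        # public key -> (expiry time, packet)

def resolve(public_key: str, query_dht) -> bytes:
    """Reuse a cached packet until its TTL expires instead of re-querying the DHT."""
    now = time.time()
    hit = cache.get(public_key)
    if hit and hit[0] > now:
        return hit[1]                             # still fresh, skip the network entirely
    packet, ttl = query_dht(public_key)           # ttl taken from the packet's records
    cache[public_key] = (now + ttl, packet)
    return packet
```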

There is a possibility that nodes might filter out packets that look like DNS, but first, they have no incentive to do that: we are not hammering them or costing them much. Second, it would require a software update distributed to millions of clients, which is not very likely; in fact, networks like this have the opposite problem: legacy.  
 We ARE using the DHT, but we are not using the Torrent aspect.

These are two orthogonal parts of BitTorrent. 

You can use torrent swarms without the DHT (using trackers), and you can use the DHT without using it for connecting to peers and sharing content; you can use it to just advertise small arbitrary packets, which is why Pkarr is possible at all.


You can read more in BEP0044. 
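
For the curious, here is a rough Python sketch of the BEP0044 mutable-item signing step (using PyNaCl for ed25519; the helper names are mine, and this illustrates the BEP itself, not the Pkarr implementation):

```python
from nacl.signing import SigningKey

def bencode_bytes(b: bytes) -> bytes:
    return str(len(b)).encode() + b":" + b

def bencode_int(i: int) -> bytes:
    return b"i" + str(i).encode() + b"e"

def sign_mutable_item(key: SigningKey, value: bytes, seq: int, salt: bytes = b"") -> bytes:
    # Per BEP0044, the buffer to sign is the bencoded (salt,) seq and v entries,
    # e.g. b"3:seqi1e1:v12:Hello world!"
    buf = b""
    if salt:
        buf += b"4:salt" + bencode_bytes(salt)
    buf += b"3:seq" + bencode_int(seq) + b"1:v" + bencode_bytes(value)
    return key.sign(buf).signature

sk = SigningKey.generate()
sig = sign_mutable_item(sk, b"<encoded signed DNS packet goes here>", seq=1)
print(len(sig))                                   # 64-byte ed25519 signature
```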
 Happy to answer your questions. 

But the simplest and closest analogy is DNS and WebDAV.

Pkarr is censorship-resistant DNS.
Our homeservers are a nicer WebDAV in many ways, none of which are revolutionary or unfamiliar to web developers.

Some aspects might feel disgusting to Nostr devs, like the fact that data is not signed client-side (not at the protocol level), but we think that is a good thing, as it allows for better key management and the use of tried and tested alternatives like good old sessions with cookies.

You can, however, ignore the homeserver and just put your npub in Pkarr. We just believe that homeservers allow decades of web development to be put to better use.

As you can tell, Pubky core doesn't answer the discovery question, unlike Nostr, which starts from discovery and then tries to bend relays into being hosting providers with the Outbox model.

For discovery we think crawlers and indexers are the normal solution, just like they were for the early web.

But there are plenty of apps that can be built without discovery, and our strategy is to spend the least amount of complexity to achieve the most leverage before moving to more complex and frankly harder-in-principle problems (like censorship-resistant global search). 
 So there are no feeds, no Twitter-like experience? 
 On one hand, we are building something like that using indexers that go out of their way to crawl homeservers and generate feeds.

On the other hand, that is not part of the core protocol, so I try to make that distinction super clear.

We expect there will be different indexing for different apps and higher-level protocols, and we don't expect that they will all interop, or that indexers can be censorship resistant... it's kinda impossible to make search censorship resistant when it is hella expensive, so only a few will ever do it at large scale.

The web is distributed, search engines are not, and probably never will be. 
 If I understand correctly, to use this people will need to either add a DNS server in their OS or use native apps that can do DNS over HTTPS? 
 So far the usage pattern that makes the most sense is to just query the DHT (or a relay if you are in a browser env) and parse the packet yourself, which is what the Pkarr client does.

That means you then have to take the parsed info and pass it to your HTTP client after manually changing the URL.
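
Something like this hypothetical helper (the names and URL shape are my assumptions, not the Pkarr client API), where resolved_host and resolved_port come from the records you parsed out of the packet:

```python
import requests

def fetch(pkarr_url: str, resolved_host: str, resolved_port: int = 443) -> requests.Response:
    # e.g. pkarr_url = "https://<z-base32 public key>/some/path"
    _authority, _, path = pkarr_url.removeprefix("https://").partition("/")
    rewritten = f"https://{resolved_host}:{resolved_port}/{path}"
    return requests.get(rewritten)
```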

That is also the most secure way, since you get to verify the signature yourself, which won't be possible if you query a DNS server that supports Pkarr.

Ideally, browsers (and curl) will eventually have an embedded Pkarr client, but we are far from that.

We are also working on ways to make Pkarr useful in TLS, and if that pans out we will try to lobby anyone who listens to support it.

So who knows, maybe eventually browsers will not only resolve Pkarr URLs but show a green lock too.