Oh, you are just repeating what that Pubky blog post says about Nostr? That was written by a person that is completely ignorant about all the things. Notice how immediately after they say point-blank that Bluesky has decentralized identity. Maybe it was written by ChatGPT?
I asked for your opinion precisely because I am completely ignorant of the subject and yes I only reported what was written in the article. So what is your opinion about Pubky? Can it be a resource for Nostr? Or is it a better alternative? Thank you if you have time/want to answer.
I haven't looked too deeply into it yet, so I may be talking complete bullshit here, but so far my impression is that Pubky is 3 things:

1. signed entries published on a DHT that associate a pubkey with an HTTP server
2. HTTP servers that can host any file
3. a superstructure for reading content from these HTTP servers and turning them into a global social network

It's a very elegant structure that sounds very compelling to me, but ultimately I don't see how it improves much upon anything Nostr has, and it has significant downsides and unsolved (hidden) problems that Nostr either solves or is trying to solve right now.

2 is cool, but not a very hard problem to solve once you have a way to find these user servers (and, also importantly, someone to host these servers mostly for free). Blossom is doing a similar job with files as first-class citizens. 2 is also not very useful by itself. To make a social network you need a way to efficiently pull content from user servers and display it to users. That's where they came up with 3, which sounds very similar to Bluesky's big central server, which they call "Relay". It's a centralized system that cannot possibly become decentralized. It looks like Pubky has accepted that as the only way to do things, and they seem to be planning on hosting one such big server.
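Layer 1 above can be sketched in miniature: a BEP44-style mutable record that maps a pubkey to the server hosting the user's data, where the record with the higher sequence number wins. All field names here are illustrative, not Pkarr's actual wire format, and a hash stands in for the real Ed25519 signature (the Python stdlib has none).

```python
import hashlib
import json

def make_record(pubkey_hex, server_url, seq, secret):
    """Build a signed pubkey -> HTTP server record (hypothetical layout)."""
    value = json.dumps({"server": server_url}).encode()
    # Stand-in for Ed25519 sign(secret, seq || value); BEP44 signs the
    # (seq, value) pair so that stale records can't be replayed.
    sig = hashlib.sha256(secret + seq.to_bytes(8, "big") + value).hexdigest()
    return {"k": pubkey_hex, "seq": seq, "v": value.decode(), "sig": sig}

def newer(a, b):
    # BEP44 conflict rule: for the same key, the record with the
    # higher sequence number replaces the older one.
    return a if a["seq"] >= b["seq"] else b

old = make_record("a" * 64, "https://old.example", 1, b"secret")
new = make_record("a" * 64, "https://new.example", 2, b"secret")
winner = newer(old, new)  # the seq=2 record
```

The point of the `seq`/signature pair is that anyone can re-publish your record to the DHT on your behalf, but nobody can roll it back to an older server mapping.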
1 is trying to be the most decentralized, censorship-resistant system ever for putting out information about public keys -- and we may discuss whether it achieves that or not (I am personally very skeptical that DHTs can scale, even though nostr:npub1jvxvaufrwtwj79s90n79fuxmm9pntk94rd8zwderdvqv4dcclnvs9s7yqz is going to boldly claim that this is not a topic worth discussing because "Mainline has already proven itself with its bazillion nodes and centuries of existence"; the truth remains that torrents do not work without trackers, and no one knows what will happen to the DHT if it has to store billions of records from people all over the world -- https://newsletter.squishy.computer/p/natures-many-attempts-to-evolve-a is one scenario). But all of this mega-decentralization is completely useless if you don't have a decentralized way to load content from people you follow and have to rely on a giant central server hosted by one big corporation. Pubky's idea seems to be that centralization of content distribution is unavoidable, so they aren't even trying. The idea of Nostr is that such a thing isn't unavoidable, so we are trying.
Thank you so much friend, I knew I could count on your experience 🫂🎨
Most of this is true, except that the DHT skepticism is, as usual, very lazy. Saying that torrents don't work without trackers (which mostly control peer quality and quotas) is wrong (of course they do), but also irrelevant: the DHT's job is to find you peers (or, in the case of Pkarr, mutable arbitrary data), and Mainline does this job perfectly, as in faster and more reliably than you have any right to expect.

As for "what happens when Mainline hosts billions of records"... Nothing. Each node only hosts the maximum it is configured to, and only responds to as many requests as it can handle, or just crashes and churns away. The same will apply to Nostr relays if you want them to host the profile metadata of billions of users. So you tell me, what is more likely to cope gracefully with scale: billions of small nodes running without their owners noticing anything, with automatic sharding, redundancy, and routing-table healing? Or a handful of expensive-to-run servers?

To conclude: in theory, and in practice, Mainline is orders of magnitude more capable of doing this specific job than Nostr relays. If anything needs to face skepticism, it is the newer, smaller, more expensive-to-run network of servers. It is amazing to me that you think decentralizing search is doable, but a DHT can't scale.

That being said, while there will never be decentralized, censorship-resistant search, there will always be permissionless search and discovery, because, just like the web, anyone can find homeservers, crawl them, and run their own indexer. Don't confuse my pessimism towards decentralizing search with whether or not others will be able to permissionlessly index whatever they want. Of course they can, and there's nothing in hell we can do to stop them.

Finally, yes, homeservers are not too hard, but they are a bit better than Blossom, because they are more like an S3 API, with filenames and list pagination, which makes them useful for more apps, as a key-value database, not just blob storage.
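The "S3 API with filenames and list pagination" point can be sketched as a toy key-value store with cursor-based listing. Class and method names are hypothetical, not the actual Pubky homeserver API.

```python
class Homeserver:
    """Toy homeserver store: named keys (paths), not just blobs."""

    def __init__(self):
        self._store = {}  # path -> bytes

    def put(self, path, data):
        self._store[path] = data

    def get(self, path):
        return self._store.get(path)

    def list(self, prefix="", cursor=None, limit=2):
        """Return (page_of_paths, next_cursor); next_cursor is None on the last page."""
        keys = sorted(k for k in self._store if k.startswith(prefix))
        if cursor is not None:
            keys = [k for k in keys if k > cursor]
        page = keys[:limit]
        next_cursor = page[-1] if len(keys) > limit else None
        return page, next_cursor

hs = Homeserver()
hs.put("/posts/1", b"hello")
hs.put("/posts/2", b"world")
hs.put("/posts/3", b"again")
page1, cur = hs.list(prefix="/posts/")          # first 2 paths + cursor
page2, end = hs.list(prefix="/posts/", cursor=cur)  # last path, end is None
```

Named keys with pagination are what make this usable as a key-value database for apps, rather than only content-addressed blob storage.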
Other than insisting on pretending that DHTs won't scale (when they already do), I love that you actually understand the layers pretty clearly. nostr:note1qqqtsufjm9mfmz4nzjvn2nej78q68jcl0eqmgz3zen8cnw3rez3qekdh9z
I will add another dimension to this. Whatever is possible in Nostr is possible in Pubky. Not everything possible in Pubky is possible in Nostr. Whatever schemes you deploy to achieve the features of Pubky won't be simpler or as resistant, but we could easily add signed event-based data and ofc we will have tools for syncing and mirroring. Just change Nostr to PKARR, then we can fight about the data format later, man!
yes, the "bazillion nodes and centuries of existence" argument comes across as unconvincing and a bit insincere to me. Sure, the DHT has existed for a long time and worked reasonably well for torrents, but that doesn't necessarily mean it will for this specific use case. A bit like using the bitcoin blockchain for something new and then saying "it's been proven to work since 2009"
No, we are saying Mainline has this exact capacity, and we are using this capacity in compliance with BEP0044. Furthermore, we are saying that our usage is very respectful of the DHT and doesn't add too much unnecessary traffic to it. And finally we are saying: there are no other networks that have the same capacity, and you can't just manifest millions of nodes with such a track record of surviving attacks by states and corporations. So if you refuse to use this network, you have to justify that by showing a better alternative. And in this case (aiming for censorship resistance), better is synonymous with bigger. If we were insincere, we would have invented a new DHT with fancy bells and whistles and tried to take all the credit. Instead we are saying: use the amazing miracle that we inherited for free. nostr:note1gt4xxvtrdjldwqahdtey0wspzdcnq5eg4xpuda43lhva3n9nphfqcjfwxs
npubs can be self-generated and used with little or no dependency on infrastructure, such as relays. That’s the superpower of #nostr
Yes indeed, public keys can be generated by anyone, anytime, without any infrastructure dependency. This applies to both npubs and Pkarr keys, because the former is secp256k1 and the latter is just Ed25519
But you can have both, by having each key sign the other, and infrastructure for signing is mature in #nostr and #pubky. So you just need something like NIP-05 for pubky.
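A "NIP-05 for pubky" could look like the standard /.well-known/nostr.json document with an extra key table. The "pubky" field below is purely an assumption for illustration; NIP-05 itself only defines "names" (and optionally "relays").

```python
import json

def make_wellknown(name, nostr_pubkey, pkarr_pubkey):
    # Document a server would return for /.well-known/nostr.json?name=<name>
    return json.dumps({
        "names": {name: nostr_pubkey},   # standard NIP-05 field
        "pubky": {name: pkarr_pubkey},   # hypothetical extension
    })

def resolve(doc_json, name):
    """Look up both keys for a short name; None where no mapping exists."""
    doc = json.loads(doc_json)
    return doc["names"].get(name), doc.get("pubky", {}).get(name)

doc = make_wellknown("alice", "npub_hex_placeholder", "pkarr_z32_placeholder")
nostr_key, pkarr_key = resolve(doc, "alice")
```

The DNS-hosted document is only an alias layer; the actual binding comes from each key signing the other, which a client can verify independently of the server.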
Also the people at pubky are really sensitive and kinda sussy nostr:nevent1qqsgpetzdp4wtrx4trm6ew9pwphxq075mmwfyepsn2gg3fy9wf9fuzqpzemhxue69uhkummnw3ex2mrfw3jhxtn0wfnj7q3qarkn0xxxll4llgy9qxkrncn3vc4l69s0dz8ef3zadykcwe7ax3dqxpqqqqqqzynla67
Looks pretty much the same as https://zeronet.io ?
Nope, that is short names. This is pubkeys. Imagine every user had uncensorable DNS for their npub. For free. You don't need an auction because pubkeys are unique to the user, and the user can prove they own it. Imagine you could click on a user's npub in your client and it would take you to their own part of the internet that they control (opt-in), which could be anywhere, including Tor. A place where they can express themselves and exercise free speech, on top of nostr. And there's no way to censor it. It's much more like dnstr; ZeroNet is like Namecoin, which is a harder problem. https://dnstr.org/
zeronet has keys… you control your site that way
Haven't read it all. But how do you get a short name? What's the tie-breaker if two keys want one short name? I think pubky is very sensibly only tackling one thing at a time. And how is the DNS done? Is it censorship resistant?
It's a blockchain trying to enforce a single namespace... but okay, nostr can use the vast zeronet network and blockchain tech instead i guess 🙃
Unrelated, but are you two brothers?
Independent research confirms that John and Melvin are not related. They just happen to share a last name.
It uses BitTorrent, site updates are done using keys. I haven’t looked into it too much, but just sounded similar
Got it:

> ZeroNet will then use the BitTorrent network to find peers that are seeding the site and will download the site content (HTML, CSS, JS...) from these peers.

But that is the whole site, not the DNS record, so it probably won't work too well. Nice effort tho, thanks for the share!
Will, one thing you will love about this is: I said it's "free", but actually not quite, nothing is free. But I have analyzed the cost of submitting a DNS record, and it comes down to about 1 microsat. Imagine doing something pretty useful for 1 microsat, and how granular an economy you can make. Then we start to price everything in microsats and get a full ultra-micro economy going where the units do something useful.
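As a back-of-envelope check of the 1-microsat figure (this is the thread's own estimate, not a measured number): even a billion record submissions add up to only 1000 sats.

```python
SATS_PER_BTC = 100_000_000

def publish_cost_sats(records, cost_microsat=1.0):
    # 1 microsat = 1/1,000,000 of a satoshi, per the estimate above
    return records * cost_microsat / 1_000_000

total_sats = publish_cost_sats(1_000_000_000)  # 1000.0 sats
total_btc = total_sats / SATS_PER_BTC          # 1e-05 BTC
```

So a pricing unit this granular leaves the aggregate cost negligible even at global scale, which is what makes an "ultra-micro economy" plausible at all.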
Pkarr is not free, in the same way sending a few hundred UDP packets is not free. It definitely costs something! But joking aside, running Mainline nodes costs something too, mostly your time to open a UDP port or set up a VPS. We are just amazingly fortunate that BitTorrent provided enough incentive to do it, other than our idealistic goal of sovereign identities. All hail media piracy.
I qualified that in my next post. I think it could be about 1 microsat, i.e. 1/1,000,000 of a satoshi
The point isn't control. Having short names complicates things and makes it much harder to be permissionless and censorship resistant, because it requires some mechanism to decide who owns what, and that boils down to either a central authority, or a consortium, or the miners of a blockchain. So, no, not only is Pkarr not like ZeroNet, we don't even encourage vanity addresses; we want keys to be like phone numbers: you own it, people alias it, and everyone is sovereign and happy.
@Nuh 🔻 can you hypothesize a potential scalable and decentralized solution to short names or vanity addresses? ICANN is dumb and must go, but it's a step backwards to have to relay a 56-character string as opposed to a 4-10 character memorable one.
How many website domain names do you really know from memory? With the recent inflation of TLDs we can never be sure if a website is .com, .org, .io, .net, .ninja, .social, .pub, .app, .sh, .xyz and dozens of others.
Personally, I do know the addresses I visit more often and, anyway, recognition is easier than recall.
I used an impartial bot to analyze the note above for propagandistic content. See table below. While there’s a good point about the increase in TLDs, global names clearly have utility, as shown by the fact that people are willing to pay for them. A system that provides a level playing field and avoids large price increases (like those seen with ENS) would be ideal. One possible approach is to let users choose their short names, with a tie-breaking method if multiple users want the same name. Bitcoin's UTXO model could be well-suited for this purpose, though Namecoin faced challenges with implementation. https://image.nostr.build/559492aca9aa4e481912909f5996165595499136968630958ed2ab1ae1ea9583.jpg
Why does the judgment of "strength" made by a bot matter in any way whatsoever?
Appreciate the perspective! The strength ratings are meant as a guide to highlight key points in the analysis, but they're not absolute. The goal is simply to give structure to the insights, which anyone can interpret as they see fit.
Dozens, if not hundreds. I am sure an average person remembers, or can easily come up with, at the very least 5 domain names for products and services he often uses, just by adding .com to the company's name. Also, if .com is not the extension of the site you are looking for, you can quite easily substitute other TLDs that make sense in your context and, with high probability, get to the website you are looking for.
Not only can I _not_ think of a solution that is better than ICANN in any meaningful way, I would go all the way to claiming that it is absolutely impossible to make anything better than ICANN. The only exception is a Web of Trust / petname system. But that is not comparable to ICANN, since it is subjective, not globally unique names. And I agree with Fiatjaf: human-memorable names have inflated value. Watch how often you write addresses or twitter handles from memory, vs. typing a few letters and waiting for autocomplete from your bookmarks or browsing history or social graph etc.
There is no need for globally unique names, IMO. We have them just because of the market economics when bootstrapping the Web.
I would like to believe that, but there are some use cases where they are useful. An example would be advertisement over audio: I can't plug a product identified by 52 characters. And these situations will happen all the time; there won't always be a way to share a URL or QR code. Another reason to use ICANN is for organisations. You can't sell your company's public key, but the domain is an automatic property of the new owner. I think even if we only use keys, we will reinvent registrars for these use cases.
You can do vanity npubs for corps in a FROST design where they can swap keys as needed without changing the npub. Key discovery/completion goes through the user's WOT graph.
In my understanding, FROST can't help you here, because the new owner has no way to confirm the previous owner deleted their key shares. I could be wrong, would love to be surprised. Ed25519 too can do FROST so that will be good news.
They don't need to delete. You just rotate the polynomial to a position where the leaked key share is not part of the polynomial anymore.
OK but can't the old shares still sign things for the same public key? Maybe I am missing something.
Signers have to agree on a polynomial to sign. My understanding is that once the leaked key signs with the wrong polynomial, the other signers can just reject that share.
I need to read more. But my intuition says the old owner already had all the shares necessary to generate a full valid signature, so that is impossible to verifiably lose. The only scenario that makes sense to me is if the company from the start set up the key shares with a trusted 3rd party that assures the new owner that the previous owner doesn't have enough shares to sign on their own. Maybe that is what you meant all along.
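The rotation debate above can be sketched with plain Shamir secret sharing (leaving out FROST's signing-specific machinery). Resharing picks a fresh random polynomial with the same constant term, so old and new shares don't mix, but the underlying secret never changes -- which is exactly the objection: anyone who once held a threshold of old shares could already have reconstructed it. Parameters and the RNG seed are illustrative only.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, threshold, n, rng):
    # Random polynomial of degree threshold-1 with f(0) = secret.
    coeffs = [secret] + [rng.randrange(P) for _ in range(threshold - 1)]

    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(42)
secret = 123456789
old = make_shares(secret, threshold=2, n=3, rng=rng)
new = make_shares(secret, threshold=2, n=3, rng=rng)  # "rotated" polynomial
```

Any 2 old shares still recover `secret`, any 2 new shares do too, and a mix of one old and one new share yields garbage -- rotation invalidates the old quorum going forward without changing the key the public verifies against.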
Yeah, but even more fundamentally you also don't know if the private key that is at the basis of the multisig exists somewhere. Transfer of ownership requires a record one way or another, and so we are back to all the ledger shenanigans we are all too familiar with. I agree ICANN can't be beaten when it comes to this stuff; this means the problem has no 'solution', mere mitigation with trade-offs one way or another. Hence I am so bored and tired of thinking about this, and just grugbrain myself behind Nostr; because ultimately what we need is 'sort of good enough' + momentum = success. I believe Nostr is sort of good enough and has momentum. Congratulations on building Nostr's Ethereum
namespaced directories can be served via other, more trusted entities, but there is no need for a monopoly on it (WoT)
no need for global namespaces where we're going 😅
Could you elaborate?
it was just an off-the-cuff remark really, don't read too much into it -- but if the internet falls apart / balkanizes, there wouldn't be such coordination, and not really a need to be able to address everything, because you can't reach everything. Same with space travel and light-speed delays and relativistic communication
people practically don't use domain names anymore. it's time to let that legacy go nostr:nevent1qqsym7d0cjad8ywj834xna7gwfnppwhleykad5gnugu2a3p6nd5743spzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtczyprqcf0xst760qet2tglytfay2e3wmvh9asdehpjztkceyh0s5r9cqcyqqqqqqghjeh8f
That one dude sitting on https://huaktuah.com 😂
Just use bitcoin as the tie-breaker.
I checked this. ZeroNet, while interesting, does NOT use BEP0044; in fact, it does NOT use any DHT at all. It does use BitTorrent trackers, not sure for what purpose exactly, but at this point the exact use is irrelevant; trackers are only marginally better than Nostr relays, if at all. nostr:note1yf5450hssmuesxxzph9v5dhxwyjw7hdyvn5x0l3dv9tspzmele8s40nes0
I tried downloading, installing and using Zeronet... Nothing ever loaded, it just kept saying something about announcing or trackers or something like that...
> Pubky's idea seems to be that centralization of content distribution is unavoidable, so they aren't even trying. The idea of Nostr is that such a thing isn't unavoidable, so we are trying.

This is a fair assessment, though some nuances are worth highlighting.

Firstly, indexer centralization primarily becomes necessary in Pubky if your application requires a comprehensive, network-wide view of all homeservers -- this is in fact essential for pubky.app's social functionalities. Features like search, semantic social graph inference, and others inherently demand centralization due to the resource intensity of crawling the entire Pubky ecosystem, much like Google indexing the internet. I'm not up to date on Nostr developments, but I believe it might face similar challenges in this regard, although I may stand corrected.

Importantly, an indexer in Pubky doesn't necessarily need to handle content distribution; it only needs to guide users to content locations. The verification of content provenance still happens at the homeserver level. Indexers cloning data and serving it directly, however, can enhance user experience by improving responsiveness, and I anticipate the emergence of both lightweight and full-content indexers. We are building Pubky Nexus, a full-content indexer, but it can be stripped down to become lightweight as well.

We envision multiple competing indexers evolving, akin to the variety seen in web search engines today, despite Google's dominance. While fully decentralized content distribution may have limitations, I envision (and want to dedicate effort to making possible) niche users with sufficient resources and interest running their own indexers, though they would naturally only index a partial view of the network. For what it's worth, I would like to run one at home.
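The lightweight vs. full-content indexer split can be sketched like this; class and method names are illustrative, not Pubky Nexus's actual API.

```python
class LightweightIndexer:
    """Records only WHERE content lives; clients fetch from homeservers
    and verify provenance there, against the author's key."""

    def __init__(self):
        self.locations = {}  # content id -> homeserver URL

    def crawl(self, homeserver_url, entries):
        # entries: mapping of content id -> bytes found on that homeserver
        for content_id in entries:
            self.locations[content_id] = homeserver_url

    def locate(self, content_id):
        return self.locations.get(content_id)


class FullIndexer(LightweightIndexer):
    """Also clones the bytes, so it can serve content directly for speed."""

    def __init__(self):
        super().__init__()
        self.cache = {}  # content id -> cloned bytes

    def crawl(self, homeserver_url, entries):
        super().crawl(homeserver_url, entries)
        for content_id, data in entries.items():
            self.cache[content_id] = data

idx = FullIndexer()
idx.crawl("https://homeserver.example", {"post1": b"hello"})
```

A full indexer trades storage and crawl cost for responsiveness; a lightweight one stays cheap enough that a partial-view instance could plausibly run at home.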
If you have questions about how Pubky will deal with search and feeds and social features, send them this way: nostr:note1u06ftez743eps5mrrtrwqnvzga0la6c7c4enzzn57mck4lmfmpdsd3h2qf
nostr:nprofile1qqsyvrp9u6p0mfur9dfdru3d853tx9mdjuhkphxuxgfwmryja7zsvhqppemhxue69uhkummn9ekx7mp0qythwumn8ghj7anfw3hhytnwdaehgu339e3k7mf0qyghwumn8ghj7mn0wd68ytnhd9hx2tch2deau any chance of getting collapsible threads in comments? Reddit may have turned into a censorious hell hole, but their original comment thread ux was the pinnacle of web design. It's such a shame that almost no one else has copied it.
Long time coming. We just haven't been able to work on it yet.
Understandable. Thanks for all the work you have done so far
An impartial bot analyzed the contents of this note for propaganda content. https://image.nostr.build/58a520ef191a8d6ed948094c6fbd5066e4503931509e0f3017fcc237cce071f3.jpg