 Question 2: “is trusted” attestations should be private to author, private to author and “trusted” account, or public for all? (implementation and privacy concerns?)

Answer: In the long run, users should have all of the above options. But we do not necessarily have to roll out all of the privacy options at the beginning. To keep things simple, I think we should focus at first just on public attestations. When the time is right, we can add the option of making them private, which would involve encrypting the attestation. But it’s not immediately obvious whether to encrypt all of an attestation or only part of it. For example: maybe I want to encrypt the ratee pubkey, the score, and the confidence, but I want the context to be unencrypted and searchable, because I want to sell all my ratings in context X for a few sats, and I want the customer to know how many such attestations exist before paying for them. Or maybe I want the ratee pubkey unencrypted but the score encrypted, again because I want to make them purchasable. Figuring all this out could get complicated, so it’s best to nail down the specs for public use cases first and work out the specs for private encryption later.
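To make the partial-encryption idea concrete, here is a minimal sketch of splitting an attestation into searchable plaintext fields and a sealed blob. The field names and the `encrypt` stub are hypothetical (a real implementation would use a proper scheme such as NIP-44 encryption, not base64); this only illustrates the shape of the data.

```python
import base64
import json

def encrypt(plaintext: str, key: bytes) -> str:
    # PLACEHOLDER: stands in for real encryption (e.g. a NIP-44-style
    # scheme). base64 is used here only so the sketch is runnable.
    return base64.b64encode(plaintext.encode()).decode()

def split_attestation(attestation: dict, private_fields: set, key: bytes) -> dict:
    """Keep some fields searchable in plaintext; seal the rest into one blob."""
    public = {k: v for k, v in attestation.items() if k not in private_fields}
    secret = {k: v for k, v in attestation.items() if k in private_fields}
    public["encrypted"] = encrypt(json.dumps(secret), key)
    return public

# Hypothetical attestation for the "sell ratings by context" case:
# context stays searchable, ratee/score/confidence are sealed.
attestation = {"ratee": "npub1...", "score": 100, "confidence": 80,
               "context": "widgets"}
event = split_attestation(attestation, {"ratee", "score", "confidence"}, b"key")
```

A buyer can then count how many events carry `"context": "widgets"` before paying to decrypt any of them.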
 Question 3: “is trusted” value should be a binary or scalar? (or presented as binary but manipulated as scalar?)

Answer: I prefer scalar under the hood. But too many options can overwhelm the user, so in my proof of concept I offer a binary choice, where “trust” is recorded as a score of 100 and “don’t trust” is recorded as a score of 0. When the time is right, the user can be presented either with multiple options (trust none: 0, trust some: 50, or trust completely: 100) or with a slider. This is an example of what I mean when I say that DESIGN and an understanding of PRODUCT are so vitally important for making web of trust take off and fly. Gotta know when to keep options hidden and when / how to unveil them.
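The "binary UI, scalar storage" idea above could be sketched like this. The function names and the three-level mapping are my own illustration, not the proof of concept's actual code:

```python
def score_from_binary(trusted: bool) -> int:
    # Binary UI: "trust" stores 100, "don't trust" stores 0,
    # but the stored value is an ordinary 0-100 scalar either way.
    return 100 if trusted else 0

# A later, richer UI can map more choices onto the same scalar
# without any change to the storage format (hypothetical labels):
UI_LEVELS = {"trust none": 0, "trust some": 50, "trust completely": 100}
```

Because clients only ever read a 0–100 scalar, a binary client and a slider client can interpret each other's attestations without a migration.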
 Question 5: What (in broad terms) would a content filter spec look like, where anybody (client, relay, DVM, or individual) could “publish” a filter for end users to “subscribe” to? Such filters could take ANY event data on nostr (including “is trusted”) to produce “recommendations” for content, accounts, and other stuff. IMHO, this is where rubber meets road for WoT on Nostr, and is often overlooked in light of implementing “is trusted” trust attestations. 

Answer: The Grapevine is my answer to this question. It is a method to filter content based on explicit, context-based trust attestations. 

In a nutshell: the core function of the Grapevine is to calculate WEIGHTED AVERAGES over things like product ratings (just to use one example), where the weights are proportional to the Influence Score of the author of the rating, in whatever context you consider most relevant. Example: if I am at a nostr e-commerce site and I want to rank all widgets from top to bottom by average rating, then the Grapevine calculates the average score for each widget the same way Amazon does, EXCEPT that not all ratings are treated equally. If my Grapevine tells me Alice is an expert in widgets, her ratings weigh more heavily than a rating by Bob, who, according to my Grapevine, is not to be trusted when it comes to widgets. How much weight to give to someone about whom my Grapevine knows nothing? Set a default Influence Score for that.

Weighted averages instead of just averages. Simple but effective.

I have a proof of concept to demonstrate how it works: an app called Curated Lists, which is part of my desktop nostr client, Pretty Good Apps. It is open source, and anyone is welcome to download it and give it a whirl (below). But my app is clunky, crashes, and doesn’t have the best UX, which is why @kinjo is working to refactor it as a web app. In the meantime, I have screenshots to show how it works (link below).

https://github.com/wds4/pretty-good

https://github.com/wds4/pretty-good/blob/main/appDescriptions/curatedLists/overview.md