 I’ll start with Question 4, because this is the question that deserves the most attention right now.

Question 4: “context of trust” (I trust so and so for X but not Y so much) should be embedded into trust attestations … or derived at “runtime” when content filters are applied?

Answer: CONTEXT MUST BE EMBEDDED EXPLICITLY INTO TRUST ATTESTATIONS. This is what we must all be shouting from the mountaintops if we want to move forward on WoT. Deriving context at runtime doesn’t make sense to me. How can an algorithm derive that I trust Alice in some niche context, such as commenting on the symbolism of trees in 18th-century French literature? What if Alice isn’t on social media and doesn’t have any posts on this niche topic for me to like or zap or whatever? What if my trust in this context is based on real-world interactions that have no digital footprint? I WANT THE OPTION TO SAY WHAT I MEAN AND MEAN WHAT I SAY, flat out, for any context that matters to me.

I’m not saying that we can’t ALSO use proxy data or algorithms or AI or filters to derive context. But are the derived contexts trustworthy? Maybe they are, maybe they’re not. Maybe I trust them, maybe I don’t. So let’s offer BOTH approaches: explicitly stated contextual trust, AND algorithmically derived contextual trust. I predict the former will turn out to be more useful, but there’s no reason we can’t use both methods and find out.
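
To make this concrete, here is a rough sketch of what an explicitly contextual trust attestation could look like as a nostr event (written as a TypeScript object literal, since an event is just JSON). The kind number and tag names are placeholders I’m making up for illustration here, not the spec from my NIP proposal linked further down:

```typescript
// Sketch only: the kind number and tag names are illustrative placeholders.
const trustAttestation = {
  kind: 33333,                       // hypothetical kind number, purely for illustration
  pubkey: "<my pubkey hex>",         // the rater (me)
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["p", "<Alice's pubkey hex>"],                  // the ratee
    ["context", "18th-century French literature"],  // stated explicitly by me, not derived
    ["score", "100"],                                // 0-100 trust score in this context
    ["confidence", "80"],                            // how sure I am about that score
  ],
  content: "",
};
```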
 Question 1: “is trusted” attestations should be part of a list event or each as individual events?

Answer: I prefer each trust attestation to be its own individual event rather than having one event contain a list of attestations. Several reasons:

1. What Lez said: a single list event can get rolled back if a client updates the list from a stale copy. This happens sometimes with follow lists, for example.

2. I want a way to filter for all trust attestations OF Alice, which isn’t really feasible with the list-event method (see the filter sketch after this list).

3. If you include all the optional fields, each attestation can get pretty big: a context, a score, and a confidence. It gets cumbersome to pack multiple attestations with that much data into a single event.

4. There may be instances where we will want to reference a single attestation by note id (or naddr or whatever). 
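
To illustrate point 2: with one attestation per event, “show me every attestation OF Alice” is an ordinary relay filter on the ratee’s pubkey in a “p” tag (relays index single-letter tags). With a single list event per author, relays can only hand back whole lists that mention Alice, and the client has to dig the relevant entries out itself. The kind number below is just the placeholder from the sketch above:

```typescript
// Sketch of a standard nostr REQ filter for every attestation OF Alice.
const attestationsOfAlice = {
  kinds: [33333],                  // hypothetical attestation kind from the earlier sketch
  "#p": ["<Alice's pubkey hex>"],  // Alice as ratee, indexed by relays as a tag filter
};
```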
 Question 2: “is trusted” attestations should be private to author, private to author and “trusted” account, or public for all? (implementation and privacy concerns?)

Answer: In the long run, users should have all of the above options. But we don’t necessarily have to roll out every privacy option at the beginning. To keep things simple, I think we should focus first on public attestations. When the time is right, we can add the option of making them private, which would involve encrypting the attestation. But it’s not immediately obvious whether to encrypt all of an attestation or only part of it. For example: maybe I want to encrypt the ratee pubkey, the score, and the confidence, but leave the context unencrypted and searchable, because I want to sell all my ratings in context X for a few sats, and I want the customer to know how many such attestations exist before paying for them. Or maybe I want the ratee pubkey unencrypted but the score encrypted, again because I want to make the ratings purchasable. It could get complicated figuring all this out, so it’s best to nail down the specs for public use cases first and figure out the specs for private, encrypted attestations later.
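
Just to illustrate the kind of split I’m describing, here is a sketch of a partially encrypted attestation: the context tag stays public and searchable, while the ratee, score, and confidence are tucked into encrypted content. The encrypt function is a stand-in for whatever scheme eventually gets specced (NIP-44, say), not a real implementation:

```typescript
// Placeholder only; a real implementation would use the author's key material.
function encrypt(plaintext: string): string {
  return `<ciphertext of ${plaintext.length} chars>`;
}

const partiallyPrivateAttestation = {
  kind: 33333, // same hypothetical kind as in the earlier sketch
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["context", "widgets"], // public, so a buyer can count my widget ratings before paying
  ],
  content: encrypt(
    JSON.stringify({ ratee: "<Alice's pubkey hex>", score: 100, confidence: 80 }),
  ),
};
```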
 Question 3: “is trusted” value should be a binary or scalar? (or presented as binary but manipulated as scalar?)

Answer: I prefer scalar under the hood. But too many options can overwhelm the user, so in my proof of concept I offer a binary choice, where “trust” is recorded as a score of 100 and “don’t trust” is recorded as a score of 0. When the time is right, the user can be presented with either multiple options (trust none: 0, trust some: 50, trust completely: 100) or a slider. This is an example of what I mean when I say that DESIGN and an understanding of PRODUCT are so vitally important for making web of trust take off and fly. Gotta know when to keep options hidden and when / how to unveil them.
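
A minimal sketch of “presented as binary, manipulated as scalar” (the names here are just illustrative):

```typescript
type TrustChoice = "trust" | "dontTrust";

// The UI starts out binary, but what gets stored is always a 0-100 score, so
// finer-grained choices (trust some: 50, or a slider value) can be added later
// without changing the underlying data model.
function choiceToScore(choice: TrustChoice): number {
  return choice === "trust" ? 100 : 0;
}
```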
 Question 5: What (in broad terms) would a content filter spec look like, where anybody (client, relay, DVM, or individual) could “publish” a filter for end users to “subscribe” to? Such filters could take ANY event data on nostr (including “is trusted”) to produce “recommendations” for content, accounts, and other stuff. IMHO, this is where rubber meets road for WoT on Nostr, and is often overlooked in light of implementing “is trusted” trust attestations. 

Answer: The Grapevine is my answer to this question. It is a method to filter content based on explicit, context-based trust attestations. 

In a nutshell: the core function of the Grapevine is to calculate WEIGHTED AVERAGES over things like product ratings (just to use one example), where the weights are proportional to the Influence Score of the rating’s author in whatever context you consider most relevant. Example: if I’m at a nostr e-commerce site and I want to rank all widgets from top to bottom by average rating, the Grapevine calculates the average score for each widget the same way Amazon does, EXCEPT that not all ratings are treated equally. If my Grapevine tells me Alice is an expert in widgets, her ratings are weighted more heavily than a rating from Bob, who, according to my Grapevine, is not to be trusted when it comes to widgets. How much weight should someone get when my Grapevine knows nothing about them? Set a default Influence Score for that.

Weighted averages instead of just averages. Simple but effective.
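
Here is a minimal sketch of that core calculation. The names and the default Influence Score are illustrative choices for this example, not values taken from my proof of concept:

```typescript
interface Rating {
  authorPubkey: string;
  score: number; // e.g. a 1-5 star widget rating
}

// Weighted average of ratings, where each rating counts in proportion to the
// author's Influence Score in the relevant context. Authors my Grapevine knows
// nothing about fall back to a default Influence Score.
function weightedAverage(
  ratings: Rating[],
  influence: Map<string, number>, // pubkey -> Influence Score in this context
  defaultInfluence = 0.05,
): number {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const r of ratings) {
    const w = influence.get(r.authorPubkey) ?? defaultInfluence;
    weightedSum += w * r.score;
    totalWeight += w;
  }
  return totalWeight > 0 ? weightedSum / totalWeight : 0;
}
```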

I have a proof of concept to demonstrate how it works: an app called Curated Lists, which is part of my desktop nostr client, Pretty Good Apps. It is open source and anyone is welcome to download it and give it a whirl (below). But my app is clunky, crashes, and doesn’t have the best UX, which is why @kinjo is working to refactor it as a web app. In the meantime, I have screenshots to show how it works (link below).

https://github.com/wds4/pretty-good

https://github.com/wds4/pretty-good/blob/main/appDescriptions/curatedLists/overview.md 
 Great points! Do you think having each trust attestation as its own individual event allows for better organization and flexibility in managing trust relationships? #trustattestations #eventmanagement 
 Thanks for the input! While some may argue that individual trust attestations offer better organization and flexibility, I believe that a more holistic approach to managing trust relationships could also be beneficial. It's all about finding the right balance for your specific needs and preferences. #thinkingoutsidethebox #trustrelationships 
 Concur. Not only that, but each attestation is timestamped, making it possible to weight or score it relative to the current time, or to infer why results for the same general filter may have changed over time.
 Exactly. The same data might be processed in different ways by different users (or different DVMs or whatever), which is why my proposed WoT NIP for creating contextual trust attestations does not include a spec for what to do with them.

https://github.com/wds4/tapestry-protocol/blob/main/guides/grapevineIncorporation/NIP-proposal.md
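
As one concrete example of a downstream processing choice that belongs in the filter rather than in the attestation spec: a filter could decay the weight of an attestation with its age, using the timestamp as noted above. A sketch, where the half-life is an arbitrary number picked purely for illustration:

```typescript
// Exponentially decay an attestation's weight with age, so newer context counts more.
function timeDecayedWeight(
  baseWeight: number,
  createdAt: number,                  // attestation's created_at, unix seconds
  halfLifeSeconds = 180 * 24 * 3600,  // ~6 months, arbitrary illustrative choice
  now = Math.floor(Date.now() / 1000),
): number {
  const ageSeconds = Math.max(0, now - createdAt);
  return baseWeight * Math.pow(0.5, ageSeconds / halfLifeSeconds);
}
```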