 FORGIVE MY RANT … but I see TrustNet as little more than a glorified popularity contest. A less qualitative and more quantitative (measurable and meaningful) ranking system may actually be more valuable. 

Here are some random thoughts:

On qualifying relationships: 
- people don’t want to rank the “quality” of their relationships. IT’S HARD!!
- people especially don’t want to come back AGAIN to update these scores (or even add scores for new friends). It will NEVER happen on a regular basis!! (Srsly?)
- Qualitative scores reflecting the “depth” of one’s relationship (acquaintance, friend, peer, partner) by definition need to be updated as one’s “perception” of the relationship changes. 
- If keeping these qualitative scores updated is required for the success of this trust ranking system, IT WILL FAIL to hold value for its intended purpose. 

On trust as a numeric scale:
- The discrete 100-value scale underpinning this “qualitative” score is not only ridiculous (nobody knows or cares about a 100-value scale) and meaningless (the increments as applied will still be arbitrary), it could also be detrimental to a trust ranking system. 
- Trustworthiness is not a scalar value. Humans don’t have “more” or “less” trust for each other. We either “do” or “do not” trust each other in specific cases. Because of this, ranking on a scale is prone to misinterpretation. 
- What translates well to a scale is popularity. “If my friends trust X (or if X has a bigger voice and reach) then I will give X a higher trust score.” The problem with this is that the value no longer represents individual trustworthiness. 

On quantifiable measures of trust:
- Quantitative measures can be used to determine trust. They don’t ALL have to be algorithmically derived. A mix of hand-reported and computer-generated data may work best. 
- Digital identities may have “layers” of trust (distinct from physical interactions) that may be applied “each on their own” (in no particular order) to determine trustworthiness for specific interactions.
- One layer of digital trust may be verification of personhood. For some transactions, a real person is required.
- Another layer of digital trust may be verifying asset ownership. Is this the same entity that “owns” X, Y, or Z known digital assets?
- Another layer of digital trust may be verifying originality. Does this account pretend to be somebody else, and if so, is it obviously a spoof? (This may be accomplished best by actual humans.)
- Other layers of trust may exist, may be discovered, and may be applicable for web of trust implementations. For this reason, any NIP developed should be open to expansion. 
- Web of trust COULD be determined by discrete “flags” being applied (by humans and by algorithms) to a profile. Each verifies a specific known and measurable quantity. Together they “add up to” an overall “verified” or “trusted” visible mark (one or three different marks?) applied to profiles. TBD. (See the sketch after this list.)
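
To make the “discrete flags” idea concrete, here is a minimal TypeScript sketch. Every name in it (TrustFlag, FlagAttestation, isTrustedFor, the example flags) is hypothetical and invented for illustration; nothing here comes from TrustNet or any existing NIP.

```typescript
// Illustrative only: TrustFlag, FlagAttestation, and isTrustedFor are
// invented names, not part of TrustNet or any existing NIP.

// Each flag verifies one specific, known, and measurable quantity.
type TrustFlag =
  | "personhood-verified" // a real human is behind this key
  | "asset-ownership"     // controls known digital assets X, Y, or Z
  | "originality"         // not an impersonation of another profile
  | (string & {});        // open to expansion, as the NIP should be

interface FlagAttestation {
  flag: TrustFlag;
  source: "human" | "algorithm"; // mix of hand-reported and computed data
  verifiedAt: number;            // unix timestamp of the check
}

// Flags "add up to" a visible mark: trusted for a given use case means
// holding every flag that use case requires. No numeric scale involved.
function isTrustedFor(
  required: TrustFlag[],
  attestations: FlagAttestation[],
): boolean {
  const held = new Set(attestations.map((a) => a.flag));
  return required.every((flag) => held.has(flag));
}

// Example: a sale might require a real person who owns the asset.
const ok = isTrustedFor(
  ["personhood-verified", "asset-ownership"],
  [
    { flag: "personhood-verified", source: "human", verifiedAt: 1700000000 },
    { flag: "asset-ownership", source: "algorithm", verifiedAt: 1700000100 },
  ],
);
console.log(ok); // true
```

The point of the shape: each flag stands on its own, and “trusted” is just the conjunction of whichever flags a specific use case requires.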

We really should be discussing this in earnest (openly, but in a dedicated format). Decentralized WOT implementation will NOT ONLY be a prime differentiator for Nostr from other socials, but will ALSO be essential for Nostr’s success as a social network that is NOT overrun by bots and bad actors. 

Thanks. #rantover 
 Uh @ManiMe, TrustNet is a decentralized subjective WoT system. The numbers only make sense from the perspective of one user towards the network of their contacts.  
 Thanks. With respect, I do understand. As it should be, trust is relative. Rankings in “my” web of trust will be different than rankings in yours. We can talk about how this should be implemented (I’d be honored to be included) but this doesn’t change my base arguments:

1: “quality of relationship” is HARD and (at best) will not be updated by people. Certainly not en masse.

2: “trust as a numeric scale” will likely NOT reflect an individual’s “trustworthiness”, and may in fact be misleading if presented as such. 

3: quantifiable (even if some are relative to each user) and non-linear (discrete variables that stand on their own) measures of trust can be used to achieve our goal. They might be numerous (and some undefined as yet), but they can be “easily understood” and, because of this, can be “trusted” by everybody to mean what they promise to mean.

Forgive my random thoughts. Would love to converse more formally on this topic. How Nostr implements WOT may in fact be its downfall or its saving grace. Thank you. 
 @rabble @brugeman 
Here is a simple idea that could incorporate TrustNet and other WOT filter implementations. 

As per my rant above, I believe a “parent NIP” that defines a consistent UX and API (of sorts) for WOT filters (in clients) would be the best choice moving fwd. Here’s a “top view” of how that could play out:

nostr:note1za8gapacw9l2r6eqxljpx478r8vgfsd3uxe3qkh7k424sc9nekssyevl7p 
 Great rant, thank you. I agree with most of the problems you're outlining.

First, on layers - I never suggested that we only need one single set of trust assignments. If we need several layers or facets - we could have more.

Second, on updating - manual or automated, any trust signal that is published will have to be updated periodically. UX might make it super smooth, but still. 

Third, on discrete flags - instead of a 0-100 scale you're suggesting a 0-1 scale; what does that change in principle? If humans are limited - let them only publish 0 and 100, and leave the full range to machines. 

Is there a preview of the NIP you're working on?  
 Thank you. While discussing the social onboarding client we are working on, @I)ruid suggested identity verification as a high priority. He drafted a document describing a signing procedure wherein individuals could cryptographically sign off on the validity of each other’s profile fields, for a PGP-style distributed ranking system. We’re working on this. 
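
For illustration only, here is one hypothetical shape such a signed attestation could take as a Nostr event. The kind number and tag layout are assumptions of mine; the actual drafted document is not public.

```typescript
// Hypothetical event shape: the kind number (30077) and tag layout are
// invented for illustration and do not come from any published NIP.
const attestation = {
  kind: 30077,                       // made-up "profile field attestation" kind
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["p", "<attested-user-pubkey>"], // whose profile is being vouched for
    ["field", "name"],               // which kind-0 profile field
    ["value", "Alice"],              // the field value being attested
  ],
  content: "I have verified this profile field in person.",
  // pubkey, id, and sig are filled in when the attesting user signs
};
```

Each such event would be one PGP-style signature on one profile field; counting independent attestations per field could give the distributed ranking.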

As I see it, there could be multiple NIPs drafted for Nostr’s WOT implementation, all working together under an API defined by a “WOT implementor NIP”.

Given that there MUST be multiple “layers” (some known, some yet unknown) to trust and identity verification AND that all clients SHOULD be consistent in their “end user” presentation of trust rankings (a check or something), it makes sense to me that there should be a dedicated “WOT implementor NIP” just describing how this presentation should unfold and an API of sorts for various WOT ranking tools to “plug in” to. 

Each user could then choose the trust ranking tools that make sense to them. 

To your point, the ranking system COULD implement a 1-100 scale in the back end, with a “min threshold” setting for end users. (Though some WOT ranking tools may only output a simple Boolean.) IMHO, the trust ranking presentation itself (in the UX) should be Boolean (trusted or not, for specific apps/use cases) and could be determined by the user’s “min threshold” setting. 
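
As a rough sketch of that plug-in surface, in TypeScript (all interface and function names here are my own assumptions, not a draft NIP):

```typescript
// Illustrative only: WotRankingTool and isTrusted are invented names,
// not part of any existing or drafted NIP.

// Each WOT ranking tool plugs in behind one common interface. Some
// output a 0-100 score; others only a simple Boolean.
interface WotRankingTool {
  name: string;
  // Rank a profile from the perspective of the local user.
  rank(subjectPubkey: string): Promise<number | boolean>;
}

// The client reduces whatever the chosen tools return to one Boolean
// for the UX, using the end user's "min threshold" setting.
async function isTrusted(
  tools: WotRankingTool[],
  subjectPubkey: string,
  minThreshold: number, // e.g. 50 on the 1-100 back-end scale
): Promise<boolean> {
  for (const tool of tools) {
    const result = await tool.rank(subjectPubkey);
    const passed =
      typeof result === "boolean" ? result : result >= minThreshold;
    if (!passed) return false; // not trusted for this use case
  }
  return true; // rendered as a single "trusted" mark in the client
}
```

A tool that only produces a Boolean just returns it directly; score-producing tools are reduced to a Boolean by the user’s threshold, so the UX mark stays binary either way.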

My random thoughts. But bottom line… WOT needs to at least be extensible and have a consistent UX defined by NIP.