FORGIVE MY RANT … but I see TrustNet as little more than a glorified popularity contest. A less qualitative ranking system (one that is measurable and therefore meaningful) may actually be more valuable.

Here are some random thoughts:

On qualifying relationships: 
- people don’t want to rank the “quality” of their relationships. IT’S HARD!!
- people especially don’t want to come back AGAIN to update these scores (or even add scores for new friends). It will NEVER happen on a regular basis!! (Srsly?)
- Qualitative scores reflecting the “depth” of one’s relationship (acquaintance, friend, peer, partner) by definition need to be updated as one’s “perception” of the relationship changes.
- If keeping these qualitative scores updated is required for the success of this trust ranking system, IT WILL FAIL to hold value for its intended purpose. 

On trust as a numeric scale:
- The discrete 100-value scale underpinning this “qualitative” score is not only ridiculous (nobody knows or cares about a 100-value scale) and meaningless (the increments as applied will still be arbitrary), it could also be detrimental to a trust ranking system.
- Trustworthiness is not a scalar value. Humans don’t have “more” or “less” trust for each other. We either “do” or “do not” trust each other in specific cases (see the sketch after this list). Because of this, ranking on a scale is prone to misinterpretation.
- What translates well to a scale is popularity. “If my friends trust X (or if X has a bigger voice and reach), then I will give X a higher trust score.” The problem with this is that the value no longer represents individual trustworthiness.
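
For illustration only, here is a minimal TypeScript sketch of the difference between a scalar score and binary, per-context trust. Every type, field, and context name below is a hypothetical assumption of mine, not something from TrustNet or any NIP:

```typescript
// Hypothetical sketch: trust as a single scalar score vs. trust as binary,
// per-context assertions. All names here are illustrative only.

// The scalar model: one number per profile, easy to conflate with popularity.
type ScalarTrust = {
  pubkey: string;
  score: number; // 0..100, but the increments are arbitrary
};

// The binary, per-context model: "I do (or do not) trust this key for X."
type TrustContext = "payments" | "moderation" | "original-content" | "real-person";

type TrustAssertion = {
  pubkey: string;        // who is being trusted
  context: TrustContext; // trusted for what, specifically
  trusted: boolean;      // we either do or do not
  assertedAt: number;    // unix timestamp, so stale assertions can be aged out
};
```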

On quantifiable measures of trust:
- Quantitative measures can be used to determine trust. They don’t ALL have to be algorithmically derived. A mix of hand-reported and computer-generated data may work best.
- Digital identities may have “layers” of trust (distinct from physical interactions) that may be applied “each on their own” (in no particular order) to determine trustworthiness for specific interactions.
- One layer of digital trust may be verification of personhood. For some transactions, a real person is required.
- Another layer of digital trust may be verifying asset ownership. Is this the same entity that “owns” X, Y, or Z known digital assets?
- Another layer of digital trust may be verifying originality. Does this account pretend to be somebody else, and if so, is it obviously a spoof? (This may be accomplished best by actual humans.)
- Other layers of trust may exist, may be discovered, and may be applicable for web of trust implementations. For this reason, any NIP developed should be open to expansion. 
- Web of trust COULD be determined by discrete “flags” being applied (by humans and by algorithms) to a profile. Each verifies a specific known and measurable quantity. Together they “add up to” an overall “verified” or “trusted” visible mark (one or three different marks?) applied to profiles. TBD (a rough sketch of the idea follows below).
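
For the sake of concreteness, here is a minimal TypeScript sketch of what those flags could look like. Every name, flag, and aggregation rule below is a hypothetical assumption, not an existing spec or NIP:

```typescript
// Hypothetical sketch of the "discrete flags" idea: each flag verifies one
// known, measurable quantity, and a profile's visible mark is derived from
// whichever flags are present. Flag names and the aggregation rule are
// illustrative only.

type TrustFlag =
  | "personhood-verified"  // a real person stands behind this key
  | "asset-ownership"      // controls known digital assets
  | "originality-checked"; // not impersonating another account

type FlagAttestation = {
  subject: string;                // pubkey being flagged
  flag: TrustFlag;                // the specific quantity verified
  attestedBy: string;             // pubkey of whoever applied the flag
  method: "human" | "algorithm";  // flags can come from either
  createdAt: number;              // unix timestamp
};

type VisibleMark = "none" | "verified" | "trusted";

// One possible (arbitrary) aggregation: flags "add up to" a visible mark.
function markFor(flags: Set<TrustFlag>): VisibleMark {
  if (flags.size === 0) return "none";
  return flags.size >= 3 ? "trusted" : "verified";
}
```

Whether the mark is computed client-side from attestations or published as its own event is exactly the kind of open question a dedicated discussion would need to settle.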

We really should be discussing this in earnest (openly, but in a dedicated format). Decentralized WOT implementation will NOT ONLY be a prime differentiator between Nostr and other socials, but will ALSO be essential for Nostr’s success as a social network that is NOT overrun by bots and bad actors.

Thanks. #rantover