If we want users to specify a trust score for every person they interact with, how do we avoid Google Circles' fate? The UX needs to be super simple, intuitive, and fast. What sort of labels might we categorize users under?
Who gets the best WoT score and why? Same question for the worst.
We could optimize for happiness and how someone makes you feel - but then we’re just making bubbles 🫧
I only have 2 feelings, hungry and angry 😉
Just spitballing, but interaction mutuality can be a proxy for trust, where the more balanced two users' comments, reacts, and zaps are between them, the more you can assume they trust each other. Other relationships are often more one sided, where the trust is one directional. There could also be something like negative trust that is found by a lack of interaction between two people who have many mutual connections.
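Just to make that concrete, here's a rough sketch of the mutuality idea (the event types, weights-free counting, and 0-to-1 normalization are my own assumptions, not anything Nostr defines):

```python
from collections import Counter

def mutuality_score(a_to_b: Counter, b_to_a: Counter) -> float:
    """Balance of interactions between two users, in [0, 1].

    a_to_b / b_to_a count interactions (comments, reacts, zaps)
    each user directed at the other. 1.0 = perfectly balanced,
    0.0 = entirely one-sided (or no interaction at all).
    """
    total_ab = sum(a_to_b.values())
    total_ba = sum(b_to_a.values())
    if total_ab + total_ba == 0:
        return 0.0
    return min(total_ab, total_ba) / max(total_ab, total_ba)

# A balanced pair vs. a one-sided fan/celebrity pair:
alice_to_bob = Counter({"comment": 5, "react": 10, "zap": 2})
bob_to_alice = Counter({"comment": 4, "react": 9, "zap": 3})
fan_to_celeb = Counter({"react": 50})
celeb_to_fan = Counter()

print(mutuality_score(alice_to_bob, bob_to_alice))  # high, ~0.94
print(mutuality_score(fan_to_celeb, celeb_to_fan))  # 0.0
```

The "negative trust" case (no interaction despite many mutual connections) would need the graph context too, which this pairwise score doesn't see.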
Isn’t that what Twitter does with the “show me less” option?
I'm not sure, but it would be cool.
Applying a single trust score to all content from a person is like trying to apply a single interest rate for every loan in a country.
So you’re saying it should be per piece of content? That’s a lot of upkeep.
I’m already worried that specifying your own trust score might be destined to fail
Every user exists on a spectrum of engagement & activity. So any WoT will need user input, and I think profile-level labels might be too broad, but really, most users won't engage at all. Unless... "trust engagement" is made a first-class citizen of consuming a post. Posts are immutable, but people change all the time, so trust should be tagged to posts to be most effective. Naively, a swipe left/right feed could maybe generate some good insights, as users are forced to make a binary choice at every post 🤷‍♂️
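A minimal sketch of what that swipe feed could record — trust tagged to immutable posts rather than to a profile (names and structure here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class SwipeFeed:
    """Per-post trust tags from binary swipes (hypothetical sketch).

    Trust attaches to immutable post IDs, not profiles; a coarser
    profile-level view can always be derived later, but the
    per-post signal is what gets stored.
    """
    tags: dict = field(default_factory=dict)  # post_id -> +1 / -1

    def swipe(self, post_id: str, right: bool) -> None:
        """Right swipe = trust (+1), left swipe = distrust (-1)."""
        self.tags[post_id] = 1 if right else -1

    def post_score(self, post_id: str) -> int:
        """0 means the user never made a choice on this post."""
        return self.tags.get(post_id, 0)

feed = SwipeFeed()
feed.swipe("note1abc", right=True)
feed.swipe("note1def", right=False)
print(feed.post_score("note1abc"))  # 1
print(feed.post_score("note1def"))  # -1
```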
I think what you’re leaning towards is dynamic trust scores based on content, not users. But this ultimately just boils down to a user score: if people like your content and don’t forget to label it, it would add to your credibility or subtract from it.
A content score doesn't "boil down" by itself to a user score. Users or clients who want a simpler WoT can boil it down and lose some precision in their modeling, but settling for less granularity across the board just throws away signal for no reason.
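To show the precision loss concretely, a toy example with made-up per-post scores: someone trusted on one topic and distrusted on another averages out to a single number that describes neither.

```python
# Hypothetical per-post trust scores for one author, in [-1, 1].
post_scores = {
    "note_btc_1": 0.9,    # highly trusted on bitcoin posts
    "note_btc_2": 0.8,
    "note_diet_1": -0.7,  # not trusted on nutrition takes
    "note_diet_2": -0.8,
}

# "Boiling down" to a single user score by averaging:
user_score = sum(post_scores.values()) / len(post_scores)
print(round(user_score, 3))  # 0.05 -- says nothing useful about either topic
```

A client that keeps the per-post scores can still compute this average on demand; one that only stores the average can never get the per-topic detail back.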
Maybe it's a spectrum from low to high [trust?] based on how many common people you and I interact with and how frequently we interact with them. Also based on whether we have frequent and direct interactions with each other (DMs, comments). On the user side, we wouldn't have to do anything. It could just be a calculated data point represented visually for users to see on each other's profiles.
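A sketch of that calculated data point, assuming Jaccard overlap of contact sets plus a capped direct-interaction count (the 0.6/0.4 weights and the cap are arbitrary assumptions):

```python
def calculated_trust(my_contacts: set, their_contacts: set,
                     direct_interactions: int,
                     direct_cap: int = 50) -> float:
    """Blend of graph overlap and direct-interaction frequency, in [0, 1].

    - overlap: Jaccard similarity of the two contact sets
      (how many common people we both interact with)
    - direct: DMs/comments between us, capped so heavy chatters
      don't saturate the score
    The 0.6/0.4 weights are arbitrary assumptions.
    """
    union = my_contacts | their_contacts
    overlap = len(my_contacts & their_contacts) / len(union) if union else 0.0
    direct = min(direct_interactions, direct_cap) / direct_cap
    return 0.6 * overlap + 0.4 * direct

score = calculated_trust({"a", "b", "c", "d"}, {"b", "c", "d", "e"},
                         direct_interactions=25)
print(round(score, 2))  # 3 common of 5 total -> 0.6 overlap; result 0.56
```

The nice part is exactly what's said above: the user does nothing, and the number just shows up on a profile.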
This might be the only realistic way to do it. Twitter had a “show less” option, and maybe that was their WoT.
That might have fed into their content algorithm as an input for showing us a more relevant feed (e.g. show fewer people talking about diet and politics). I thought this would be more of an indication for users to quickly see whether a person is a bot or an untrustworthy scammer. Have you heard of hive.one, back when it was up and running? It ranked Twitter profiles within a group/community we belonged to based on common relationships. Might be interesting to see which data points they were pulling.
Couldn’t stats be leveraged to signify interaction patterns, frequency, and types? Like, go to a stats page and it says: this user chats with 10 users in common with you, made x pull requests, donated to n projects, etc.
1. People I have met in person, and trust
2. People I have met in person, but don't trust
3. People I haven't met

I don't think I need more than that.
I'm working on a highlighter post that recontextualizes this idea for Nostr, but here is the outline for a system I was working on for Urbit: https://gist.github.com/vcavallo/e008ed60968e9b5c08a9650c712f63bd