I’ll start with Question 4, because this is the question that deserves the most attention right now.
Question 4: “context of trust” (I trust so-and-so for X but not so much for Y) — should it be embedded into trust attestations, or derived at “runtime” when content filters are applied?
Answer: CONTEXT MUST BE EMBEDDED EXPLICITLY INTO TRUST ATTESTATIONS. This is what we must all be shouting from the mountaintops if we want to move forward on WoT. Deriving context at runtime doesn’t make sense to me. How can an algorithm derive that I trust Alice in some niche context, like commenting on the symbolism of trees in 18th-century French literature? What if Alice isn’t on social media, or has no posts on this niche topic for me to like or zap or whatever? What if my trust in this context is based on real-world interactions that have no digital footprint? I WANT THE OPTION TO SAY WHAT I MEAN AND MEAN WHAT I SAY, flat out, for any context that matters to me.
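To make that concrete, here’s a rough sketch of what I mean by an explicit contextual trust attestation. The shape and field names are just my illustration — this isn’t any existing spec or NIP — but the point is that the context lives inside the attestation itself, stated by me, with no algorithm in the loop:

```typescript
// Hypothetical shape for an explicit contextual trust attestation.
// Field names are illustrative, not taken from any existing standard.
interface TrustAttestation {
  issuer: string;   // who is making the attestation (e.g. my pubkey)
  subject: string;  // who is being trusted (e.g. Alice's pubkey)
  context: string;  // the context, stated explicitly and flat out
  score: number;    // e.g. -1 (distrust) to 1 (full trust)
  issuedAt: number; // unix timestamp
}

// I can say exactly what I mean, even for a niche context
// with no digital footprint behind it:
const attestation: TrustAttestation = {
  issuer: "my-pubkey",
  subject: "alice-pubkey",
  context: "literature/18th-century-french/tree-symbolism",
  score: 0.9,
  issuedAt: Math.floor(Date.now() / 1000),
};
```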
I’m not saying we can’t ALSO use proxy data or algorithms or AI or filters to derive context. But are the derived contexts trustworthy? Maybe they are, maybe they’re not. Maybe I trust them, maybe I don’t. So let’s offer BOTH approaches: explicitly stated contextual trust, AND algorithmically derived contextual trust. I predict the former will turn out to be more useful, but there’s no reason we can’t use both methods and find out.
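And to illustrate the “both approaches” idea: a content filter could accept contextual trust from either source, tag each entry with how its context was produced, and let the user decide whether to honor the derived ones. Again, this is a sketch with made-up names, not a real implementation:

```typescript
// Hypothetical: contextual trust entries tagged by how the
// context was produced.
type ContextSource = "explicit" | "derived";

interface ContextualTrust {
  subject: string;
  context: string;
  score: number;
  source: ContextSource; // stated by a human, or inferred by an algorithm
}

// A simple filter policy: always honor explicitly stated attestations,
// and only honor derived ones if the user has opted in to trusting
// the deriving algorithm.
function effectiveTrust(
  entries: ContextualTrust[],
  context: string,
  trustDerived: boolean
): ContextualTrust[] {
  return entries.filter(
    (e) =>
      e.context === context &&
      (e.source === "explicit" || (e.source === "derived" && trustDerived))
  );
}
```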