Here are my two cents on the questions:
1. App developers often introduce bugs, and for list events that means it's easy to unwittingly erase the whole list when a user wants to add a new pubkey but the client can't find the previous event on the network. Apart from that, it's easier to build a graph from individual events (as VictorPamplona mentions in the GitHub issue).
1.5. Replaceable events or static events? Replaceable events can be updated if the trust relation changes over time. Also, event deletion seems to be poorly implemented in relays.
2. Another model is private to the author + shared with a relay/DVM when needed. I think the common sentiment is that we all see use cases for private trust events, but the implementation is more complex. We need an interactive protocol for this, as Hodlbod mentioned, or ZK proofs, and who knows what else. For now, I think we should go public, discover the possibilities, and implement the private model once we know more about the problem and see what the main use cases are.
3. I would vote for optional scalar "score" and "confidence" values. If they are absent, the statement is a binary true (a rough sketch follows after this list).
4. I like the "context" term. I think it could be optional, but I see a lot of value in embedding it into the event so that filters can use it as input. I don't clearly see the use case for having it derived by filters. Is it like "I trust this guy", and the algorithm tells me the context that trust was used in?
5. I'm probably not the one who has dug into it most deeply. Not sure if this is what you mean by spec, but I'd like to know: what are the inputs/outputs, where should the filter run (client / relay / DVM / phone), and when (when content is created OR when it is queried)? Should I be able to make a relay subscription based on a content filter?
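To make 3-5 a bit more concrete, here is a purely illustrative sketch, not taken from any existing NIP: the kind number and the "score" / "confidence" / "context" tag names are placeholders I made up. It shows one individual, addressable trust event per trusted pubkey, plus a NIP-01-style subscription filter a client (or DVM) might send to a relay.

```typescript
// Hypothetical sketch only: kind number and tag names are placeholders, not a NIP.

// One addressable trust attestation per trusted pubkey, so publishing a new
// attestation can never accidentally erase the rest of the "list".
const trustAttestation = {
  kind: 30382, // assumed parameterized-replaceable kind, purely illustrative
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["d", "<trusted-pubkey-hex>"], // trusted pubkey doubles as the addressable identifier
    ["p", "<trusted-pubkey-hex>"],
    ["score", "0.8"],              // optional; if absent, the attestation is a binary "trusted"
    ["confidence", "0.6"],         // optional
    ["context", "nostr-dev"],      // optional; embedded so filters can take it as input
  ],
  content: "",
};

// A NIP-01-style subscription filter to pull one author's attestations.
// Note: NIP-01 only guarantees indexing of single-letter tags, so filtering
// on "#context" assumes relay support beyond the base spec.
const trustFilter = {
  kinds: [30382],
  authors: ["<my-pubkey-hex>"],
  "#context": ["nostr-dev"],
};

console.log(JSON.stringify(trustAttestation, null, 2));
console.log(JSON.stringify(trustFilter, null, 2));
```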
This seems like a halfway-to-federation approach, which I have no issue with. My issue is that data on a relay should be inaccessible to relay operators unless access is specifically granted by the user for a specific time and task. This probably won't happen, though, since it would imply moving the filtering / trust capabilities to the edge by necessity, and I don't see relay operators wanting to give up access to people's data.
Anything less than this, while relying on relays, leaves the door open for abuse. It doesn't matter if the current devs and relay runners are saints; we don't know who comes next. It's a common problem: once sysadmins, DBAs, and SLA holders get their hands on data, they can't help themselves. Eventually they start poking around and conflate "their server" with "their data".
My real concern is that, out of expedience, we are laying the groundwork for future abuse and social engineering.
Good luck as you proceed.
I will watch with interest.
Dude. You have awesome feedback. And now I have even more questions… so no … you don’t get to just “drop a comment and go”.
- First … Nostr already has a lot of public data for anyone (including relay admins) to use in developing algos and such.
- Encrypted events are becoming a thing in Nostr. These “gift wrapped” events are mostly opaque to relay operators, which will complicate everything related to server-side data processing.
- Client-side data processing of encrypted events will still be possible … and Nostr SHOULD have a NIP in place for a “privacy agreement” or something that keeps clients transparent and honest (like a checkbox: “allow client-side-only processing of private DMs to improve WoT score…”).
- Given all this… private “is trusted” events will be the first challenge to design an implementation standard for. Yes, these will need to be encrypted (who is trusting whom) AND available for server-side processing (who’s in my trust network) … I’m “confident” this is doable.
So yea… let’s be clear and lay solid groundwork to let WoT on Nostr be sovereign by design.
Unfortunately I am spending all my extra cycles on learning market stuff and setting up the family migration out of Canada rn. When I get through this cycle I’ll get back to active participation in projects as a funder or dev or both. For now, though, my plate is full, so I have to satisfy myself with yelling from the audience.
I’ll jump on stage when time allows and if there is no one else to do the specific thing. Other than that, am jus pleb 4 now ser.
Just so you know … we’re actually on the same page here. WoT on Nostr can and should be sovereign. Only we can make it so. I’ll keep you posted.
My preferred solution is similar to 2, but instead of a DVM we use a local LLM.
Local LLMs could be considered in the NIP design, but the NIP itself shouldn’t care (or specify) HOW or WHERE a “WoT Filter” algo should be run. A free market of choices for publishing and subscribing to “WoT filters” is how sovereignty wins.
(A “WoT Filter” could be anything that processes Nostr data to determine “trust rank” or “recommendations” for users or content.)
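For illustration only, the contract could be as narrow as the interface sketched below: Nostr events in, trust judgements out, with HOW and WHERE it runs (client, relay, DVM, local LLM wrapper) left entirely to the market. Every name here is hypothetical; nothing is specified by any NIP.

```typescript
// Hypothetical interface sketch; all names are made up for illustration.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

interface TrustJudgement {
  subject: string;     // pubkey or event id being ranked
  score: number;       // e.g. 0..1
  confidence?: number; // optional, mirrors the event-level idea above
  context?: string;    // optional context the score applies to
}

interface WotFilter {
  // Process whatever events the runner chose to feed in, from the
  // perspective of viewerPubkey, and return trust judgements.
  rank(viewerPubkey: string, events: NostrEvent[]): Promise<TrustJudgement[]>;
}

// A trivial local implementation (placeholder logic): anyone the viewer has
// directly attested to via a "p" tag gets score 1; everyone else is omitted.
const naiveFilter: WotFilter = {
  async rank(viewerPubkey, events) {
    return events
      .filter(e => e.pubkey === viewerPubkey)
      .flatMap(e =>
        e.tags
          .filter(t => t[0] === "p")
          .map(t => ({ subject: t[1], score: 1 }))
      );
  },
};
```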
True, if there is an overarching narrative or design philosophy that pushes toward decentralization as the default path. We don’t currently have that. All the filtering projects I have seen have been focused on the central utility we call a relay. It’s expedient. It’s dangerous.