To your point, here’s the current summary of my approach:
1. Decide what question you want to answer.
2. Select sources of raw data that are 1) available to you and 2) relevant to the question.
3. Translate the raw data into a format suitable for consumption by the grapevine algo.
4. Crunch the numbers.
Suppose the question is to maintain a list of nostr users who are not bots. In step 2, you may decide that follows (and mutes, and zaps) are the best sources of CURRENTLY AVAILABLE data, so those are what you use today. But if tomorrow a better source of data becomes available, you can throw it into the mix to improve the quality of the end result. And you can use multiple sources of data at the same time: no need to pick and choose. What you can and probably will do is adjust the relative weights you give to each data source. So as new sources of data become more available, you may want to gradually decrease the “weight” you attribute to follows towards zero. And indeed, as more sources of raw data become available, you may decide you want to alter the question from step 1. Not because you didn’t previously care about that question, but because you simply didn’t have any relevant data to work with. This, too, can happen gradually.
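Not my actual implementation, just to make the shape concrete: a minimal TypeScript sketch of how steps 2–4 could fit together, assuming each data source reduces to a per-pubkey score and the weights are what you tune over time. Everything in it (the DataSource interface, the example weights, the 0.5 threshold) is illustrative, not the grapevine algo itself.

```typescript
// Minimal sketch of steps 2-4, assuming a simple weighted combination of sources.
// All names and numbers here are illustrative.

type Pubkey = string;

// Stand-ins for locally indexed raw data; in practice these would be built from
// follow lists (kind 3), mute lists (kind 10000) and zap receipts (kind 9735).
const followedByNetwork = new Set<Pubkey>();
const mutedByNetwork = new Set<Pubkey>();
const zappedByNetwork = new Set<Pubkey>();

// Each source turns its raw data into a number in [-1, 1] for a pubkey:
// positive is evidence of a real person, negative is evidence of a bot.
interface DataSource {
  name: string;
  weight: number; // relative weight, tunable as better sources become available
  score(pubkey: Pubkey): number;
}

const sources: DataSource[] = [
  { name: "follows", weight: 0.5, score: (pk) => (followedByNetwork.has(pk) ? 1 : 0) },
  { name: "mutes",   weight: 0.3, score: (pk) => (mutedByNetwork.has(pk) ? -1 : 0) },
  { name: "zaps",    weight: 0.2, score: (pk) => (zappedByNetwork.has(pk) ? 1 : 0) },
];

// Step 4: crunch the numbers as a weighted average. Adding a new data source later
// is just another entry in the array; phasing follows out is gradually lowering
// its weight towards zero.
function scorePubkey(pk: Pubkey, srcs: DataSource[]): number {
  const totalWeight = srcs.reduce((sum, s) => sum + s.weight, 0);
  const weighted = srcs.reduce((sum, s) => sum + s.weight * s.score(pk), 0);
  return totalWeight > 0 ? weighted / totalWeight : 0;
}

// The step-1 question ("is this user not a bot?") becomes a threshold check.
const probablyNotABot = (pk: Pubkey) => scorePubkey(pk, sources) > 0.5;
```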
nostr:nevent1qqs0nwuu9fnex9r2jqu2aew8ar25js85msjrxckvk7e5v6tct68gzxgpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsyg89yuk7j99axqt4t3pehz8xjkdy8jwjveyrruync50fc7v6z6ss9upsgqqqqqqswrr3se
I like it. I'm not sure how well that much granular control would work at a user level, but it sounds like a reasonable approach for relays to take. It seems to me we need elegant solutions at both levels.
Yea I agree, and would add that the granularity would be hidden from the user, who would be presented with some sane defaults and friendly interface options (which control the granularity under the hood). Like when you enter a word into a "filter" setting field: you don't have to write the pattern-matching regular expression or spell out which exact fields of a note you want to match on... You just write a normal word and hit save (something like the sketch below).
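To make that concrete, here's a rough sketch (TypeScript, names made up, event shape per NIP-01) of what "write a normal word and hit save" could compile down to under the hood; the fields checked and the whole-word matching are assumptions, not any particular client's behavior.

```typescript
// Hypothetical sketch: the user types a plain word into a filter field and the
// client builds the pattern matching, deciding which fields of a note to check.

interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
}

// Turn the saved word into a predicate the client can apply to incoming notes.
function buildMuteWordFilter(word: string): (event: NostrEvent) => boolean {
  // Escape regex metacharacters, then match the word case-insensitively.
  const escaped = word.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const pattern = new RegExp(`\\b${escaped}\\b`, "i");
  // Check the note content and any hashtag ("t") tags by default.
  return (event) =>
    pattern.test(event.content) ||
    event.tags.some(([tagName, tagValue]) =>
      tagName === "t" && typeof tagValue === "string" && pattern.test(tagValue));
}

// The user typed "airdrop" and hit save; the client hides notes that match.
const shouldHide = buildMuteWordFilter("airdrop");
```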
Or in many cases, you choose a client that has a lot of spam protection built in and you don't even know it. I would expect to see dozens of clients appearing that all handle WoT in different ways without burdening the user with too many options.
Agreed. This is inevitable. I expect it will also become more difficult to discern which of those clients are 'protecting' you from more than just spam.
Agreed, but that's also why Nostr is a phase change and not an incremental improvement. You can always exit a client with no loss of data or network.