Just because one leaves a trail of breadcrumbs does not mean they are hungry.
Just because the data is there does not mean it has to be used to manipulate the people it came from.
The ad services are free to scoop up this info and do with it as they please; they could be doing it now. The fact that (some of) the data is out there is moot to the question of whether the community is better served by handing over permission to be manipulated.
Thankfully, nostr inherently gives users the absolute ability to never lose control over their experience.
The larger question isn’t whether ad-driven clients should be created, imo. It seems it should be “how can nostr protect users from the inevitable onslaught of noise (DMs, comments, etc.) that will come from amoral actors?” A question that has been here the whole time.
I would like to see more brainstorming on this notion, first.
Muting isn’t going to be sufficient. Imagine DDoS levels of noise being generated.
All I can think of is super strict modes of “only people I follow and that they follow”, web-of-trust style. But that breaks down when money gets involved (influencers).
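Roughly, that strict mode could be done client-side like this. This is only a sketch: it assumes kind-3 contact lists with “p” tags (NIP-02), and leaves relay fetching abstract; all function names are illustrative.

```typescript
// "Follows of follows" filter, web-of-trust style.
// How contact lists are fetched from relays is out of scope here.

interface NostrEvent {
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
}

// Extract followed pubkeys from a kind-3 contact list event ("p" tags).
function followsFrom(contactList: NostrEvent): string[] {
  return contactList.tags
    .filter((tag) => tag[0] === "p" && tag[1])
    .map((tag) => tag[1]);
}

// Build the allowed set: my follows plus everyone they follow (two hops).
function buildWebOfTrust(
  myContactList: NostrEvent,
  contactListsByAuthor: Map<string, NostrEvent>
): Set<string> {
  const allowed = new Set<string>();
  for (const followed of followsFrom(myContactList)) {
    allowed.add(followed);
    const theirList = contactListsByAuthor.get(followed);
    if (theirList) {
      for (const secondHop of followsFrom(theirList)) {
        allowed.add(secondHop);
      }
    }
  }
  return allowed;
}

// Strict mode: drop any incoming event whose author is outside the set.
function passesStrictMode(event: NostrEvent, allowed: Set<string>): boolean {
  return allowed.has(event.pubkey);
}
```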
Yes. Mute and follow only go so far. An explicit “is trusted” marker will be needed to implement Web of Trust over Nostr. Making “is trusted” markers private will help keep influencers and money out, but influencers, money, and bots will always be the challenge for better WoT tools.
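One way such a marker could layer on top of the follow graph, sketched below. Keeping the list local (or encrypted to yourself) is what makes it private; the storage mechanism and class/method names here are purely illustrative, not an existing NIP.

```typescript
// Explicit "is trusted" markers kept private on the client.

type TrustMark = { pubkey: string; trusted: boolean; addedAt: number };

class PrivateTrustList {
  private marks = new Map<string, TrustMark>();

  mark(pubkey: string, trusted: boolean): void {
    this.marks.set(pubkey, { pubkey, trusted, addedAt: Date.now() });
  }

  // Explicit distrust overrides web-of-trust membership; explicit trust
  // lets someone through even if they fall outside the two-hop set.
  isAllowed(pubkey: string, webOfTrust: Set<string>): boolean {
    const mark = this.marks.get(pubkey);
    if (mark) return mark.trusted;
    return webOfTrust.has(pubkey);
  }
}
```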
The pioneering of WoT for social media is Nostr’s game to lose. Along with “crowd recommendation” of content and follows (and “other stuff” users want), “trusted” advertisers and products will also thrive on the Nostr network.
nostr:note1ta9gzmtewn7jnslhjd8ew9322qp3rnyc56e9a0rj9f0x2ftndxhqcy7dwt