Users declare which relays they read from and write to in their relay list. Clients then know where to send notes when tagging specific people, and also where to read notes from when pulling someone's profile feed.
Timelines are a bit more tricky: you have to somehow gather the relay list of everyone you follow, then find the common relay subsets among all of them:
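As a rough sketch, a relay list is itself a note (NIP-65, kind 10002) with one `r` tag per relay, optionally marked `read` or `write`; unmarked means both. The URLs below are placeholders:

```python
# Placeholder relay-list event in the NIP-65 shape (kind 10002).
relay_list_event = {
    "kind": 10002,
    "content": "",
    "tags": [
        ["r", "wss://relay.example.com"],            # read and write
        ["r", "wss://inbox.example.com", "read"],    # others tag me here
        ["r", "wss://outbox.example.com", "write"],  # I publish here
    ],
}

def write_relays(event: dict) -> list[str]:
    """Relays this user writes to (an unmarked tag counts as write too)."""
    return [t[1] for t in event["tags"]
            if t[0] == "r" and (len(t) == 2 or t[2] == "write")]

print(write_relays(relay_list_event))
# ['wss://relay.example.com', 'wss://outbox.example.com']
```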
Alice, Bob, and Jack are writing to relay A
Tom and Jerry are writing to relay B
Phil is writing to relays A and C
Then your timeline feed is:
subscribe on A to Alice, Bob, Jack, Phil
subscribe on B to Tom, Jerry
You want an algorithm that minimizes the total number of relays you connect to.
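This relay-selection step can be sketched as a greedy set-cover heuristic. This is a minimal illustration (Python, hypothetical data, not any client's actual algorithm): repeatedly pick the relay that covers the most still-uncovered follows.

```python
def pick_relays(write_relays: dict[str, set[str]]) -> dict[str, set[str]]:
    """write_relays maps each followed author to the relays they write to.
    Returns a plan mapping chosen relay -> authors to subscribe to there."""
    uncovered = set(write_relays)
    # Invert the mapping: relay -> authors who write there
    by_relay: dict[str, set[str]] = {}
    for author, relays in write_relays.items():
        for r in relays:
            by_relay.setdefault(r, set()).add(author)
    plan: dict[str, set[str]] = {}
    while uncovered:
        # Greedily take the relay covering the most uncovered authors
        best = max(by_relay, key=lambda r: len(by_relay[r] & uncovered))
        gained = by_relay[best] & uncovered
        if not gained:
            break  # remaining authors have no known relays at all
        plan[best] = gained
        uncovered -= gained
    return plan

follows = {
    "alice": {"A"}, "bob": {"A"}, "jack": {"A"},
    "tom": {"B"}, "jerry": {"B"},
    "phil": {"A", "C"},
}
print(pick_relays(follows))
# -> relays A and B only, with the authors to request from each
```

Greedy set cover isn't optimal in general, but it's cheap and good enough for this use case; the exact problem is NP-hard.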
You’re also hoping these relays are reliable and have decent uptime.
Overall it might save a bit of upload bandwidth when querying, but it makes your client connect to a bunch of random relays that other users have listed in their profiles, many of which might be offline, unreliable, or malicious. So you need to make sure your client is hardened against bad relays.
The dumbest way of doing it is Damus' original model: just reading from and writing to the relays in your own relay list. It's much simpler client-logic-wise, but you have to manually add and remove relays to stay overlapped with your friends.
This is a very good explanation. Thanks!
Thank you for explaining.
Regarding connecting to unreliable and malicious relays: what do you think about adding some sort of reliability check on relays (historical performance, uptime, responsiveness, etc.)?
Yes, I think that would be vital for implementing this in a way that produces reliable outcomes.
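As one hedged sketch of what such a check could look like (the fields, weights, and threshold here are made up for illustration, not from any client), a client could score relays from locally observed connection stats and deprioritize low scorers:

```python
from dataclasses import dataclass

@dataclass
class RelayStats:
    attempts: int          # connection attempts observed locally
    successes: int         # connections that succeeded
    avg_latency_ms: float  # average response latency

def health_score(s: RelayStats) -> float:
    """Returns a 0..1 score; weights (0.7 uptime, 0.3 speed) are arbitrary."""
    if s.attempts == 0:
        return 0.5  # never-seen relay: neutral prior
    uptime = s.successes / s.attempts
    speed = 1.0 / (1.0 + s.avg_latency_ms / 1000.0)
    return 0.7 * uptime + 0.3 * speed
```

A client could then skip relays scoring below some cutoff when building the subscription plan, while still retrying them occasionally so scores can recover.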
Smart users can easily find and ignore bad actors like agency-run relays.
Dumb users will stick to the defaults, as always.
No need for historical performance tracking; a WoT-style rating or trust model would do, similar to the bitcoinmint rating idea. noStrudel had a relay review section before.
What if a SWAT team comes and pulls a gun on the relay admin and asks him to hand over all relay logs and records? A relay's history doesn't mean it will remain the same relay forever.
HTTP doesn't have any redundancy at this level. Redundancy for websites means using multiple servers behind the same IP address, usually through network routing tricks. DNS round-robin, for example, doesn't quite work.
And yet HTTP servers seem to be doing just fine. Relays will become as reliable as websites over time; there is no reason they shouldn't.
Plus then we have our redundancy of specifying multiple inboxes or multiple outboxes.
So I don't think there is a long-term problem with reliability of relays.
Currently, though, sometimes all of somebody's relays are not working properly, because Nostr is still a work in progress.
Yeah, I also think that those who run relays here not just for fun but to support the network will keep doing so, so it should get more stable over time. Regarding the malicious stuff, I'm not sure though. How do we deal with that?
Advanced users should always be able to pick their own relays.
Yes, there will be bad actors and agency-controlled relay admins too.
Entry-level users will always stick with whatever default is given to them by the app or portal.
If I understood it properly: the scenario of malicious relays (planting a relay in a profile and baiting a user into tagging them, in order to harvest their IP address) is something first-time or non-technical users might not be aware of.