there is nothing wrong with RSS. it has served us well for nearly 25 years. but there are several points worth considering. for this example, I will use a static HTML site, something like that generated by Hugo, where the author writes markdown, compiles the static files (including an RSS XML file), and then publishes them to a web server.

RSS:

1) is based on polling, which adds a non-zero cost for the host, who has to handle a constant barrage of requests (assuming some popularity...), and also requires that the source be online to serve the content. obviously there are ways to deal with all of this, but the naive setup is where most implementations land (a minimal polling loop is sketched after this list).

2) has limited capacity. for a subscriber to have access to content that is not included in the XML, some other means is required; hence, we end up with centralized solutions like Feedly and the like. again, this is not a huge problem, as the content of the XML can be limited to URLs and descriptions, and the standard approach in RSS implementations is to include only the most recent n items.

3) has a single point of failure: the file server that returns the XML, which is probably the same server that makes the deep-linked content available as well.

4) leaves many traces indicating one's interest in the content. there will be a DNS query for the domain the XML is hosted on, and further requests to retrieve the content itself.
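
to make point 1 concrete, here is a minimal sketch of the loop every subscriber's reader ends up running against the host. the feed URL and interval are placeholders; a well-behaved reader uses conditional GETs so the host can answer 304 with no body, but the request still lands on the host either way:

```python
# naive polling sketch; FEED_URL and the interval are made up
import time
import urllib.error
import urllib.request

FEED_URL = "https://example.com/index.xml"  # placeholder

def poll_forever(interval_seconds=900):
    last_modified = None
    while True:
        req = urllib.request.Request(FEED_URL)
        if last_modified:
            # conditional GET: lets the host skip sending the body,
            # but the request itself still arrives, per subscriber, per interval
            req.add_header("If-Modified-Since", last_modified)
        try:
            with urllib.request.urlopen(req) as resp:
                last_modified = resp.headers.get("Last-Modified")
                xml = resp.read()  # feed changed; hand off to the parser
        except urllib.error.HTTPError as err:
            if err.code != 304:  # 304 Not Modified: nothing new this round
                raise
        time.sleep(interval_seconds)
```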

again, these are not horrible tradeoffs for something so valuable, and I use RSS every day to great effect. but #nostr brings some interesting features to the table that address these points. the equivalent in nostr would be an npub that publishes the same markdown files that went into the Hugo implementation as NIP-23 long-form notes.
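
for a feel of what that looks like on the wire, here is roughly the shape of one markdown post republished as a NIP-23 long-form note (kind 30023). this is a sketch; the identifiers and signature are placeholders normally produced by serializing and signing the event:

```python
note = {
    "kind": 30023,
    "pubkey": "<author's public key, hex>",
    "created_at": 1700000000,
    "tags": [
        ["d", "my-first-post"],        # stable identifier: edits replace rather than duplicate
        ["title", "My First Post"],
        ["published_at", "1700000000"],
        ["summary", "what the RSS <description> would have carried"],
    ],
    "content": "# My First Post\n\nthe same markdown Hugo would have compiled...",
    "id": "<sha256 of the serialized event>",
    "sig": "<schnorr signature over the id>",
}
```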

1) push-style delivery. notes arrive near-instantaneously at clients with open connections, piggybacking on network connections that are already there (assuming you are using nostr for anything else). a sketch of the subscriber side follows this list.

2) capacity is hard to compare directly since they are different modes of access, but assuming you are reading notes via a client that supports it, you can query the indicated relays for any other long-form notes from the same npub and see all previously published content, rather than relying on only the items present in the XML. the same subscription sketched after this list doubles as that query.

3) the long-form notes that make up the content of the nostr version of the feed could just as easily disappear, or be published to a single relay and forgotten, but it is much more likely that the notes end up on several large relays and spread hither and yon from there. and since a note serves as notification AND content, being notified of the content and having the content are the same thing.

4) again, since the note contains the content itself, the only trace visible to the networks you traverse to reach a relay is that you connected to that relay, not which npubs you queried or which notes were delivered to you.
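
here is the subscriber-side sketch referenced in points 1 and 2, using Python's websockets package; the relay URL and author key are placeholders. one REQ covers both points: per NIP-01 the relay first returns stored events matching the filter (the back catalog), signals EOSE, then pushes new matching events over the open connection:

```python
import asyncio
import json
import websockets

AUTHOR = "<author's public key, 64 hex chars>"  # placeholder
RELAY = "wss://relay.example.com"               # placeholder

async def follow():
    async with websockets.connect(RELAY) as ws:
        # ask for the author's long-form notes (NIP-23 kind 30023)
        await ws.send(json.dumps(
            ["REQ", "feed", {"kinds": [30023], "authors": [AUTHOR], "limit": 500}]))
        async for raw in ws:
            msg = json.loads(raw)
            if msg[0] == "EVENT":
                note = msg[2]  # notification AND content in one message
                print(note["created_at"], note["content"][:60])
            elif msg[0] == "EOSE":
                print("-- back catalog done; now waiting for new notes --")

asyncio.run(follow())
```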

I'd like to know what you think the "deep and fundamental flaw" in this argument is. there's no magic, just better mechanical sympathy between the use case and the implementation, in my opinion.