So...
Let's suppose a relay operator "filters" out spam. Users are happy, and they sign up for that relay.
And it prospers; in fact, it becomes one of the most utilized relays on Nostr...
And then let's suppose that, at some point in the future, the relay operator decides that "Hunter's Laptop" is disinformation, and for the good of his users he begins "filtering" that too...but he never tells his users.
You can (of course) think of many such examples.
And yes, in this model, what and how to "filter" becomes the choice of each relay operator...and yet, if so, it also becomes the RESPONSIBILITY of those relay operators to act altruistically and not to become individual arbiters of truth.
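To see why this choice is invisible to users, consider a minimal sketch of a hypothetical relay-side filter. All names here are made up for illustration; nothing below is part of the Nostr protocol itself. The point is that filtering is just server code, and a dropped event produces no signal a client could detect:

```python
# Hypothetical illustration only: a relay-side filter is just private
# server code. These names are invented, not part of the Nostr spec.

BLOCKED_TERMS = {"spam-link.example"}  # starts as anti-spam...

def relay_accepts(event: dict) -> bool:
    """Decide whether this relay stores/forwards an event.

    When this returns False, the publishing client gets no warning;
    the event simply never appears to this relay's other users.
    """
    content = event.get("content", "").lower()
    return not any(term in content for term in BLOCKED_TERMS)

events = [
    {"id": "1", "content": "hello nostr"},
    {"id": "2", "content": "buy now at spam-link.example"},
]

accepted = [e for e in events if relay_accepts(e)]
# Only event "1" survives; clients see no trace that "2" was dropped.
```

Adding a political phrase to `BLOCKED_TERMS` is a one-line change on the operator's machine, which is exactly why silent scope creep is so easy.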
This then becomes the proverbial "slippery slope"...
And while advocates would say "I'd never censor 'Hunter's Laptop'," the unfortunate truth is that some relay operators will likely be tempted to inject their own biases into their relays.
How will relay users then know what's being "filtered" (censored) by the relay operators? Or will users have to blindly trust those operators not to censor something else? And isn't that exactly what happened with Facebook and Twitter (and why Nostr was "born" in the first place)?