Thanks for a reasonable and nuanced take as always Derek. 🫂💜 To answer your question, Primal users can choose whether they want to see NSFW content in their feeds and/or in the trending sidebar. The default is that NSFW content shows in the feeds, but not in the trending sidebar. This is an easy setting for users to flip. We are working on a more sophisticated solution where NSFW content will be categorized on a per-note basis (not a per-user basis). @𝓞𝓷𝔂𝔁.𝓥𝓲𝔁𝓮𝓷 is a perfect example of an account that posts lots of interesting content with a few occasional NSFW notes. Our new system would make it possible for her regular content to trend by default. A lot of the criticism we get can be summarized as "why isn't Primal perfect today?!". We are building and improving things every day. Everything we've done has always been transparent. Happy to engage in good faith, but Semi obviously hasn't been willing to do that. Faking this advisory to generate outrage is obviously over the top. Excusing his behaviour looks a bit weird to me, tbh.
The Derek Beer Index is always accurate. 🍻🍻 You get a lot of criticism and shit and always handle it in a classy manner. 🫂🫂 I look forward to the future per-note filter enhancements for trending. I knew you'd get around to it, given enough time. Thank you. 🫂🫂
I think when we spend the bulk of our time online here we tend to miss the big picture. We are a network of around 10,000 active users, on a GOOD day. The protocol isn’t perfect. The clients aren’t perfect. The overall UX is still messy and confusing. We’ve barely gotten started, we’re not even close to being ready for the world to show up. We have to give people room to try things, get them wrong, and try something else without being mean to each other in the process. That goes against everything we embody.
Being the author of slidestr.net AND posting some NSFW content on my photography profile, I know this issue from both angles. Public web clients need some kind of default filter. Using ML seems to be a good approach, but we must also accept that this will never be solved perfectly. A year from now we might be debating what is acceptable, or finding users trying to game these systems. With more video content, classification will also become more complicated and costly. IMHO, filter choices should be part of an onboarding process where a user selects their interests and can also decide on an NSFW filter explicitly. It would be better to mark/blur content than to hide accounts completely without the user knowing. If there ever is a "suggested accounts for #photography" list, I'd like to be on it and not be omitted by some default behavior.
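The mark/blur-instead-of-hide idea discussed above can be sketched roughly like this. This is only an illustration, not Primal's or slidestr.net's actual implementation: the `nsfw_score` field stands in for the output of some hypothetical ML classifier, and the threshold value is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Note:
    note_id: str
    author: str
    nsfw_score: float  # hypothetical classifier output in [0.0, 1.0]

def moderation_action(note: Note, user_allows_nsfw: bool,
                      blur_threshold: float = 0.5) -> str:
    """Decide per note (not per account) how to present content.

    Rather than hiding an entire account, only the individual notes
    whose classifier score crosses the threshold are blurred for
    users who opted out; everything else renders normally.
    """
    if note.nsfw_score < blur_threshold:
        return "show"
    return "show" if user_allows_nsfw else "blur"

# Example: a mostly-SFW photography account with one flagged note.
notes = [
    Note("a1", "photographer", 0.05),
    Note("a2", "photographer", 0.92),
]
actions = [moderation_action(n, user_allows_nsfw=False) for n in notes]
```

The key design point is that the decision function takes a note, not an author, so one occasional NSFW post never suppresses an account's regular content from feeds or suggestions.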
Rabble mentioned this for content categorisation: https://nostrcheck.me/media/public/4582727a23026067da6e686caa5370753237bc4f340b9a56e1f18a5f6f9d64ee.webp