It works really well for files under 100KB. We're waiting on new relay codebases that can manage large blobs better. I think this is inevitable, but it doesn't work well on any of the current relay implementations.
I’ve got an experimental implementation of multipart NIP-95 files working. Hope to clean it up and have it on my server for folks to check out this week.
Nice! Make it super optimized for blobs and people will pay attention. We can have specialized relay implementations (like a preferred-relay list for blobs) to be used only with these event types if necessary.
It’s not yet super optimized. I’m making sure the rules around accepting such messages and reassembling them work well first. Optimization of the implementation itself can come in a later iteration, without changing the public interface. Plus, any implementation details there will probably be specific to the relay implementation. Happy to optimize mine as an example to follow, but I don’t imagine it’ll just be a copy/paste to add it to other relays.

The ideas I have for optimization are basically: deconstruct the kind 1064 JSON messages so you can store the blob as binary instead of base64. (This is done transparently on the server. It still has to reconstruct and serve text JSON events when they’re requested, since the Nostr protocol requires it.) You can do that at the event level and save some space, and it’s the simpler implementation.

But, if you expect people might re-post the same content, an implementation that can reassemble the full file and store it in a content-addressable store would be even better. That gets more complicated with multipart files, though: you can’t decode, verify, and store the whole blob until all messages are present.
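To put rough numbers on the binary-vs-base64 idea: standard base64 turns every 3 input bytes into 4 output characters, so decoding before storage saves about a quarter of the blob's footprint. A quick back-of-envelope sketch (the function name is mine, not from any relay code):

```rust
/// Size of `n` bytes after standard base64 encoding (with padding):
/// every 3 input bytes become 4 output characters.
/// Hypothetical helper for illustration only.
fn base64_len(n: u64) -> u64 {
    n.div_ceil(3) * 4
}

fn main() {
    // A 1 MiB blob stored as base64 text inside the event JSON:
    let raw: u64 = 1024 * 1024;
    let encoded = base64_len(raw);
    // Storing the decoded bytes instead drops ~25% of the encoded size.
    println!("raw {raw} bytes, base64 {encoded} bytes");
}
```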
Alright, #nostr / #nostrdev folks, I've got my implementation of multipart NIP-95 files up here: https://github.com/NfNitLoop/yastr/blob/main/src/server/nip95.rs (Yes, it's very messy. I just saw all the TODOs, which I've already done. 😆)

And I've got a little tool to upload multipart files here: https://github.com/nfnitloop/nostr-cli/ (the `nt upload` command).

Since clients might not know how to put together multipart files yet, my relay (wss://www.nfnitloop.com/nostr/) also makes them available via HTTP. For example: https://www.nfnitloop.com/nostr/files/bee3454df946e79724b0ec972242ba2d2e3a1fc185931a7a45def81a5ffb194c/

There is room for space-saving in the storage implementation. If this takes off I'd want to store the blobs in binary instead of base64. And I'd probably want to dedupe whole files so that we don't end up having to store multiple copies of them if multiple people upload them. BUT, that can come later without changing the public interface.

Before *that*, I want to implement HTTP range requests to show that the blockSize metadata allows a client or server to efficiently fetch bytes from a known offset. That way you could, for example, scrub to a certain position in a video without having to download all the bytes up to that point.
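The blockSize trick is just integer arithmetic: with a fixed block size, the part holding a byte and the byte's position inside that part fall out of one division. A minimal sketch of what a client or server could do (hypothetical helper, not the actual yastr code):

```rust
/// Given a byte offset into the reassembled file and the fixed block size
/// from the multipart metadata, return (part index, offset within that part).
/// Hypothetical helper for illustration only.
fn part_for_offset(offset: u64, block_size: u64) -> (u64, u64) {
    // Integer division picks the part; the remainder is the position inside it.
    (offset / block_size, offset % block_size)
}

fn main() {
    // With 64 KiB blocks, byte 1_000_000 lands in part 15, 16_960 bytes in.
    let (part, within) = part_for_offset(1_000_000, 64 * 1024);
    println!("part {part}, offset {within}");
}
```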
And now I’ve added HTTP Range support. ❤️ to the axum-range crate, which made this much simpler. (Though I did have to delve deeper into Rust async than I have before. But I was happy to learn more there.)

Here’s a sample video which I saw making the rounds on Mastodon earlier this week: https://nfnitloop.com/nostr/files/fb8e82bed22cf9c8ee39ef2898970beee6fec7ca9068e1506a061f22f5ec1ae7/

Note that only the first part of the video is fetched. You can jump past that and the server will start loading from the exact point you jump to.

The algorithm the server uses to answer these range requests could also be implemented client-side. (Say, as the Blob interface in the browser.) I’m just not implementing a client. (Yet. 😉)
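Answering a byte-range request against fixed-size parts comes down to walking the parts the range touches and taking a slice of each, so only those events ever need to be fetched. A rough sketch under that fixed-block-size assumption (types and names are mine, not the relay's; the real code lives in yastr's nip95.rs):

```rust
/// One contiguous slice of one part needed to satisfy a byte-range request.
/// Hypothetical types for illustration only.
#[derive(Debug, PartialEq)]
struct PartSlice {
    part: u64,  // index of the multipart event holding these bytes
    start: u64, // first byte to take from that part
    end: u64,   // one past the last byte to take
}

/// Translate an inclusive byte range (e.g. "bytes=50-199") into the
/// minimal set of part slices, assuming a fixed block size per part.
fn slices_for_range(first: u64, last: u64, block_size: u64) -> Vec<PartSlice> {
    let mut out = Vec::new();
    let mut pos = first;
    while pos <= last {
        let part = pos / block_size;
        let start = pos % block_size;
        // Stop at the end of this part or the end of the range, whichever is first.
        let part_end = (part + 1) * block_size - 1;
        let stop = part_end.min(last);
        out.push(PartSlice { part, start, end: start + (stop - pos) + 1 });
        pos = stop + 1;
    }
    out
}

fn main() {
    // A 150-byte read that straddles two 100-byte blocks.
    for s in slices_for_range(50, 199, 100) {
        println!("{s:?}");
    }
}
```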