It’s not yet heavily optimized. The first priority is making sure the rules around accepting such messages and reassembling them work correctly.
Optimization of the implementation itself can come in a later iteration, without changing the public interface.
Plus, any implementation details there will probably be specific to the relay implementation. Happy to optimize mine to be an example to follow, but I don’t imagine it’ll just be a copy/paste to add it to other relays.
The ideas I have for optimization are basically: deconstruct the kind 1064 JSON messages so you can store the blob as binary instead of base64. (This is done transparently on the server. It still has to reconstruct and serve text JSON events when they’re requested, since the Nostr protocol requires it.)
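Here’s a minimal sketch of that decode-on-ingest / re-encode-on-serve idea in Go. The `storedEvent` type and function names are purely illustrative, not from any actual relay codebase:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// Hypothetical minimal view of a stored kind 1064 event: the base64
// content is decoded to raw bytes on ingest and re-encoded on read.
type storedEvent struct {
	ID   string
	Kind int
	Tags [][]string
	Blob []byte // decoded binary payload (replaces the base64 content string)
}

// ingestContent decodes the base64 content field before writing to disk,
// saving roughly 25% of the space the text form would take.
func ingestContent(content string) ([]byte, error) {
	return base64.StdEncoding.DecodeString(content)
}

// serveContent rebuilds the text content the Nostr protocol expects
// when the event is sent back out over a subscription.
func serveContent(blob []byte) string {
	return base64.StdEncoding.EncodeToString(blob)
}

func main() {
	original := base64.StdEncoding.EncodeToString([]byte("example file bytes"))
	blob, err := ingestContent(original)
	if err != nil {
		panic(err)
	}
	ev := storedEvent{ID: "…", Kind: 1064, Blob: blob}
	fmt.Println(serveContent(ev.Blob) == original) // round-trips exactly
}
```

One thing a real implementation would have to check: the re-encoded base64 must match the original content string byte for byte, otherwise the reconstructed event won’t hash to the same id and its signature won’t verify. If the client used non-canonical encoding, the relay would need to fall back to storing the original string.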
You can do that at the event level and save some space, and it’s the simpler implementation. But if you expect people to re-post the same content, an implementation that can reassemble the full file and store it in a content-addressable store would be even better. That gets more complicated with multipart files, though: you can’t decode, verify, and store the whole blob until all of the messages are present.
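For the multipart case, here’s a rough sketch of what the buffering could look like, again in Go with hypothetical names. The assumption that the declared file hash comes from an "x"-style tag and that each part carries an index is mine, not something the spec pins down here:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// pendingFile buffers decoded chunks until every part has arrived;
// only then can the whole blob be verified against its declared hash
// and written to the content-addressable store.
type pendingFile struct {
	declaredHash string   // e.g. from an "x" tag; the tag name is an assumption
	parts        [][]byte // indexed by part number
	received     int
}

func newPendingFile(declaredHash string, totalParts int) *pendingFile {
	return &pendingFile{declaredHash: declaredHash, parts: make([][]byte, totalParts)}
}

// addPart decodes one chunk's base64 content and records it by index.
func (p *pendingFile) addPart(index int, content string) error {
	if index < 0 || index >= len(p.parts) {
		return fmt.Errorf("part index %d out of range", index)
	}
	blob, err := base64.StdEncoding.DecodeString(content)
	if err != nil {
		return err
	}
	if p.parts[index] == nil {
		p.received++
	}
	p.parts[index] = blob
	return nil
}

// tryAssemble returns the verified blob once all parts are present,
// or reports that the file is still incomplete or failed verification.
func (p *pendingFile) tryAssemble() ([]byte, bool, error) {
	if p.received < len(p.parts) {
		return nil, false, nil
	}
	var full []byte
	for _, part := range p.parts {
		full = append(full, part...)
	}
	sum := sha256.Sum256(full)
	if hex.EncodeToString(sum[:]) != p.declaredHash {
		return nil, false, fmt.Errorf("hash mismatch")
	}
	return full, true, nil
}

func main() {
	data := []byte("example payload split across two events")
	h := sha256.Sum256(data)
	pf := newPendingFile(hex.EncodeToString(h[:]), 2)

	// Parts can arrive in any order; nothing is verified until both are in.
	_ = pf.addPart(1, base64.StdEncoding.EncodeToString(data[20:]))
	_ = pf.addPart(0, base64.StdEncoding.EncodeToString(data[:20]))

	full, done, err := pf.tryAssemble()
	fmt.Println(done, err, string(full))
}
```

The upside of keying the final store by the file hash is that a re-post of the same content costs almost nothing extra; the downside is the relay has to hold partially received files somewhere (and expire them) until the last chunk shows up.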