well, based on raw throughput in my bulk decode/encode/decode test, they're all about the same
probably the reason is that my data structure suits protobuf better than benc, so even though protobuf goes through reflection, the overhead doesn't actually make it slower
for this use case they're basically interchangeable
throughput is about 58 MB/s on a single thread for unmarshal from json, encode to binary, decode from binary; about 30 MB/s to do that plus check the ID hash and round-trip through json as well
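for reference, a minimal sketch of how that single-threaded throughput measurement could look (the Event type, the codec functions, and the sample data here are placeholders, not my actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Event is a stand-in for the real runtime event structure.
type Event struct {
	ID      string `json:"id"`
	Kind    int    `json:"kind"`
	Payload []byte `json:"payload"`
}

// binaryCodec is a placeholder for whichever binary codec is under
// test (protobuf, benc, ...).
type binaryCodec struct {
	encode func(*Event) ([]byte, error)
	decode func([]byte, *Event) error
}

// measure runs the json -> binary -> binary pipeline over raw JSON
// inputs on a single goroutine and reports MB/s of JSON consumed.
func measure(raw [][]byte, c binaryCodec) (float64, error) {
	var total int
	start := time.Now()
	for _, j := range raw {
		total += len(j)
		var ev Event
		if err := json.Unmarshal(j, &ev); err != nil { // unmarshal from json
			return 0, err
		}
		b, err := c.encode(&ev) // encode to binary
		if err != nil {
			return 0, err
		}
		var back Event
		if err := c.decode(b, &back); err != nil { // decode from binary
			return 0, err
		}
	}
	secs := time.Since(start).Seconds()
	return float64(total) / (1 << 20) / secs, nil
}

func main() {
	// Trivial placeholder codec (JSON again) just so the sketch runs.
	c := binaryCodec{
		encode: func(e *Event) ([]byte, error) { return json.Marshal(e) },
		decode: func(b []byte, e *Event) error { return json.Unmarshal(b, e) },
	}
	raw := [][]byte{[]byte(`{"id":"a1","kind":1,"payload":"aGk="}`)}
	mbps, err := measure(raw, c)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.1f MB/s\n", mbps)
}
```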
my JSON encoder is the bomb for this shit...
if it weren't for the bigger data size i'd say json is the way to go
in any case, i'm leaving the code for the different binary codecs in there in case it makes sense to come back to them later
the big problem i foresee is that to make it go any faster i'd need to change the runtime data structure of my events to BE the benc-encoded version, which uses slices wrapped in structs. both the protobuf and benc versions have this problem of needing a shim to translate the data structure, and that shim is probably the last bit of overhead (see the sketch below)... so i'm leaving it on protobuf for simplicity
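to illustrate the shim problem, here's a rough sketch with hypothetical shapes (not the actual generated types): the codec-side struct wraps its repeated field in its own wrapper struct, so every encode/decode pass has to copy fields across:

```go
package main

// Runtime shape used by the rest of the engine (hypothetical).
type Event struct {
	ID   [32]byte
	Tags []uint64
}

// Wire shape as a codec generator might emit it, with the slice
// wrapped inside its own struct (hypothetical, standing in for the
// benc/protobuf generated types).
type TagList struct {
	Values []uint64
}

type WireEvent struct {
	ID   []byte
	Tags TagList
}

// toWire / fromWire are the shim: extra allocation and copying on
// every encode/decode, which is the residual overhead described above.
func toWire(e *Event) *WireEvent {
	return &WireEvent{
		ID:   e.ID[:],
		Tags: TagList{Values: e.Tags},
	}
}

func fromWire(w *WireEvent) *Event {
	var e Event
	copy(e.ID[:], w.ID)
	e.Tags = w.Tags.Values
	return &e
}

func main() {
	e := Event{Tags: []uint64{1, 2, 3}}
	_ = fromWire(toWire(&e))
}
```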