Oddbean
 yeah that would save a lot of space! it's just a nice compression mechanism. next step would be to find the probability distribution over words and represent an article as a prefix-coded bit vector, huffman style :P 
 this would effectively be a compressed database. I like this idea a lot. I kinda want to try it... but the database would be completely different.
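The prefix-coded bit vector idea above is classic Huffman coding over a word frequency distribution. A minimal sketch of what that could look like (the corpus and function names here are illustrative, not from any actual Nostr implementation):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free (Huffman) code over the words of `text`."""
    freq = Counter(text.split())
    # Heap entries are (weight, tiebreaker_id, tree); a tree is either a
    # word (leaf) or a (left, right) tuple. The integer tiebreaker keeps
    # the heap from ever comparing two trees directly.
    heap = [(w, i, word) for i, (word, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    if len(heap) == 1:  # degenerate case: one distinct word
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next_id, (t1, t2)))
        next_id += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

def encode(text, codes):
    """Concatenate the per-word codes into one bit string."""
    return "".join(codes[w] for w in text.split())
```

Because the code is prefix-free, the concatenated bits decode unambiguously, and frequent words get codes no longer than rare ones.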

but tbh, most of the storage space seems to come from keys and signatures anyway. i guess you could have a dictionary over pubkeys, but .... eh 
 Yeah, the dictionary of keys and IDs was the first thing I did on Amethyst. It's why our memory use is minimal. There are no duplicated bytearrays anywhere in memory. 
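The dictionary approach described here is essentially interning: every event that references the same pubkey shares one byte array instead of holding its own copy. A rough sketch of the idea (class and method names are made up for illustration, not Amethyst's actual API):

```python
class KeyCache:
    """Intern pubkeys so each key exists once in memory,
    no matter how many events reference it."""

    def __init__(self):
        self._keys = {}

    def intern(self, pubkey_hex):
        """Return the single shared bytes object for this pubkey."""
        key = self._keys.get(pubkey_hex)
        if key is None:
            key = bytes.fromhex(pubkey_hex)
            self._keys[pubkey_hex] = key
        return key
```

Every lookup for the same hex string returns the identical object, so a million events from one author cost one 32-byte key rather than a million copies.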
 Bonus points if you do all of that WHILE also storing the Markdown/AsciiDoc/Nostr note information. 

For instance, the database would store an nprofile word, but you have to load the user, render the user's name as a link, and then run the muted-words procedure against the user's name, not against the nprofile string :)
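That rendering step might look something like this minimal sketch (the lookup table and return convention are assumptions for illustration; a real client would resolve the nprofile via its profile cache):

```python
def render_nprofile(nprofile, users, muted_words):
    """Replace a stored nprofile token with a link showing the user's
    display name, applying the mute check to the *name*, not the
    nprofile string itself."""
    # Resolve the nprofile to a display name; fall back to the raw token.
    name = users.get(nprofile, nprofile)
    # Mute check runs against what the reader actually sees.
    if any(w.lower() in name.lower() for w in muted_words):
        return None  # hidden: a muted word matched the display name
    return f'<a href="nostr:{nprofile}">{name}</a>'
```

The point is that the compressed/stored form and the displayed form diverge, so filters have to run after decompression and resolution, not on the raw database bytes.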