 Same issue for me 100% of the time I try to put it on a network drive, so fucking annoying  
 I have it on a brand new 1TB SSD and it still fucked up. 
 And nothing involving network drives? 
 god no 
 maybe I need to start zfs snapshotting so I can at least roll back to fix leveldb corruption 
 or just put it in an ext4 partition like our ancestors used to do 
 zfs is an abomination from Sun Microsystems' Solaris, whose predecessor SunOS was based on BSD

BSD also has a shitty, similar thing (ufs) that ZFS is a, quote, "improvement" on

you can't use mount, umount, or fsck... it's all some bullshit zpool-this, zfs-that
 have fun when the filesystem itself needs to be accessed outside of the boot environment, because 

for some reason, these BSD (and thus solaris) unixes aren't up to speed on making disk mounting and fixing easy, you know, mount... umount... fsck...

those tools won't help you with zfs or ufs

it's probably the biggest reason why nobody uses BSD or solaris anymore

it's not intuitive at all 
 no, what you need to do is stop the daemon, duplicate its state, and restart it; if it fucks up you just zap the corrupted version, copy the duplicate back, and wait an hour or two and it's back 
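 something like this copy-aside routine, sketched here in Go, assuming the daemon is already stopped (the data dir path is made up):

```go
// copy-aside backup: with the daemon stopped, clone the data dir to a
// timestamped sibling so a corrupted copy can be zapped and swapped later.
// minimal sketch: no symlink handling, and the paths are hypothetical.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"time"
)

func copyTree(src, dst string) error {
	return filepath.Walk(src, func(p string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, p)
		if err != nil {
			return err
		}
		target := filepath.Join(dst, rel)
		if info.IsDir() {
			return os.MkdirAll(target, info.Mode())
		}
		in, err := os.Open(p)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, info.Mode())
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	src := "/var/lib/mydaemon/data" // hypothetical data dir
	dst := src + ".bak-" + time.Now().Format("20060102-150405")
	if err := copyTree(src, dst); err != nil {
		panic(err)
	}
	fmt.Println("state duplicated to", dst)
}
```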
 i use badgerdb to store the actual events as values under keys in my database, but they are rarely over 500KB... bitcoin blocks are typically 2-4MB; honestly, they should not be stored in the DB. the size differential is the age-old stacking problem of slivers and chunks that used to be a big problem with networks until about 10 years ago
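
the put/get pattern with badger (github.com/dgraph-io/badger/v4) looks roughly like this; the key scheme and payload here are made up for illustration:

```go
// events as values under keys in badger: a rough sketch of the pattern.
// the "evt:" key prefix and the payload are invented for this example.
package main

import (
	"fmt"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/events-db"))
	if err != nil {
		panic(err)
	}
	defer db.Close()

	key := []byte("evt:deadbeef")               // hypothetical event id
	val := []byte(`{"kind":1,"content":"hi"}`)  // serialized event, rarely >500KB

	// write: keys land in badger's LSM tree, large values in the value log
	if err := db.Update(func(txn *badger.Txn) error {
		return txn.Set(key, val)
	}); err != nil {
		panic(err)
	}

	// read it back
	err = db.View(func(txn *badger.Txn) error {
		item, err := txn.Get(key)
		if err != nil {
			return err
		}
		v, err := item.ValueCopy(nil)
		if err != nil {
			return err
		}
		fmt.Printf("%s\n", v)
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```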

if i was going to write a database driver for btcd i'd use badger for the indexes, and store the blocks in a flat filesystem named by the block hash... they are too big
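
a rough sketch of that split layout, with made-up names (this is not btcd's actual database driver interface): raw blocks as flat files named by hash, badger holding only the small index entries:

```go
// split layout sketch: raw blocks live as flat files named by their hash,
// while badger keeps only small index entries (hash -> filename here; a
// real driver would also index height, tx locations, etc.). all names are
// hypothetical; this is not btcd's actual database driver interface.
package blockstore

import (
	"encoding/hex"
	"os"
	"path/filepath"

	badger "github.com/dgraph-io/badger/v4"
)

// WriteBlock stores rawBlock under dir/<hex hash> and records the
// hash -> filename mapping in the badger index.
func WriteBlock(db *badger.DB, dir string, hash [32]byte, rawBlock []byte) error {
	name := hex.EncodeToString(hash[:])
	if err := os.WriteFile(filepath.Join(dir, name), rawBlock, 0o644); err != nil {
		return err
	}
	return db.Update(func(txn *badger.Txn) error {
		return txn.Set(append([]byte("blk:"), hash[:]...), []byte(name))
	})
}

// ReadBlock resolves the hash through the index, then reads the flat file.
func ReadBlock(db *badger.DB, dir string, hash [32]byte) ([]byte, error) {
	var name string
	err := db.View(func(txn *badger.Txn) error {
		item, err := txn.Get(append([]byte("blk:"), hash[:]...))
		if err != nil {
			return err
		}
		v, err := item.ValueCopy(nil)
		if err != nil {
			return err
		}
		name = string(v)
		return nil
	})
	if err != nil {
		return nil, err
	}
	return os.ReadFile(filepath.Join(dir, name))
}
```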

or, it might work with badger as-is, because badger actually stacks the values in one set of files (the value log) and keeps the keys in another (the LSM tree)...

anyway, my point here is that bitcoin's leveldb directory mutates so much you can't really snapshot it properly. they randomly rewrite half the dataset for whatever reason, so when you use `cp` with `-rfvpu`, which retains the perms and only copies files that have changed... it still copies all of the files, because all of them have changed. so, yeah

it's dumb because bitcoin's values are mostly just the blocks, and the indexes are mostly just keys, so having them separated would actually make backups time-effective instead of a colossal pain in the ass 
 lol, the events are stored in values, the indexes in keys

and yes, this split would instantly solve that backup problem for bitcoin

if i had any spare time i'd make a PR to create a btcd database driver using badger, because it would probably fix most of its slow IBD (initial block download) 
 also, most SSDs now return an error instead of garbage most of the time, if they detect a checksum error