I have used Go to write complex dynamic adjustment algorithms that compute EMAs and the like over prescribed time windows of data. Structs are quite memory efficient and can represent graphs nicely. Go indexes arrays with `int` (64-bit on most platforms), but a 32-bit integer is enough to index any array under ~4 billion elements, so you can store `uint32` indices in place of 64-bit pointers in your data structures, and quite efficiently represent lookup tables between longer values and these serial values.
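As a rough sketch of the EMA part: the function below is my own minimal illustration, not code from the post; the `alpha = 2/(N+1)` convention for an N-sample window is an assumption, and it uses the `float32` precision mentioned later in the thread.

```go
package main

import "fmt"

// ema applies an exponential moving average over a series of samples.
// alpha is the smoothing factor; a common convention (assumed here)
// is alpha = 2/(N+1) for an N-sample window.
func ema(samples []float32, alpha float32) []float32 {
	out := make([]float32, len(samples))
	if len(samples) == 0 {
		return out
	}
	out[0] = samples[0]
	for i := 1; i < len(samples); i++ {
		// Each output blends the new sample with the running average.
		out[i] = alpha*samples[i] + (1-alpha)*out[i-1]
	}
	return out
}

func main() {
	fmt.Println(ema([]float32{1, 2, 3, 4}, 0.5)) // [1 1.5 2.25 3.125]
}
```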
To do really big stuff you have to go with pointers, but if your graph will have under 4 billion nodes you can do it more efficiently with arrays.
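A minimal sketch of what the index-instead-of-pointer layout looks like (my own illustration, not code from the post): each edge is a 4-byte `uint32` slot into one flat node slice, rather than an 8-byte pointer, which works as long as the graph stays under 2^32 nodes.

```go
package main

import "fmt"

// Node links to its neighbors by 32-bit slice index instead of by
// 64-bit pointer, halving the size of each edge entry.
type Node struct {
	Value float32
	Edges []uint32 // indices into Graph.Nodes
}

type Graph struct {
	Nodes []Node
}

// AddNode appends a node and returns its index (the "serial value").
func (g *Graph) AddNode(v float32) uint32 {
	g.Nodes = append(g.Nodes, Node{Value: v})
	return uint32(len(g.Nodes) - 1)
}

// Connect links two nodes in both directions, no pointers involved.
func (g *Graph) Connect(a, b uint32) {
	g.Nodes[a].Edges = append(g.Nodes[a].Edges, b)
	g.Nodes[b].Edges = append(g.Nodes[b].Edges, a)
}

func main() {
	var g Graph
	a := g.AddNode(1.0)
	b := g.AddNode(2.0)
	g.Connect(a, b)
	fmt.Println(g.Nodes[a].Edges, g.Nodes[b].Edges) // [1] [0]
}
```

A side benefit of this layout is that the whole graph lives in one contiguous slice, so it can be copied or serialized wholesale without chasing pointers.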
I'm pretty sure this is along the lines of how 3D rasterization mesh graphs are represented in memory. I know for sure that 32-bit integers are the main thing you deal in on a GPU (they only have 32-bit integer dividers, for example).
I'd say for the most part, 32-bit array indexes for nodes and 32-bit floats will be plenty of precision for your algorithms, and it's really easy to do this in Go.
Also, just to explain something: what grows dynamically in Go is the slice, not the array. A slice is backed by a contiguous array laid out the same way as in C, but `append` takes care of the resizing for you. If a slice grows a lot, though, you save a lot of time shuffling memory by pre-allocating capacity with `make` instead of relying on the default growth (which is roughly a doubling cycle for small slices, IIRC).
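A quick sketch of the pre-allocation point (my own example, not from the post): the first loop lets `append` reallocate and copy as capacity runs out, while the second reserves all the capacity up front with `make`, so no reallocation ever happens.

```go
package main

import "fmt"

func main() {
	const n = 1_000_000

	// Default growth: append reallocates and copies the backing array
	// whenever capacity runs out.
	var grown []int
	for i := 0; i < n; i++ {
		grown = append(grown, i)
	}

	// Pre-allocated: length 0, capacity n, so every append is just a
	// write into memory that already exists.
	pre := make([]int, 0, n)
	for i := 0; i < n; i++ {
		pre = append(pre, i)
	}

	fmt.Println(len(grown), len(pre), cap(pre)) // 1000000 1000000 1000000
}
```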
Thanks sir, even though I didn't follow all of it, it's definitely a useful suggestion.