Doing as little as possible in the runtime makes a lot of sense. Less overhead, less to maintain, etc.
As for freshness, I was originally thinking latency-sensitive function calls might pose a problem if the process crashes, recovers seconds later, and responds with data that's no longer relevant. Given these are pure functions, it makes total sense to just pass along the timestamp (or increment a counter); these considerations are going to be app-dependent anyway.
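To make that concrete, here's a minimal Haskell sketch of the timestamp idea (all names here are hypothetical, not Pallas APIs): the caller stamps each request, and the pure handler judges freshness from its arguments alone, so a process that crashes and replays the call later reaches the same verdict.

    -- Hypothetical sketch of "pass the timestamp along": the caller stamps
    -- each request and the pure handler decides staleness from its arguments
    -- alone. None of these names come from Pallas.

    data Request = Request
      { issuedAtMs :: Integer  -- wall-clock time stamped by the caller
      , payload    :: String
      }

    data Response = Fresh String | Stale
      deriving Show

    -- Pure: replaying this after a crash with the same inputs yields the
    -- same result, so a recovered process can detect work that is too old.
    handle :: Integer -> Integer -> Request -> Response
    handle nowMs maxAgeMs req
      | nowMs - issuedAtMs req > maxAgeMs = Stale
      | otherwise                         = Fresh ("ok: " ++ payload req)

    main :: IO ()
    main = do
      print (handle 10000 5000 (Request 9500 "hello"))  -- Fresh "ok: hello"
      print (handle 10000 5000 (Request 2000 "hello"))  -- Stale

A monotonic counter works the same way: swap the timestamp for a sequence number and reject anything at or below the last one seen.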
I’m really excited about this project and plan to experiment with it in the near future. The docs are well written too. Thank you for the work and the thoughtful response!
Makes sense!
Thanks for the kind words; we've done a big push on documentation lately, so it's nice to see it paying off :)
What's your technical background? Functional programming? Operating systems? Blockchains? Kernel work? All / none of the above?
Feel free to message me on Nostr or at vinney@vaporware.network if you ever have technical questions or want to bounce ideas. We also have a Telegram linked on our site and GitHub.
My degree is in applied math, so I've had to lazy-load a CS degree over the years. I've been a backend engineer my whole career, though: at times doing machine learning, but these days it's mostly "medium data" (2-3 PB) pipelines and storage solutions. No kernel work, although I'd like to get my hands dirty with eBPF and the like.
I'll definitely be reaching out as I play with Pallas! My dream is to have something like this as the foundation for decentralized AI, where, for example, anyone could download a binary and contribute to training and/or running a very large LLM over Nostr. Perhaps a combination of OpenDiLoCo [1] and Pallas could work. The idea is that a world where millions of people collaborate to train and run frontier models is likely better than one where only a couple of companies can afford the compute.
[1] https://github.com/PrimeIntellect-ai/OpenDiLoCo
Sounds like we're extraordinarily aligned :D