I wonder if you can plug it into llama on nostr:npub1aghreq2dpz3h3799hrawev5gf5zc2kt4ch9ykhp9utt0jd3gdu2qtlmhct or nostr:npub126ntw5mnermmj0znhjhgdk8lh2af72sm8qfzq48umdlnhaj9kuns3le9ll on a system with some GPUs and use it to train on data. I have not looked at llama or how it is implemented on these systems, but if it is based on llama and faster, I bet there is a good use case for optionally running it. Do these installs allow training, or are they just for running models?
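On the training-vs-running question: my understanding is that llama.cpp-style installs are inference engines, so a GPU box running one serves a model rather than updates its weights; fine-tuning usually happens in a separate training stack, with the result converted back for serving. A minimal sketch of the inference side only, assuming llama-cpp-python and a local GGUF file (I have no idea what these particular installs actually expose, and the model path here is a placeholder):

```python
# Sketch, not the actual setup behind those npubs: assumes llama-cpp-python
# is installed and a quantized GGUF model sits at the placeholder path.
from llama_cpp import Llama

# Inference: load the model and offload all layers to the GPU.
llm = Llama(model_path="./models/model.gguf", n_gpu_layers=-1)

out = llm("Summarize the latest notes about llama.cpp:", max_tokens=64)
print(out["choices"][0]["text"])

# Training is a separate concern: this runtime only generates from fixed
# weights. Fine-tuning on your own data would normally go through a
# different toolchain (e.g. PyTorch + LoRA), then get converted to GGUF.
```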