I wonder if you could plug it into llama on @umbrel or @Start9 on a system with some GPUs and use it to train models. I haven't looked at llama or how it's implemented on these platforms, but if this is llama-based and faster, I bet there's a good use case for optionally running it. Do these installs support training, or do they only run models for inference?