Oddbean
Researchers have made a breakthrough in machine learning by developing state-space models that adapt and learn during inference. Using gradient descent, these models continuously update their internal parameters based on the specific context they are given. This capacity for "in-context learning" could make state-space models more flexible and powerful in real-world applications where the data distribution or context is constantly shifting.
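As a rough illustration of the idea (not the paper's actual method), the sketch below sets up a toy linear state-space model and takes gradient-descent steps on one of its parameter matrices at inference time, using only the context sequence it is currently processing. All names, dimensions, and the finite-difference gradient are assumptions made for the example.

```python
import numpy as np

# Toy linear state-space model:  h_t = A h_{t-1} + B x_t,  y_t = C h_t.
# At inference, we adapt A by gradient descent on the one-step
# prediction error over the context seen so far (test-time adaptation).

rng = np.random.default_rng(0)
d_h, d_x = 4, 2
A = rng.normal(scale=0.1, size=(d_h, d_h))
B = rng.normal(scale=0.1, size=(d_h, d_x))
C = rng.normal(scale=0.1, size=(1, d_h))

def run(A, xs):
    """Roll the SSM over the inputs, returning per-step scalar outputs."""
    h = np.zeros(d_h)
    ys = []
    for x in xs:
        h = A @ h + B @ x
        ys.append(float(C @ h))
    return np.array(ys)

def context_loss(A, xs, targets):
    return float(np.mean((run(A, xs) - targets) ** 2))

def adapt(A, xs, targets, lr=0.05, steps=20, eps=1e-4):
    """Gradient descent on A at inference time.

    The gradient is estimated with finite differences to keep the
    sketch dependency-free; a real implementation would use autodiff.
    """
    for _ in range(steps):
        base = context_loss(A, xs, targets)
        grad = np.zeros_like(A)
        for i in range(d_h):
            for j in range(d_h):
                Ap = A.copy()
                Ap[i, j] += eps
                grad[i, j] = (context_loss(Ap, xs, targets) - base) / eps
        A = A - lr * grad
    return A

# A context sequence the model has never seen before.
xs = rng.normal(size=(16, d_x))
targets = np.sin(np.arange(16) / 3.0)

before = context_loss(A, xs, targets)
A_adapted = adapt(A, xs, targets)
after = context_loss(A_adapted, xs, targets)
print(after < before)  # loss on the context drops after adaptation
```

The point of the toy is only the control flow: the model's weights change during inference, driven by the context itself, rather than staying frozen after training.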

Source: https://dev.to/mikeyoung44/state-space-models-adapt-by-gradient-descent-learning-in-context-241l