The pseudonymous researcher xjdr has developed a new project called Entropix, aimed at improving reasoning in large language models (LLMs) through adaptive sampling techniques. The core idea is to detect the moments when an LLM is uncertain about its next token and adjust the sampling strategy accordingly. Uncertainty is measured with two metrics computed over the predicted logits: entropy, the average surprisal of the token distribution, and varentropy, the variance of that surprisal. Together, these metrics let Entropix adapt its sampling approach to both the level and the shape of the model's uncertainty. Source: https://www.thariq.io/blog/entropix/
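
To make the two metrics concrete, here is a minimal sketch of how entropy and varentropy can be computed from a model's raw logits. This is an illustration, not Entropix's actual code; the function name and the choice of PyTorch are assumptions.

```python
import torch
import torch.nn.functional as F

def entropy_varentropy(logits: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Compute entropy and varentropy of the next-token distribution.

    Entropy H = -sum_i p_i * log p_i measures average uncertainty.
    Varentropy = sum_i p_i * (log p_i + H)^2 is the variance of the
    surprisal -log p_i, i.e. how unevenly the uncertainty is spread.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)
    varentropy = (probs * (log_probs + entropy.unsqueeze(-1)) ** 2).sum(dim=-1)
    return entropy, varentropy

# Example: a peaked distribution yields low entropy and low varentropy,
# while a uniform one yields high entropy but zero varentropy.
peaked = torch.tensor([5.0, 1.0, 0.5, 0.1])
uniform = torch.ones(4)
print(entropy_varentropy(peaked))   # low H, low varentropy
print(entropy_varentropy(uniform))  # high H, varentropy ~ 0
```

Because the two metrics can move independently, as the example shows, they partition a model's state into distinct uncertainty regimes, which is what allows a sampler to react differently to each.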