**Breakthrough in AI Training Efficiency: Kohya's Improvements to FLUX LoRA and DreamBooth / Fine-Tuning Training**

Kohya has made significant advancements in AI training efficiency, bringing massive improvements to FLUX LoRA and DreamBooth / Fine-Tuning training. Decent-quality FLUX LoRA training now works on GPUs with as little as 4 GB of VRAM, DreamBooth / Fine-Tuning requires a minimum of 6 GB, and full DreamBooth / Fine-Tuning training gets a large speed boost on GPUs with 24 GB of VRAM or less.
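For context, this kind of low-VRAM FLUX training is done through Kohya's sd-scripts, which reduces memory pressure with options such as FP8 base weights and swapping transformer blocks to CPU RAM. Below is a minimal, hypothetical sketch of such an invocation; the entry point name, flag names, and values are assumptions based on the sd-scripts FLUX branch, not details stated in the source, so verify them against `--help` before use.

```shell
# Hypothetical low-VRAM FLUX LoRA run with kohya's sd-scripts (FLUX branch).
# File names and flag values are placeholders; check
# `python flux_train_network.py --help` for the actual options.
accelerate launch flux_train_network.py \
  --pretrained_model_name_or_path flux1-dev.safetensors \
  --clip_l clip_l.safetensors \
  --t5xxl t5xxl.safetensors \
  --ae ae.safetensors \
  --network_module networks.lora_flux \
  --network_dim 16 \
  --fp8_base \
  --blocks_to_swap 30 \
  --gradient_checkpointing \
  --cache_latents_to_disk \
  --cache_text_encoder_outputs \
  --dataset_config dataset.toml \
  --output_dir out \
  --output_name my_flux_lora
```

Here `--fp8_base` keeps the base model weights in FP8, and `--blocks_to_swap` offloads a number of transformer blocks to CPU RAM during training, which is the main mechanism that makes very small VRAM budgets workable (at the cost of speed).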


Source: https://dev.to/furkangozukara/kohya-brought-massive-improvements-to-flux-lora-4-gb-gpus-and-dreambooth-fine-tuning-6-gb-pmb