LoRA

Low-Rank Adaptation of Large Language Models is a training method for efficiently fine-tuning models derived from larger base models.

It speeds up training and consumes less memory than full fine-tuning when dealing with large models.

It adds pairs of rank-decomposition weight matrices, aka update matrices, alongside the existing weights, and trains only those newly added weights while the original model stays frozen.
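The idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not a full implementation: the class name `LoRALinear` and the `rank`/`alpha` parameters are illustrative, but the structure matches the description above, with a frozen base layer plus a trainable low-rank update B·A.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen Linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        # Freeze the original weights; only A and B will be trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Rank-decomposition pair: B @ A has the same shape as the base weight.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the scaled low-rank update.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(16, 8), rank=4)
out = layer(torch.randn(2, 16))
```

Because only `A` and `B` are trainable, the number of trainable parameters is `rank * (in_features + out_features)` instead of `in_features * out_features`, which is where the speed and memory savings come from.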

Works best for concrete subjects like faces!
For broader themes or styles, try training a full Checkpoint instead.

Benefits: