ComfyUI — LoRAs
What they are and how to use them
What is a LoRA?
- LoRA = Low-Rank Adaptation: a small add-on that nudges a base model toward a style or subject (see the low-rank sketch after this list).
- Tiny files vs full checkpoints; stackable; adjustable strength per use.
- Place files in ComfyUI/models/loras.
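The "low-rank" part is the whole trick: instead of shipping a new full weight matrix W, a LoRA ships two skinny matrices A and B whose product is added on top of the frozen base. A minimal NumPy sketch with stand-in shapes and values (illustrative only, not real model weights):

import numpy as np

d, r = 768, 8                      # layer width and LoRA rank (r << d)
W = np.random.randn(d, d)          # frozen base weight (stand-in values)
A = np.random.randn(r, d) * 0.01   # LoRA "down" projection
B = np.random.randn(d, r) * 0.01   # LoRA "up" projection
alpha, strength = 8.0, 0.8         # trained scale and user strength knob

# Effective weight at inference: base plus the scaled low-rank update.
W_eff = W + strength * (alpha / r) * (B @ A)

# A LoRA file stores only A and B: 2*d*r values vs d*d for the full layer.
print(f"LoRA params: {A.size + B.size:,} vs full layer: {W.size:,}")

This is also why LoRA files are tiny and why strength is a free knob at use time: the update is just scaled before it is added.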
Using LoRAs in ComfyUI (stock nodes)
- Load base model (SD 1.5 or SDXL).
- Add the stock “Load LoRA” node (class LoraLoader): wire the checkpoint's MODEL and CLIP outputs through it.
- Set strengths: strength_model (UNet, the main effect) and strength_clip (text-encoder influence). Start around 0.6–0.8.
- Encode prompts with CLIPTextEncode as usual; sample with KSampler (a minimal API-format sketch of this wiring follows).
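For reference, the same wiring expressed in ComfyUI's API workflow format (what “Save (API Format)” produces with dev mode enabled), written here as a Python dict. The node IDs, checkpoint, and LoRA filename are placeholders; verify field names against your own exported workflow:

workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "ponyXL.safetensors",
                     "strength_model": 0.8,   # UNet strength
                     "strength_clip": 0.8,    # text-encoder strength
                     "model": ["1", 0],       # MODEL output of node 1
                     "clip": ["1", 1]}},      # CLIP output of node 1
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "masterpiece, portrait, warm light",
                     "clip": ["2", 1]}},      # CLIP after the LoRA is applied
}

Downstream nodes (KSampler, VAE decode) take the LoRA-patched MODEL from node 2, not the raw checkpoint output.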
Inline prompt syntax (A1111 style) vs ComfyUI
A1111/Forge parses inline tags like <lora:name:0.8> directly in the prompt text. Stock ComfyUI does not parse these tags; use the LoRA nodes instead. Some community node packs can parse A1111-style prompts, but they are optional (a toy version of that parsing is sketched after the example below).
# A1111-style (not parsed by stock ComfyUI):
masterpiece, portrait, <lora:ponyXL:0.8>, warm light
# ComfyUI approach:
[Load Checkpoint] -> [Load LoRA (ponyXL, strength=0.8)] -> model/clip -> [CLIPTextEncode(prompt)] -> [KSampler]
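To make the tag format concrete, here is what a community parser node does in spirit: one regex pass that pulls out (name, weight) pairs and strips the tags from the prompt. An illustrative sketch, not any specific node's code:

import re

# Matches <lora:name:weight>; the weight defaults to 1.0 when omitted.
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([\d.]+))?>")

def split_prompt(prompt: str):
    """Return (clean_prompt, [(lora_name, weight), ...])."""
    loras = [(m.group(1), float(m.group(2) or 1.0))
             for m in LORA_TAG.finditer(prompt)]
    clean = re.sub(r"\s*,\s*,", ",", LORA_TAG.sub("", prompt)).strip(" ,")
    return clean, loras

print(split_prompt("masterpiece, portrait, <lora:ponyXL:0.8>, warm light"))
# -> ('masterpiece, portrait, warm light', [('ponyXL', 0.8)])

The extracted pairs would then be fed into LoRA loader nodes; the cleaned prompt goes to the text encoder.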
Tips
- Match LoRA base (SD 1.5 vs SDXL) to your checkpoint.
- Too strong = artifacts or overbaked style; tune UNet and CLIP weights separately.
- Multiple LoRAs: chain loader nodes in sequence; balance strengths to avoid conflicts (see the stacking sketch after this list).
- Some LoRAs expect trigger words, specific VAEs, or negative prompts (check the model notes).
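Outside ComfyUI, the same stacking idea is easy to see in diffusers with the PEFT backend; a rough sketch, where the LoRA file paths, weights, and prompt are placeholders:

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two LoRAs under named adapters, then blend them with per-adapter weights.
pipe.load_lora_weights("loras/style.safetensors", adapter_name="style")
pipe.load_lora_weights("loras/subject.safetensors", adapter_name="subject")
pipe.set_adapters(["style", "subject"], adapter_weights=[0.7, 0.5])

image = pipe("portrait, warm light", num_inference_steps=30).images[0]

Chaining Load LoRA nodes in ComfyUI composes the same way: each node adds its scaled update on top of the already-patched model.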
Train your own (quick view)
- Data: 20–200+ images; caption them (filename or .txt sidecars).
- Base: pick SD 1.5 or SDXL to match your target usage.
- Tooling: kohya_ss (GUI), diffusers+PEFT, or ComfyUI LoRA training workflows.
- Key knobs: rank/dim, learning rate, epochs, resolution, repeats (see the LoraConfig sketch after this list).
- Validate with sample generations frequently; stop at the checkpoint that generalizes without overfitting.
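To make the knobs concrete, this is how rank and alpha appear in a diffusers+PEFT setup. A sketch only: the target_modules names follow the common Stable Diffusion attention naming and can differ per model.

from peft import LoraConfig

# r (rank) and lora_alpha set the size and scale of the low-rank update;
# target_modules picks which UNet layers receive LoRA weights.
unet_lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
# Attach to a diffusers UNet before training: unet.add_adapter(unet_lora_config)

Higher rank means more capacity (and a bigger file); kohya_ss exposes the same ideas as network_dim and network_alpha.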
Resources
- CivitAI — LoRAs by style/subject (check licenses).
- kohya_ss — popular LoRA training toolkit.
- Hugging Face — datasets, LoRA repos, guides.
- ComfyUI-Manager — install nodes/utilities that add LoRA helpers.