### Describe the bug
The latest advancement in Wan has been self-forcing LoRAs, which make it possible to get extremely good results in just 4 steps.

Although the ComfyUI community has successfully used the Lightx2v CFG step-distill Wan2.1 LoRAs on Wan2.2, I can't apply them to the transformers in any way.

The currently suggested combination is to apply the same LoRA with a weight of 3.0 to the high-noise transformer and 1.5 to the low-noise transformer. Neither `prefix="transformer"` nor `prefix=None` works:
```python
# Load the LoRA on the high-noise transformer
transformer_high_noise.load_lora_adapter(lora_path, prefix="transformer")
transformer_high_noise.set_adapters(["default"], weights=[3.0])

# Load the same LoRA on the low-noise transformer
transformer_low_noise.load_lora_adapter(lora_path, prefix="transformer")
transformer_low_noise.set_adapters(["default"], weights=[1.5])
```
With `prefix="transformer"` this yields:

```
No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any WanTransformer3DModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
```

and with `prefix=None` it crashes with the `IndexError` shown in the Logs section below.
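As a first debugging step, here is a minimal sketch for inspecting which key prefixes the checkpoint actually contains (this assumes only that the file is a plain safetensors state dict; the `diffusion_model.` prefix mentioned in the comment is a guess at the ComfyUI export layout, not verified):

```python
# Debugging sketch: list the top-level key prefixes in the downloaded LoRA
# to see why neither prefix="transformer" nor prefix=None matches anything.
from safetensors.torch import load_file

state_dict = load_file(lora_path)
prefixes = sorted({key.split(".")[0] for key in state_dict})
print(prefixes)  # ComfyUI-style exports may use e.g. "diffusion_model."
```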
### Reproduction
```python
import torch
from diffusers import WanTransformer3DModel
from huggingface_hub import hf_hub_download

# Load a basic transformer model
transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# Download the Lightx2v CFG step-distill LoRA
lora_path = hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",
    filename="Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors",
)

transformer.load_lora_adapter(lora_path, prefix=None)
```

### Logs

```
Traceback (most recent call last):
File "/home/luca/video/wan2-2-i2v-a14b/test_lora_crash.py", line 18, in <module>
transformer.load_lora_adapter(lora_path, prefix=None)
File "/home/luca/video/wan2-2-i2v-a14b/.venv/lib/python3.12/site-packages/diffusers/loaders/peft.py", line 253, in load_lora_adapter
lora_config = _create_lora_config(
^^^^^^^^^^^^^^^^^^^^
File "/home/luca/video/wan2-2-i2v-a14b/.venv/lib/python3.12/site-packages/diffusers/utils/peft_utils.py", line 320, in _create_lora_config
lora_config_kwargs = get_peft_kwargs(
^^^^^^^^^^^^^^^^
File "/home/luca/video/wan2-2-i2v-a14b/.venv/lib/python3.12/site-packages/diffusers/utils/peft_utils.py", line 158, in get_peft_kwargs
r = lora_alpha = list(rank_dict.values())[0]
~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
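The `IndexError` occurs because no keys survive the prefix filtering, leaving `rank_dict` in `get_peft_kwargs` empty. A minimal workaround sketch, assuming the checkpoint uses a ComfyUI-style `diffusion_model.` prefix with PEFT-style `lora_A`/`lora_B` tensor names (both are assumptions; Kijai's exports may instead use `lora_down`/`lora_up` naming, which would need a full format conversion rather than a simple rename):

```python
# Hypothetical workaround sketch (untested): rename ComfyUI-style keys to the
# "transformer." prefix that load_lora_adapter expects, then pass the dict
# directly (load_lora_adapter accepts an in-memory state dict as well as a path).
from safetensors.torch import load_file

raw = load_file(lora_path)
remapped = {
    # "diffusion_model." is an assumed ComfyUI prefix, not verified
    key.replace("diffusion_model.", "transformer."): value
    for key, value in raw.items()
}
transformer.load_lora_adapter(remapped, prefix="transformer")
```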
### System Info

- 🤗 Diffusers version: 0.35.0.dev0
- Platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.12
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.34.3
- Transformers version: 4.55.0.dev0
- Accelerate version: 1.8.1
- PEFT version: 0.16.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB (×8)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no