Description
Describe the bug
According to the loading LoRAs for inference guide, an argument `cross_attention_kwargs={"scale": 0.5}` can be added to a `pipeline()` call to vary the impact of a LoRA on image generation. Since the `FluxPipeline` class doesn't support this argument, I followed the guide here to embed the text prompt with a LoRA scaling parameter instead. However, the image remained unchanged with a fixed seed and prompt and a varying `lora_scale`. I checked the embedding values for different values of `lora_scale` and saw that they did not change either. Does Flux in diffusers not support LoRA scaling, or am I missing something?
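For reference, this is roughly the pattern the guide describes for SD/SDXL pipelines, which `FluxPipeline` does not accept. It is only a sketch for comparison; the SDXL base model id below is illustrative and not part of my setup:

```python
# Sketch of the guide's per-call LoRA scaling for an SDXL pipeline (illustrative only;
# FluxPipeline.__call__ has no cross_attention_kwargs argument).
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base model for illustration
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors")

# The LoRA influence is scaled per call via cross_attention_kwargs
image = pipe(
    "toy_face of a hacker with a hoodie",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```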
Reproduction
from diffusers import FluxPipeline
import torch
from PIL import Image

model_path = "black-forest-labs/FLUX.1-dev"
lora_path = "CiroN2022/toy-face"
weight_name = "toy_face_sdxl.safetensors"
device = "cuda"

seed = torch.manual_seed(0)

# from_pretrained takes the model id/path as its first positional argument
pipeline = FluxPipeline.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
).to(device)

pipeline.load_lora_weights(
    lora_path,
    weight_name=weight_name,
)

prompt = "toy_face of a hacker with a hoodie"
lora_scale = 0.5

# Encode the prompt with a LoRA scale, following the linked guide
prompt_embeds, pooled_prompt_embeds, _ = pipeline.encode_prompt(
    prompt=prompt,
    prompt_2=None,
    lora_scale=lora_scale,
)

image = pipeline(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    num_inference_steps=10,
    guidance_scale=5,
    generator=seed,
).images[0]
image.show()

Logs
No response
System Info
- 🤗 Diffusers version: 0.30.3
- Platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.10.15
- PyTorch version (GPU?): 2.3.1+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.23.4
- Transformers version: 4.44.0
- Accelerate version: 0.33.0
- PEFT version: 0.12.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.3
- xFormers version: not installed
- Accelerator: NVIDIA A100 80GB PCIe, 81920 MiB
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Who can help?
No response