Describe the bug
Like cross_attention_kwargs in the UNet, I want to modify the attention processors of the FLUX model and pass extra parameters through joint_attention_kwargs, which is already wired up in FluxPipeline:
```python
noise_pred = self.transformer(
    hidden_states=latents,
    timestep=timestep / 1000,
    guidance=guidance,
    pooled_projections=pooled_prompt_embeds,
    encoder_hidden_states=prompt_embeds,
    txt_ids=text_ids,
    img_ids=latent_image_ids,
    joint_attention_kwargs=self.joint_attention_kwargs,  # here
    return_dict=False,
)[0]
```

But it doesn't work. I read the source code and found that joint_attention_kwargs is not passed to the inner blocks of the transformer.
Looking at how FluxTransformer2DModel.forward invokes its transformer blocks, we can see that joint_attention_kwargs is missing from the call!
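For reference, the block invocation in the 0.31.0.dev0 source looks roughly like this (paraphrased, so the exact code may differ slightly):

```python
# diffusers/models/transformers/transformer_flux.py (paraphrased)
for index_block, block in enumerate(self.transformer_blocks):
    encoder_hidden_states, hidden_states = block(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        temb=temb,
        image_rotary_emb=image_rotary_emb,
        # joint_attention_kwargs is never forwarded to the blocks here
    )
```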
Reproduction
Not strictly needed, but a minimal sketch of what I'm trying to do follows.
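This assumes set_attn_processor is available on the Flux transformer (as it is on the UNet models); the MyFluxAttnProcessor subclass and the my_scale kwarg are illustrative, not part of diffusers:

```python
import torch
from diffusers import FluxPipeline
from diffusers.models.attention_processor import FluxAttnProcessor2_0

class MyFluxAttnProcessor(FluxAttnProcessor2_0):
    # Illustrative processor: accepts an extra kwarg that should arrive
    # via joint_attention_kwargs, then defers to the stock processor.
    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, image_rotary_emb=None, my_scale=None):
        print("my_scale =", my_scale)  # stays None: the kwarg never arrives
        return super().__call__(
            attn, hidden_states,
            encoder_hidden_states=encoder_hidden_states,
            attention_mask=attention_mask,
            image_rotary_emb=image_rotary_emb,
        )

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.transformer.set_attn_processor(MyFluxAttnProcessor())

# my_scale should reach the processor, but it is dropped inside
# FluxTransformer2DModel.forward before the blocks are called.
image = pipe("a cat", joint_attention_kwargs={"my_scale": 0.5}).images[0]
```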
Logs
No response
System Info
diffusers 0.31.0.dev0
Who can help?