Enable dreambooth lora finetune example on other devices #10602
Conversation
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
LGTM
Hi @sayakpaul. Could you please review this PR? Thanks!
Thanks for the contribution! I just left some comments; LMK if they make sense.
```python
torch.cuda.empty_cache()
if hasattr(torch, "xpu") and torch.xpu.is_available():
    torch.xpu.empty_cache()
```
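The backend checks above can be folded into one device-agnostic helper. This is a minimal sketch (the helper name `empty_device_cache` is hypothetical, not part of the PR), following the same pattern of probing each backend before calling its `empty_cache()`:

```python
import torch

def empty_device_cache() -> None:
    """Release cached allocator memory on whichever accelerator is present.

    On CUDA machines this calls torch.cuda.empty_cache(); on Intel XPU
    machines it calls torch.xpu.empty_cache(); on CPU-only machines it
    is a no-op, so the same call site works everywhere.
    """
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
```

Centralizing the check keeps the training loop free of per-backend `if` branches.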
Same as above.
Hi @sayakpaul, I have addressed your comments. For mixed precision, I am not sure, because we have
Just one comment and we should be good to go.
Have you verified if it works effectively? If so, could you share some results?
What do you mean by this? I don't see the dog from the training set appear here. Am I missing something?
Sorry, I didn't upload the CUDA result image successfully. I will run again to get the result and give you feedback ASAP.
Hi @sayakpaul, here is the CUDA result, run on 2×A100 cards.
And how about the XPU result?
Nice, this is good.
The failing test seems unrelated to my changes. Please let me know what needs to be changed, or request other reviewers, before merging. Thanks!
Thanks much!
This PR mainly changed two points to enable the example on other devices:
- Replaced `torch.cuda.amp.autocast()` with `torch.amp.autocast(device)` so other devices can also use it
- Emptied the accelerator cache via the matching backend API (e.g. `torch.xpu.empty_cache()` on XPU) instead of assuming CUDA
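To illustrate the autocast change, here is a minimal sketch (the model and tensor shapes are made up for illustration): `torch.amp.autocast` takes a `device_type` string, so the same code path covers CUDA, XPU, and CPU, unlike the CUDA-only `torch.cuda.amp.autocast()`.

```python
import torch

# Resolve the device type once; everything downstream is device-agnostic.
if torch.cuda.is_available():
    device_type = "cuda"
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    device_type = "xpu"
else:
    device_type = "cpu"

model = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)

# torch.amp.autocast(device_type, ...) replaces torch.cuda.amp.autocast();
# bfloat16 is used here because it is supported on CPU and XPU as well.
with torch.amp.autocast(device_type, dtype=torch.bfloat16):
    y = model(x)
```

The only device-specific code left is the one-time `device_type` detection; the training loop itself no longer needs to branch per backend.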