Move torch.logspace to ATen and parallelize on CPU. #15438
Conversation
@gchanan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Performance comparisons: New:
```cpp
AT_CHECK(steps >= 0, "number of steps must be non-negative");

if (result.numel() != steps) {
  result.resize_({steps});
}
```
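To make the shape of the new kernel concrete, here is a minimal standalone C++ sketch of a parallelized logspace fill: compute `out[i] = base^(start + i*step)` over the index range, split across worker threads. This is an illustration only, not the ATen code — the real implementation uses ATen's internal CPU parallelization helpers rather than raw `std::thread`, and the two-thread split below is a fixed choice for the sketch.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <thread>
#include <vector>

// Sketch of a logspace kernel: fill out[i] = pow(base, start + i * step),
// splitting the index range [0, steps) across threads.
std::vector<double> logspace(double start, double end, int64_t steps,
                             double base = 10.0) {
  assert(steps >= 0 && "number of steps must be non-negative");
  std::vector<double> out(steps);
  if (steps == 0) return out;
  if (steps == 1) {
    out[0] = std::pow(base, start);
    return out;
  }
  const double step = (end - start) / (steps - 1);

  // Each worker fills a contiguous chunk; chunks are disjoint, so no
  // synchronization is needed beyond the final join.
  auto fill = [&](int64_t begin, int64_t last) {
    for (int64_t i = begin; i < last; ++i) {
      out[i] = std::pow(base, start + i * step);
    }
  };

  const int64_t nthreads = 2;  // fixed thread count, just for the sketch
  const int64_t chunk = (steps + nthreads - 1) / nthreads;
  std::vector<std::thread> workers;
  for (int64_t t = 0; t < nthreads; ++t) {
    const int64_t begin = t * chunk;
    const int64_t last = std::min(begin + chunk, steps);
    if (begin < last) workers.emplace_back(fill, begin, last);
  }
  for (auto& w : workers) w.join();
  return out;
}
```

Because each thread writes a disjoint slice of `out`, the result is identical to the serial loop, which is what makes this kind of fill kernel a safe candidate for CPU parallelization.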
Huh, interesting that we're willing to write into any tensor with the correct numel. Well, I suppose it's handled correctly below.
Yes, it's strange, but that was the existing behavior.
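The contract being discussed can be modeled in a few lines: the destination is resized only when its element count differs from `steps`; when the numel already matches, the existing storage is written into as-is. `FlatTensor` below is a made-up stand-in for `at::Tensor`, used only to make the rule concrete.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for at::Tensor, tracking only flat storage.
struct FlatTensor {
  std::vector<double> data;
  int64_t numel() const { return static_cast<int64_t>(data.size()); }
  void resize(int64_t n) { data.assign(n, 0.0); }
};

// Mirrors the resize guard in the diff: resize only on numel mismatch;
// otherwise the existing storage is reused unchanged.
void prepare_out(FlatTensor& result, int64_t steps) {
  if (result.numel() != steps) {
    result.resize(steps);
  }
}
```

So an out tensor whose numel happens to match `steps` is accepted and overwritten in place, whatever its prior contents, which is the behavior the reviewers note was preserved from the old implementation.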
Summary:
Pull Request resolved: pytorch/pytorch#15438
Reviewed By: ezyang
Differential Revision: D13529626
Pulled By: gchanan
fbshipit-source-id: 896e8afee3d6b5a706c4f5815b91ba6bd8af6672
No description provided.