add torch.meshgrid() OpInfo #62720
Conversation
💊 CI failures summary and remediations
As of commit aa6973e (more details on the Dr. CI page):
ci.pytorch.org: 1 failed
(This comment was automatically generated by Dr. CI.)
Force-pushed from c885221 to 4e8e658.
Force-pushed from e5f5daa to 897ba90.
Force-pushed from c3672cf to 3d1b67a.
Force-pushed from 3d1b67a to 4ff55ce.
def sample_inputs_meshgrid(
        calling_convention: str) -> Callable[[OpInfo, torch.device, torch.dtype, bool], List[SampleInput]]:
Adding all this type information looks time-consuming; did the system make you do it? If so, we should probably stop linting this file.
I don't remember if it made me do it. It seems unlikely given that most of the file is not type annotated.
def sample_inputs(op_info: OpInfo, device: torch.device, dtype: torch.dtype,
                  requires_grad: bool) -> List[SampleInput]:
    Z = ()  # note: torch.rand(()) yields a random 0D tensor
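The inline note about `torch.rand(())` can be checked directly. A minimal sketch (assuming a recent PyTorch with the `indexing` keyword): an empty shape produces a 0-dimensional tensor, which `torch.meshgrid` treats as a dimension of size 1.

```python
import torch

scalar = torch.rand(())   # 0-D (scalar) tensor
vector = torch.rand(3)    # 1-D tensor of length 3

# meshgrid promotes the 0-D input to a size-1 dimension
gs, gv = torch.meshgrid(scalar, vector, indexing='ij')
assert scalar.ndim == 0
assert gs.shape == (1, 3) and gv.shape == (1, 3)
```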
Z seems like a fine addition to the S, M, and L defined at the global scope.
Done. I have some concern that Z is not necessarily interchangeable with the other sizes, since it represents a full shape, whereas S et al. are sometimes used as the length of a single dimension.
Fair point, but from that perspective I'd suggest simply not defining it and always writing (), then, because that concern is also a local concern
True, it is a local concern. S here is also a local, not the value from above. Do you think that having two local constants S and Z here would be inappropriate?
S = torch.Size([3])
Z = torch.Size([])
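The shape-versus-dimension-length distinction being discussed can be made concrete. In this sketch (the interpretation of the file's global S as an integer dimension length is an assumption here), a `torch.Size` is a full shape, so these locals are not drop-in replacements for an integer size:

```python
import torch

S = torch.Size([3])  # a full 1-D shape (one dimension of length 3)
Z = torch.Size([])   # the 0-D scalar shape

v = torch.rand(S)    # equivalent to torch.rand(3)
s = torch.rand(Z)    # equivalent to torch.rand(())
assert v.shape == S and v.ndim == 1
assert s.shape == Z and s.ndim == 0
```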
I'd prefer not shadowing the global for readability, especially not shadowing it with a very similar value
Changed to locals named SCALAR and VECTOR.
Hey @dagitses! Overall looks good. I made some notes about possibly simplifying the sample inputs.
Force-pushed from 95bbdc6 to 65ddedf.
                'Unsupported variant, must be one of {"variadic", "list"}. '
                f'Got "{variant}".')
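The two variants named in this error message correspond to the two ways `torch.meshgrid` can be invoked. A quick sketch (assuming PyTorch 1.10+ for the `indexing` keyword):

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5])

# "variadic" calling convention: tensors passed as separate arguments
gx1, gy1 = torch.meshgrid(x, y, indexing='ij')
# "list" calling convention: a single sequence of tensors
gx2, gy2 = torch.meshgrid([x, y], indexing='ij')

assert gx1.shape == (3, 2) and gy1.shape == (3, 2)
assert torch.equal(gx1, gx2) and torch.equal(gy1, gy2)
```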
SCALAR = torch.Size([])
I'm not sure these are really improving readability here but I guess it's OK
Cool!
Force-pushed from 65ddedf to 5c980fe.
@dagitses has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed from 5c980fe to fb4ef92.
@dagitses has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed from fb4ef92 to aa6973e.
@dagitses has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Fixes #62719