Improve custom function docs #60312
Conversation
💊 CI failures summary and remediations

As of commit b3d6c9c (more details on the Dr. CI page):

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

| Step | Action |
|---|---|
| Checkout PyTorch | 🔁 rerun |
| Run clang-format | 🔁 rerun |

1 job timed out:

pytorch_linux_xenial_py3_clang7_asan_test1

ci.pytorch.org: 1 failed
Force-pushed from e151ea3 to cea0cb9.
Read through the custom function part. Will do extending next.
Force-pushed from d5f437e to 408a039.
docs/source/notes/extending.rst
Outdated
^^^^^^^^^^^
In general, implement a custom function if you want to perform computations in your model
that are not differentiable or rely on non-PyTorch libraries (e.g., NumPy), but
still wish for your operation to compose with built-in PyTorch operators and
Not sure what you mean by "compose with built-in operators"?
Something like "chain with other ops", but that's also kind of implied by "work with the autograd engine", so I'm just removing it for now.
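(For context, the kind of custom function this part of the note describes looks roughly like the sketch below. It is only an illustration of wrapping a non-PyTorch library in a `torch.autograd.Function`, not the snippet from the docs; the `NumpyExp` name is made up.)

```python
import numpy as np
import torch
from torch.autograd import Function

class NumpyExp(Function):
    """Illustrative custom op that computes exp() through NumPy.

    The computation leaves PyTorch, so autograd cannot trace it and
    we must supply the backward formula ourselves.
    """

    @staticmethod
    def forward(ctx, x):
        result = torch.from_numpy(np.exp(x.detach().cpu().numpy())).to(x.device)
        ctx.save_for_backward(result)  # d/dx exp(x) = exp(x)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        return grad_output * result

x = torch.randn(3, dtype=torch.float64, requires_grad=True)
NumpyExp.apply(x).sum().backward()
print(torch.allclose(x.grad, x.exp()))  # True
```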
docs/source/notes/extending.rst
Outdated
custom Module. See the section below for more information on extending :mod:`torch.nn`.

If you'd like to alter the gradients during the backward pass or perform a side
effect, consider using function or module hooks.
capitalization for Module
Actually maybe link to the hooks here.
Also I think function -> tensor
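(For reference, the hooks this exchange suggests linking to are used roughly as below; a minimal sketch using the tensor `register_hook` and module `register_full_backward_hook` APIs.)

```python
import torch
import torch.nn as nn

# Tensor hook: inspect or rewrite a gradient as it flows back through a tensor.
x = torch.randn(3, requires_grad=True)
x.register_hook(lambda grad: grad * 2)  # returned value replaces the gradient

# Module hook: observe gradients flowing into and out of a module during backward.
linear = nn.Linear(3, 1)

def log_grad(module, grad_input, grad_output):
    print("grad_output norm:", grad_output[0].norm().item())

linear.register_full_backward_hook(log_grad)

linear(x).sum().backward()
print(x.grad)  # the tensor hook has doubled this gradient
```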
docs/source/notes/extending.rst
Outdated
comparing the value element-wise with the Jacobian computed numerically using
finite-differencing.

.. note::
Is this note still necessary?
Merging it into the section about mark_non_differentiable
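(As a concrete illustration of the finite-differencing check the quoted text describes, `gradcheck` is typically invoked as below; the `MyCube` function is a made-up example.)

```python
import torch
from torch.autograd import Function, gradcheck

class MyCube(Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x ** 3

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 3 * x ** 2

# gradcheck compares the analytical Jacobian from backward() against a
# numerical Jacobian computed with finite differences; double-precision
# inputs are needed for the comparison to be reliable.
inp = torch.randn(4, dtype=torch.double, requires_grad=True)
print(gradcheck(MyCube.apply, (inp,), eps=1e-6, atol=1e-4))  # True
```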
Force-pushed from beff35b to 116ace3.
docs/source/notes/extending.rst
Outdated
before the call, and their use will be registered in the graph. Note that this
logic won't traverse lists/dicts/any other data structures and will only
- consider :class:`Tensor` s that are direct arguments to the call. You can
+ consider :class:`Tensor`s that are direct arguments to the call. You can
Note to self: check if the space between the s and class is necessary
From the rendered doc https://docs-preview.pytorch.org/60312/notes/extending.html?highlight=custom%20function it looks like the space is necessary. However, with the space it looks bad as well; see a couple of lines above where we have ":class:`Function` s". So I'm okay with just removing the link in the plural case.
Also replacing Tensors -> tensors for consistency (unless we mean to refer to the actual class, in which case it would need to be uppercase ":class:`Tensor`").
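(To make the quoted sentence concrete: only tensors passed directly as arguments to `apply` are registered in the graph, while a tensor hidden inside a list is not. The sketch below is a hypothetical illustration, not code from the PR.)

```python
import torch
from torch.autograd import Function

class SumWithExtras(Function):
    @staticmethod
    def forward(ctx, x, extras):
        # `x` is a direct Tensor argument and is tracked by autograd;
        # `extras` is a plain list, so tensors inside it are not traversed.
        return x + sum(extras)

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input: grad for x, None for the list.
        return grad_output, None

a = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)

SumWithExtras.apply(a, [b]).sum().backward()
print(a.grad is not None)  # True: direct Tensor argument
print(b.grad is None)      # True: tensor inside a list is not registered in the graph
```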
Force-pushed from 116ace3 to 72e4b59.
Nice update on the notes!
you should explicitly declare this by decorating backward with the
:func:`~function.once_differentiable`. With this decorator, attempts to
perform double backward through your function will produce an error.
See our double backward tutorial for more information on double backward.
Is it already up? Can we link it here?
Soon :) pytorch/tutorials#1603 should be ready for review!
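(For reference, the decorator the quoted text refers to is applied as in the sketch below; `ClampNonDiff` is a made-up example and its backward formula is only illustrative.)

```python
import torch
from torch.autograd import Function
from torch.autograd.function import once_differentiable

class ClampNonDiff(Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        # Written assuming it will only be differentiated once; the decorator
        # turns any double-backward attempt into a loud error instead of
        # silently returning wrong higher-order gradients.
        (x,) = ctx.saved_tensors
        return grad_output * (x > 0).to(grad_output.dtype)

x = torch.randn(4, requires_grad=True)
ClampNonDiff.apply(x).sum().backward()  # first-order gradients work as usual
print(x.grad)
# Attempting double backward through ClampNonDiff (e.g. via create_graph=True)
# produces an error, as the quoted text describes.
```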
Force-pushed from 79df552 to 01224d2.

Force-pushed from 01224d2 to 0337521.
@albanD It has been a while (sorry!), but I have addressed your latest comments in this update, so it should be ready for another look! Still running into flaky CI issues with the double backward tutorial, so that will need a little longer to land; we can probably follow up with another PR once it does. The latest docs build is here: https://15333920-65600975-gh.circle-artifacts.com/0/docs/notes/extending.html
LGTM
@soulitzer has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@soulitzer has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@soulitzer merged this pull request in 2f615f6.
- ctx methods and make requirements of arguments more clear: save_for_backward, mark_dirty, mark_non_differentiable, and set_materialize_grads (BC-breaking?)
- torch.autograd.Function doc
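(The ctx methods that list refers to are used roughly as in the sketch below; an illustrative example covering save_for_backward, mark_non_differentiable, and set_materialize_grads, not the exact wording or examples added by the PR.)

```python
import torch
from torch.autograd import Function

class ScaleAndArgmax(Function):
    @staticmethod
    def forward(ctx, x, factor):
        ctx.set_materialize_grads(False)  # backward may receive None instead of zero-filled grads
        ctx.save_for_backward(x)          # tensors needed in backward go through save_for_backward
        ctx.factor = factor               # plain Python state can be stored on ctx directly
        idx = x.argmax()
        ctx.mark_non_differentiable(idx)  # integer output: declare it non-differentiable
        return x * factor, idx

    @staticmethod
    def backward(ctx, grad_scaled, grad_idx):
        if grad_scaled is None:           # possible because of set_materialize_grads(False)
            return None, None
        return grad_scaled * ctx.factor, None

x = torch.randn(5, requires_grad=True)
scaled, idx = ScaleAndArgmax.apply(x, 2.0)
scaled.sum().backward()
print(x.grad)  # every element equals 2.0
print(idx)     # argmax index, does not require grad
```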