Update the signature and test of torch.hamming_window() #152682
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152682
Note: Links to docs will display an error until the docs builds have been completed. ❌ 1 New Failure, 12 Pending, 1 Unrelated Failure as of commit d7268db with merge base a4fc051. NEW FAILURE - The following job has failed:
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Attention! native_functions.yaml was changed. If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs: one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info. Caused by:
@pytorchbot label "release notes: python_frontend"
@mikaylagawarecki Could you please review? Thanks! See pytorch/aten/src/ATen/native/TensorFactories.cpp, Lines 1850 to 1930 in 50d4698,
especially some of the optional flags and their exact numeric defaults. Thanks!
Changing native_functions.yaml does not seem like the right change to me
I think it's possible that we might need a python wrapper to work around the arg parsing issues
cc @albanD
This is most likely BC-breaking indeed.
Should we just update the doc to fix the issue?
Thanks! I searched just now and found that the doc may actually be fine, since it only shows one usage of hamming_window with all params; no expanded usages are shown below it. That single usage matches the change I made in native_functions.yaml (Lines 12424 to 12425 in 84aa098).
@albanD Could you please review? I fixed the doc to match the signature shown in the code. Thanks.
The docs build is failing; could you fix the errors it reported:
@albanD Actually I understand what the warning means, but I couldn't find the mismatch when I searched for it. Could you please point it out if possible? Thanks a lot!
I didn't look into it. The line number shared here is usually from the beginning of the docstring.
@albanD Yes, I agree with you! This is caused by the lack of a blank line between an indented explicit markup block and more unindented text, e.g. something shown here. I also built the docs locally trying to find the mismatch, but the build succeeds when I run it. My local build result of hamming_window:

def hamming_window(
    window_length: _int,
    periodic: _bool | None = True,
    alpha: _float | None = 0.54,
    beta: _float | None = 0.46,
    *,
    dtype: _dtype | None = None,
    layout: _layout | None = None,
    device: DeviceLikeType | None = None,
    pin_memory: _bool | None = False,
    requires_grad: _bool | None = False,
) -> Tensor:
    r"""
    hamming_window(window_length, *, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

    Hamming window function.

    .. math::
        w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right),

    where :math:`N` is the full window size.

    The input :attr:`window_length` is a positive integer controlling the
    returned window size. :attr:`periodic` flag determines whether the returned
    window trims off the last duplicate value from the symmetric window and is
    ready to be used as a periodic window with functions like
    :meth:`torch.stft`. Therefore, if :attr:`periodic` is true, the :math:`N` in
    above formula is in fact :math:`\text{window\_length} + 1`. Also, we always have
    ``torch.hamming_window(L, periodic=True)`` equal to
    ``torch.hamming_window(L + 1, periodic=False)[:-1])``.

    .. note::
        If :attr:`window_length` :math:`=1`, the returned window contains a single value 1.

    .. note::
        This is a generalized version of :meth:`torch.hann_window`.

    Arguments:
        window_length (int): the size of returned window

    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
            ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.

    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.

    .. function:: hamming_window(window_length, periodic, *, dtype=None, \
                  layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
        :noindex:

    Hamming window function with periodic specified.

    Arguments:
        window_length (int): the size of returned window
        periodic (bool): If True, returns a window to be used as periodic
            function. If False, return a symmetric window.

    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
            ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.

    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.

    .. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, \
                  layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
        :noindex:

    Hamming window function with periodic and alpha specified.

    Arguments:
        window_length (int): the size of returned window
        periodic (bool): If True, returns a window to be used as periodic
            function. If False, return a symmetric window.
        alpha (float): The coefficient :math:`\alpha` in the equation above

    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
            ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.

    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.

    .. function:: hamming_window(window_length, periodic, float alpha, float beta, *, dtype=None, \
                  layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
        :noindex:

    Hamming window function with periodic, alpha and beta specified.

    Arguments:
        window_length (int): the size of returned window
        periodic (bool): If True, returns a window to be used as periodic
            function. If False, return a symmetric window.
        alpha (float): The coefficient :math:`\alpha` in the equation above
        beta (float): The coefficient :math:`\beta` in the equation above

    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
            ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.

    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.
    """
    ...
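[Editor's note] As a quick sanity check of what the docstring above promises (the formula with the default alpha = 0.54, beta = 0.46, and the periodic/symmetric relation), a minimal sketch along these lines should hold; the window length and use of assert here are arbitrary illustration choices, not part of the PR:

import math
import torch

L = 8

# Symmetric window: N in the formula equals window_length.
n = torch.arange(L, dtype=torch.get_default_dtype())
manual = 0.54 - 0.46 * torch.cos(2 * math.pi * n / (L - 1))
assert torch.allclose(manual, torch.hamming_window(L, periodic=False))

# Periodic window: same as a length L + 1 symmetric window with the last sample dropped.
assert torch.allclose(
    torch.hamming_window(L, periodic=True),
    torch.hamming_window(L + 1, periodic=False)[:-1],
)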
torch/_torch_docs.py (Outdated)

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, \
layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
:noindex:
My guess is that these lines are the problematic ones. Either the line continuation or noindex position.
Let me see; what if I break it into one line and indent :noindex: by 4 spaces? Though I notice that some other funcs keep 3 spaces.

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
    :noindex:

I'll give it a try and see how the reported error changes.
Changes seen, you're right! It was indeed one of them:
- Pre-version: https://github.com/pytorch/pytorch/actions/runs/16016452841/job/45224096063 (Fail)
- New-Version: https://github.com/pytorch/pytorch/actions/runs/16173999664/job/45657856489 (Succeed)
I then changed it to fix the line length, to see what will happen.
My guess is that maybe the .. function:: line is being treated as indented content? I found no other similar references, since no other func has a description line as long as this one:
.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
@svekars any hint on this one? :)
Do you have to use .. function::?
I think, from the discussion above, the decision is to just update the doc to fix the issue rather than change the code. And as shown here, there are 4 overloads of hamming_window:
pytorch/aten/src/ATen/native/TensorFactories.cpp
Lines 1872 to 1952 in 1d0f45d

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ hamming_window ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tensor hamming_window(
    int64_t window_length,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length, /*periodic=*/true, dtype, layout, device, pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length,
      periodic,
      /*alpha=*/0.54,
      dtype,
      layout,
      device,
      pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    double alpha,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length,
      periodic,
      alpha,
      /*beta=*/0.46,
      dtype,
      layout,
      device,
      pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    double alpha,
    double beta,
    std::optional<ScalarType> dtype_opt,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  // See [Note: hacky wrapper removal for TensorOptions]
  ScalarType dtype = c10::dtype_or_default(dtype_opt);
  TensorOptions options =
      TensorOptions().dtype(dtype).layout(layout).device(device).pinned_memory(
          pin_memory);
  window_function_checks("hamming_window", options, window_length);
  if (window_length == 0) {
    return at::empty({0}, options);
  }
  if (window_length == 1) {
    return native::ones({1}, dtype, layout, device, pin_memory);
  }
  if (periodic) {
    window_length += 1;
  }
  auto window =
      native::arange(window_length, dtype, layout, device, pin_memory);
  window.mul_(c10::pi<double> * 2. / static_cast<double>(window_length - 1))
      .cos_()
      .mul_(-beta)
      .add_(alpha);
  return periodic ? window.narrow(0, 0, window_length - 1) : std::move(window);
}
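[Editor's note] The overload chain above fills in the defaults step by step (periodic=True, alpha=0.54, beta=0.46) and handles the 0- and 1-sample edge cases explicitly. A small illustrative check of that behaviour from Python; the window length here is an arbitrary choice, not from the PR:

import torch

L = 16
w_default = torch.hamming_window(L)                     # defaults: periodic=True, alpha=0.54, beta=0.46
w_explicit = torch.hamming_window(L, True, 0.54, 0.46)  # all positional overload arguments spelled out
assert torch.equal(w_default, w_explicit)

# Edge cases handled explicitly in the implementation above.
assert torch.hamming_window(0).numel() == 0                   # empty window
assert torch.equal(torch.hamming_window(1), torch.ones(1))    # single-sample window is [1.]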
Looks great!
What did you change?
Just like this, from:

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, \
pin_memory=False, requires_grad=False) -> Tensor
:noindex:

to:

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, \
pin_memory=False, requires_grad=False) -> Tensor
:noindex:

A try which makes sense.
cc @albanD So shall we merge? Or just wait for the other checks? Thanks a lot!
@pytorchbot merge

Merging!
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Alright, thanks! For that I note: thinking that it should be done by
Successfully rebased d65f559 to d7268db.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 3 mandatory check(s) failed. The first few are: Dig deeper by viewing the failures on hud.
@pytorchbot merge
Pull workflow has not been scheduled for the PR yet. It could be because author doesn't have permissions to run those or skip-checks keywords were added to PR/commits, aborting merge. Please get/give approval for the workflows and/or remove skip ci decorators before next merge attempt. If you think this is a mistake, please contact PyTorch Dev Infra.
@pytorchbot merge |
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 1 mandatory check(s) failed. The first few are: Dig deeper by viewing the failures on hud.
@pytorchbot merge -h
PyTorchBot Help: Merge, Revert, Rebase, Label, Dr CI, cherry-pick
@pytorchbot merge -f "Doc build is good and this doesn't touch anything else"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use … Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Thanks a lot!
Fixes pytorch#146590
Pull Request resolved: pytorch#152682
Approved by: https://github.com/albanD
Fixes #146590