Update the signature and test of torch.hamming_window() by ILCSFNO · Pull Request #152682 · pytorch/pytorch · GitHub

Conversation

@ILCSFNO
Contributor

@ILCSFNO ILCSFNO commented May 2, 2025

Fixes #146590

@pytorch-bot

pytorch-bot bot commented May 2, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152682

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 12 Pending, 1 Unrelated Failure

As of commit d7268db with merge base a4fc051:

NEW FAILURE - The following job has failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions
Contributor

github-actions bot commented May 2, 2025

Attention! native_functions.yaml was changed

If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs, one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.


Caused by:

@ILCSFNO
Contributor Author

ILCSFNO commented May 2, 2025

@pytorchbot label "release notes: python_frontend"

@pytorch-bot pytorch-bot bot added the release notes: python_frontend python frontend release notes category label May 2, 2025
@ILCSFNO
Contributor Author

ILCSFNO commented May 2, 2025

@mikaylagawarecki Could you please have a review? Thanks!
I'm wondering about the native_functions.yaml warning and am unsure whether and how anything needs to change here:

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ hamming_window ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tensor hamming_window(
    int64_t window_length,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length, /*periodic=*/true, dtype, layout, device, pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length,
      periodic,
      /*alpha=*/0.54,
      dtype,
      layout,
      device,
      pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    double alpha,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length,
      periodic,
      alpha,
      /*beta=*/0.46,
      dtype,
      layout,
      device,
      pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    double alpha,
    double beta,
    std::optional<ScalarType> dtype_opt,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  // See [Note: hacky wrapper removal for TensorOptions]
  ScalarType dtype = c10::dtype_or_default(dtype_opt);
  TensorOptions options =
      TensorOptions().dtype(dtype).layout(layout).device(device).pinned_memory(
          pin_memory);
  window_function_checks("hamming_window", options, window_length);
  if (window_length == 0) {
    return at::empty({0}, options);
  }
  if (window_length == 1) {
    return native::ones({1}, dtype, layout, device, pin_memory);
  }
  if (periodic) {
    window_length += 1;
  }
  auto window =
      native::arange(window_length, dtype, layout, device, pin_memory);
  window.mul_(c10::pi<double> * 2. / static_cast<double>(window_length - 1))
      .cos_()
      .mul_(-beta)
      .add_(alpha);
  return periodic ? window.narrow(0, 0, window_length - 1) : std::move(window);
}

since I'm unsure about some of the optional flags, especially the exact numeric defaults. Thanks!
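
For reference, a minimal Python sketch (not part of this PR, just my own check) of the four call patterns that correspond to the four C++ overloads above, with the defaults taken from the code (periodic=True, alpha=0.54, beta=0.46):

import torch

# Each overload adds one more positional argument; defaults match the C++ code
# above (periodic=True, alpha=0.54, beta=0.46).
w1 = torch.hamming_window(10)                    # window_length only
w2 = torch.hamming_window(10, True)              # + periodic
w3 = torch.hamming_window(10, True, 0.54)        # + alpha
w4 = torch.hamming_window(10, True, 0.54, 0.46)  # + alpha and beta

# With default-matching arguments, all four calls produce the same window.
assert torch.allclose(w1, w4) and torch.allclose(w2, w4) and torch.allclose(w3, w4)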

Contributor

@mikaylagawarecki mikaylagawarecki left a comment


Changing native_functions.yaml does not seem like the right change to me

I think it's possible that we might need a python wrapper to work around the arg parsing issues

cc @albanD

@mikaylagawarecki mikaylagawarecki added the triaged This issue has been looked at a team member, and triaged and prioritized into an appropriate module label May 2, 2025
Collaborator

@albanD albanD left a comment


This is most likely BC-breaking indeed.
Should we just update the doc to fix the issue?

@ILCSFNO
Contributor Author

ILCSFNO commented May 3, 2025

This is most likely BC-breaking indeed. Should we just update the doc to fix the issue?

Thanks! I just searched and found that the doc may actually be fine, since it only shows one usage of hamming_window with all params and no expanded usages below. That single usage matches the change I made in native_functions.yaml.

pytorch/torch/_torch_docs.py

Lines 12424 to 12425 in 84aa098

hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, *, dtype=None, \
layout=torch.strided, device=None, requires_grad=False) -> Tensor
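
As a quick local sanity check (just a sketch on my side, not part of the PR), one can print the docstring that ends up attached to the function and compare it with the snippet above:

import torch

# Print the rendered docstring of torch.hamming_window so it can be compared
# against the signature block in torch/_torch_docs.py quoted above.
print(torch.hamming_window.__doc__)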

@pytorch-bot pytorch-bot bot added the topic: not user facing topic category label May 23, 2025
@ILCSFNO ILCSFNO requested a review from albanD May 23, 2025 11:51
@ILCSFNO
Contributor Author

ILCSFNO commented May 23, 2025

@albanD Could you please have a review? I fixed the doc to match the signature shown in the code. Thanks.

@ILCSFNO
Contributor Author

ILCSFNO commented Jul 2, 2025

@albanD Could you please have a review? I fixed the doc to match the signature shown in the code. Thanks.

cc @albanD

@albanD
Collaborator

albanD commented Jul 2, 2025

The docs build is failing, could you fix the errors it reported:

/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/__init__.py:docstring of torch._VariableFunctionsClass.hamming_window:44: WARNING: Explicit markup ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/__init__.py:docstring of torch._VariableFunctionsClass.hamming_window:77: WARNING: Explicit markup ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/__init__.py:docstring of torch._VariableFunctionsClass.hamming_window:112: WARNING: Explicit markup ends without a blank line; unexpected unindent.
=========================

@ILCSFNO
Contributor Author

ILCSFNO commented Jul 2, 2025

The docs build is failing, could you fix the errors it reported:

/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/__init__.py:docstring of torch._VariableFunctionsClass.hamming_window:44: WARNING: Explicit markup ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/__init__.py:docstring of torch._VariableFunctionsClass.hamming_window:77: WARNING: Explicit markup ends without a blank line; unexpected unindent.
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/__init__.py:docstring of torch._VariableFunctionsClass.hamming_window:112: WARNING: Explicit markup ends without a blank line; unexpected unindent.
=========================

@albanD Actually I understand what the warning means, but I couldn't find the mismatch when I searched for it. Could you please point it out if possible? Thanks a lot!
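
For reference, here is a rough way to reproduce this class of warning in isolation (assuming docutils is installed; Sphinx uses docutils for the RST parsing, so the message should match), though it doesn't tell me where in our docstring the problem is:

from docutils.core import publish_doctree

# An indented explicit-markup block followed by unindented text with no blank
# line in between. Parsing this prints a warning to stderr:
#   "Explicit markup ends without a blank line; unexpected unindent."
bad_rst = """\
.. note::
   Some indented directive content.
Unindented text with no blank line above it.
"""

publish_doctree(bad_rst)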

@albanD
Collaborator

albanD commented Jul 2, 2025

I didn't look into it. The line numbers reported here are usually counted from the beginning of the docstring.
In this case, I guess a missing empty line?

@ILCSFNO
Contributor Author

ILCSFNO commented Jul 3, 2025

@albanD Yes, I agree with you! This is caused by the lack of a blank line between an indented explicit markup block and following unindented text, e.g.

.. testcode:: ExHist

   print "This is a test"
 Output:                         <------------- There should be a blank line above this

 .. testoutput:: ExHist

Something shown here.

I also built locally to try to find the mismatch, but the build succeeds when I run USE_CUDA=0 python setup.py develop.
Furthermore, I tried to work out which lines the reported Line 44, 77, 112 refer to, but couldn't.

My local build result, lines 15058-15202 of torch/_C/_VariableFunctions.pyi, is shown below:

def hamming_window(
    window_length: _int,
    periodic: _bool | None = True,
    alpha: _float | None = 0.54,
    beta: _float | None = 0.46,
    *,
    dtype: _dtype | None = None,
    layout: _layout | None = None,
    device: DeviceLikeType | None = None,
    pin_memory: _bool | None = False,
    requires_grad: _bool | None = False,
) -> Tensor: 
    r"""
    hamming_window(window_length, *, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
    
    Hamming window function.
    
    .. math::
        w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right),
    
    where :math:`N` is the full window size.
    
    The input :attr:`window_length` is a positive integer controlling the
    returned window size. :attr:`periodic` flag determines whether the returned
    window trims off the last duplicate value from the symmetric window and is
    ready to be used as a periodic window with functions like
    :meth:`torch.stft`. Therefore, if :attr:`periodic` is true, the :math:`N` in
    above formula is in fact :math:`\text{window\_length} + 1`. Also, we always have
    ``torch.hamming_window(L, periodic=True)`` equal to
    ``torch.hamming_window(L + 1, periodic=False)[:-1])``.
    
    .. note::
        If :attr:`window_length` :math:`=1`, the returned window contains a single value 1.
    
    .. note::
        This is a generalized version of :meth:`torch.hann_window`.
    
    Arguments:
        window_length (int): the size of returned window
    
    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
              ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.
    
    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.
    
    .. function:: hamming_window(window_length, periodic, *, dtype=None, \
    layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
       :noindex:
    
    Hamming window function with periodic specified.
    
    Arguments:
        window_length (int): the size of returned window
        periodic (bool): If True, returns a window to be used as periodic
            function. If False, return a symmetric window.
    
    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
              ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.
    
    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.
    
    .. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, \
    layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
       :noindex:
    
    Hamming window function with periodic and alpha specified.
    
    Arguments:
        window_length (int): the size of returned window
        periodic (bool): If True, returns a window to be used as periodic
            function. If False, return a symmetric window.
        alpha (float): The coefficient :math:`\alpha` in the equation above
    
    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
              ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.
    
    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.
    
    .. function:: hamming_window(window_length, periodic, float alpha, float beta, *, dtype=None, \
    layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
       :noindex:
    
    Hamming window function with periodic, alpha and beta specified.
    
    Arguments:
        window_length (int): the size of returned window
        periodic (bool): If True, returns a window to be used as periodic
            function. If False, return a symmetric window.
        alpha (float): The coefficient :math:`\alpha` in the equation above
        beta (float): The coefficient :math:`\beta` in the equation above
    
    Keyword args:
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`). Only floating point types are supported.
        layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
              ``torch.strided`` (dense layout) is supported.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_device`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        pin_memory (bool, optional): If set, returned tensor would be allocated in
            the pinned memory. Works only for CPU tensors. Default: ``False``.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.
    
    Returns:
        Tensor: A 1-D tensor of size :math:`(\text{window\_length},)` containing the window.
    """
    ...
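
As a side check (not part of the PR), a small sketch verifying the identity stated in the docstring above, namely that the periodic window equals the symmetric window of length L + 1 with its last sample dropped:

import torch

L = 16
periodic = torch.hamming_window(L, True)             # periodic window
symmetric = torch.hamming_window(L + 1, False)[:-1]  # symmetric window, last sample dropped
assert torch.allclose(periodic, symmetric)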

Comment on lines 12532 to 12624
.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, \
layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
   :noindex:
Collaborator


My guess is that these lines are the problematic ones. Either the line continuation or noindex position.

Contributor Author


Let me see, what if we break it into one line and indent :noindex: with 4 spaces? Though I notice that some other funcs keep 3 spaces.

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor
    :noindex:

I'll give it a try and see how the reported error changes.

Contributor Author

@ILCSFNO ILCSFNO Jul 9, 2025


Changes observed, you're right! It was actually one of them:

I changed it to fix the line length, to see what happens to the reported error then.

Contributor Author


My guess is that maybe the .. function:: line is treated as indented markup? I couldn't find any similar reference, since no other func has a signature line as long as this description:

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Collaborator


@svekars any hint on this one? :)

Contributor


Do you have to use .. function::?

Contributor Author


From the discussion above, the decision was to just update the doc to fix the issue instead of changing the code. And from here, there are 4 overloads of hamming_window:

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ hamming_window ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tensor hamming_window(
    int64_t window_length,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length, /*periodic=*/true, dtype, layout, device, pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length,
      periodic,
      /*alpha=*/0.54,
      dtype,
      layout,
      device,
      pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    double alpha,
    std::optional<ScalarType> dtype,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  return native::hamming_window(
      window_length,
      periodic,
      alpha,
      /*beta=*/0.46,
      dtype,
      layout,
      device,
      pin_memory);
}

Tensor hamming_window(
    int64_t window_length,
    bool periodic,
    double alpha,
    double beta,
    std::optional<ScalarType> dtype_opt,
    std::optional<Layout> layout,
    std::optional<Device> device,
    std::optional<bool> pin_memory) {
  // See [Note: hacky wrapper removal for TensorOptions]
  ScalarType dtype = c10::dtype_or_default(dtype_opt);
  TensorOptions options =
      TensorOptions().dtype(dtype).layout(layout).device(device).pinned_memory(
          pin_memory);
  window_function_checks("hamming_window", options, window_length);
  if (window_length == 0) {
    return at::empty({0}, options);
  }
  if (window_length == 1) {
    return native::ones({1}, dtype, layout, device, pin_memory);
  }
  if (periodic) {
    window_length += 1;
  }
  auto window =
      native::arange(window_length, dtype, layout, device, pin_memory);
  window.mul_(c10::pi<double> * 2. / static_cast<double>(window_length - 1))
      .cos_()
      .mul_(-beta)
      .add_(alpha);
  return periodic ? window.narrow(0, 0, window_length - 1) : std::move(window);
}

@ILCSFNO
Contributor Author

ILCSFNO commented Jul 18, 2025

@albanD @svekars How about this version?

Collaborator

@albanD albanD left a comment


Looks great!
What did you change?

@ILCSFNO
Contributor Author

ILCSFNO commented Jul 18, 2025

My guess is that maybe the .. function:: line is treated as indented markup? I couldn't find any similar reference, since no other func has a signature line as long as this description:

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, pin_memory=False, requires_grad=False) -> Tensor

Just like this, from:

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, \
pin_memory=False, requires_grad=False) -> Tensor
   :noindex:

to:

.. function:: hamming_window(window_length, periodic, float alpha, *, dtype=None, layout=None, device=None, \
    pin_memory=False, requires_grad=False) -> Tensor
   :noindex:

It was a bit of trial and error, but it makes sense.

@ILCSFNO
Contributor Author

ILCSFNO commented Jul 21, 2025

cc @albanD So shall we merge? Or just wait for the other checks? Thanks a lot!

@albanD
Collaborator

albanD commented Jul 21, 2025

@pytorchbot merge

Merging!
FYI once the PR is approved, you can ask the bot to merge yourself!

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jul 21, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@ILCSFNO
Contributor Author

ILCSFNO commented Jul 21, 2025

Alright, thanks! I note that I get:

Merging is blocked
You're not authorized to push to this branch. Visit https://docs.github.com/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/about-protected-branches for more information.

Since it seems this has to be done by a Collaborator, I'll try it myself next time if I get the chance. Thanks a lot!

@pytorchmergebot
Collaborator

Successfully rebased patch-8 onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout patch-8 && git pull --rebase)

@pytorch-bot pytorch-bot bot removed the ciflow/trunk Trigger trunk jobs on your pull request label Aug 1, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

Merge failed

Reason: 3 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team Raised by workflow job

Failing merge rule: Core Maintainers

@albanD
Collaborator

albanD commented Aug 1, 2025

@pytorchbot merge

@pytorch-bot

pytorch-bot bot commented Aug 1, 2025

Pull workflow has not been scheduled for the PR yet. It could be because author doesn't have permissions to run those or skip-checks keywords were added to PR/commits, aborting merge. Please get/give approval for the workflows and/or remove skip ci decorators before next merge attempt. If you think this is a mistake, please contact PyTorch Dev Infra.

@albanD
Collaborator

albanD commented Aug 4, 2025

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 4, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team Raised by workflow job

Failing merge rule: Core Maintainers

@albanD
Collaborator

albanD commented Aug 4, 2025

@pytorchbot merge -h

@pytorch-bot

pytorch-bot bot commented Aug 4, 2025

PyTorchBot Help

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci,cherry-pick} ...

In order to invoke the bot on your PR, include a line that starts with
@pytorchbot anywhere in a comment. That line will form the command; no
multi-line commands are allowed. Some commands may be used on issues as specified below.

Example:
    Some extra context, blah blah, wow this PR looks awesome

    @pytorchbot merge

optional arguments:
  -h, --help            Show this help message and exit.

command:
  {merge,revert,rebase,label,drci,cherry-pick}
    merge               Merge a PR
    revert              Revert a PR
    rebase              Rebase a PR
    label               Add label to a PR
    drci                Update Dr. CI
    cherry-pick         Cherry pick a PR onto a release branch

Merge

usage: @pytorchbot merge [-f MESSAGE | -i] [-ic] [-r [{viable/strict,main}]]

Merge an accepted PR, subject to the rules in .github/merge_rules.json.
By default, this will wait for all required checks (lint, pull) to succeed before merging.

optional arguments:
  -f MESSAGE, --force MESSAGE
                        Merge without checking anything. This requires a reason for auditting purpose, for example:
                        @pytorchbot merge -f 'Minor update to fix lint. Expecting all PR tests to pass'
                        
                        Please use `-f` as last resort, prefer `--ignore-current` to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.
  -i, --ignore-current  Merge while ignoring the currently failing jobs.  Behaves like -f if there are no pending jobs.
  -ic                   Old flag for --ignore-current. Deprecated in favor of -i.
  -r [{viable/strict,main}], --rebase [{viable/strict,main}]
                        Rebase the PR to re run checks before merging.  Accepts viable/strict or main as branch options and will default to viable/strict if not specified.

Revert

usage: @pytorchbot revert -m MESSAGE -c
                          {nosignal,ignoredsignal,landrace,weird,ghfirst}

Revert a merged PR. This requires that you are a Meta employee.

Example:
  @pytorchbot revert -m="This is breaking tests on trunk. hud.pytorch.org/" -c=nosignal

optional arguments:
  -m MESSAGE, --message MESSAGE
                        The reason you are reverting, will be put in the commit message. Must be longer than 3 words.
  -c {nosignal,ignoredsignal,landrace,weird,ghfirst}, --classification {nosignal,ignoredsignal,landrace,weird,ghfirst}
                        A machine-friendly classification of the revert reason.

Rebase

usage: @pytorchbot rebase [-s | -b BRANCH]

Rebase a PR. Rebasing defaults to the stable viable/strict branch of pytorch.
Repeat contributor may use this command to rebase their PR.

optional arguments:
  -s, --stable          [DEPRECATED] Rebase onto viable/strict
  -b BRANCH, --branch BRANCH
                        Branch you would like to rebase to

Label

usage: @pytorchbot label labels [labels ...]

Adds label to a PR or Issue [Can be used on Issues]

positional arguments:
  labels  Labels to add to given Pull Request or Issue [Can be used on Issues]

Dr CI

usage: @pytorchbot drci 

Update Dr. CI. Updates the Dr. CI comment on the PR in case it's gotten out of sync with actual CI results.

cherry-pick

usage: @pytorchbot cherry-pick --onto ONTO [--fixes FIXES] -c
                               {regression,critical,fixnewfeature,docs,release}

Cherry pick a pull request onto a release branch for inclusion in a release

optional arguments:
  --onto ONTO, --into ONTO
                        Branch you would like to cherry pick onto (Example: release/2.1)
  --fixes FIXES         Link to the issue that your PR fixes (Example: https://github.com/pytorch/pytorch/issues/110666)
  -c {regression,critical,fixnewfeature,docs,release}, --classification {regression,critical,fixnewfeature,docs,release}
                        A machine-friendly classification of the cherry-pick reason.

@albanD
Collaborator

albanD commented Aug 4, 2025

@pytorchbot merge -f "Doc build is good and this doesn't touch anything else"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

@ILCSFNO
Contributor Author

ILCSFNO commented Aug 6, 2025

Thanks a lot!

@ILCSFNO ILCSFNO deleted the patch-8 branch August 6, 2025 06:24
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025

Labels

ciflow/trunk (Trigger trunk jobs on your pull request), Merged, open source, release notes: python_frontend (python frontend release notes category), topic: not user facing (topic category), triaged (This issue has been looked at a team member, and triaged and prioritized into an appropriate module)


Development

Successfully merging this pull request may close these issues.

Signature should be extended for torch.hamming_window()
