OpInfo Ref: fmod, remainder by krshrimali · Pull Request #61527 · pytorch/pytorch · GitHub

Conversation

@krshrimali
Contributor

@krshrimali krshrimali commented Jul 12, 2021

See #54261 for OpInfo tracker.

This PR:

cc: @mruberry

@krshrimali krshrimali added module: docs Related to our documentation, both in docs/ and docblocks module: testing Issues related to the torch.testing module (not tests) labels Jul 12, 2021
@facebook-github-bot
Contributor

facebook-github-bot commented Jul 12, 2021

💊 CI failures summary and remediations

As of commit bef0a56 (more details on the Dr. CI page and at hud.pytorch.org/pr/61527):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


Preview docs built from this PR

This comment was automatically generated by Dr. CI.

@krshrimali krshrimali marked this pull request as draft July 12, 2021 11:34
@krshrimali krshrimali marked this pull request as ready for review July 13, 2021 04:04
@krshrimali krshrimali requested a review from mruberry July 13, 2021 04:04
@krshrimali krshrimali mentioned this pull request Jul 13, 2021
@krshrimali krshrimali changed the title OpInfo Ref: fmod, remainder; Add complex64 to OpInfo ref testing OpInfo Ref: fmod, remainder Jul 14, 2021
@zou3519 zou3519 added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jul 14, 2021
@krshrimali
Contributor Author

Note: please ignore the unexpectedly long list of commits; I mistakenly pulled the wrong branch into this branch on my system. :)

See :func:`torch.fmod` for how division by zero is handled.
.. note::
This is different to `math.remainder <https://docs.python.org/dev/library/math.html#math.remainder>`_
Collaborator

"This is different to" -> "This is different from"

Can we explain HOW it's different succinctly? Like, what do std::remainder and math.remainder do differently? Also, why is PyTorch divergent from them? We typically try to follow what C++ and Python do.

What's even more interesting is that the Python Array API defines remainder. That definition may be underspecified. cc @rgommers

So I think there are still interesting questions of fact to be resolved with torch.remainder.

Contributor Author

@krshrimali krshrimali Jul 15, 2021

Can we explain HOW it's different succinctly? Like, what do std::remainder and math.remainder do differently?

std::remainder and math.remainder try to implement the remainder operation defined by the IEEE 754 standard. Here is the relevant excerpt from Section 5.1 of the IEEE 754-1985 standard:

5.1 Arithmetic
When y != 0, the remainder r = x REM y is defined regardless of the rounding mode by the mathematical relation r = x - y * n, where n is the integer nearest the exact value x/y; whenever |n - x/y| = 1/2, then n is even. Thus, the remainder is always exact. If r = 0, its sign shall be that of x. Precision control (4.3) shall not apply to the remainder operation.

NumPy, on the other hand, implements Python's "%" (modulus) operator. Here is how Python defines it (link: https://docs.python.org/3.7/reference/expressions.html#binary-arithmetic-operations):

The % (modulo) operator yields the remainder from the division of the first argument by the second. The numeric
arguments are first converted to a common type. A zero right argument raises the ZeroDivisionError exception. The
arguments may be floating point numbers, e.g., 3.14%0.7 equals 0.34 (since 3.14 equals 4*0.7 + 0.34.) The modulo
operator always yields a result with the same sign as its second operand (or zero); the absolute value of the result is
strictly smaller than the absolute value of the second operand.

The difference that I can understand is:

| | Python % (NumPy/PyTorch) | IEEE remainder (C++ std::remainder, math.remainder) |
|---|---|---|
| Sign of result | Same sign as the second operand | No sign guarantee in general; when the result is zero, it takes the sign of the first operand (the dividend) |
| Rounding used | floor | nearest integer |

(I'll update this comment with the formulation as well)
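To make the two conventions concrete, here is a small pure-Python sketch (no PyTorch required) comparing Python's % operator, the convention NumPy and PyTorch follow, with the IEEE 754 remainder implemented by math.remainder:

```python
import math

# Python's % (the convention NumPy and PyTorch follow):
# n = floor(x / y), so the result carries the sign of the divisor y.
assert 5 % 3 == 2
assert -5 % 3 == 1        # floor(-5/3) = -2, so -5 - 3*(-2) = 1
assert 5 % -3 == -1       # floor(5/-3) = -2, so 5 - (-3)*(-2) = -1

# IEEE 754 remainder (math.remainder, like C++ std::remainder):
# n is the integer *nearest* x/y, so the result can be negative even
# when both operands are positive.
assert math.remainder(5, 3) == -1.0   # nearest integer to 5/3 is 2, so 5 - 3*2 = -1
assert math.remainder(-5, 3) == 1.0   # nearest integer to -5/3 is -2, so -5 - 3*(-2) = 1
```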

Also, why is PyTorch divergent from them? We typically try to follow what C++ and Python do.

From the above, I think PyTorch diverges from the IEEE remainder while conforming to Python's % operator (as well as to np.remainder). Not sure if this justifies the decision, but I couldn't find any other stated reason.

What's even more interesting is that the Python Array API defines remainder. That definition may be underspecified. cc @rgommers

I agree! But I'm not sure about the reach of the Python Array API (how many users will refer to its documentation to understand what this op does). In any case, linking to the definition torch.remainder follows would help.

Collaborator

I think the intent there is to do the same as Python, NumPy and PyTorch. I'll add it to the list of things that need clarification.

Collaborator

Being clearer about the difference would be great. Additionally, I guess we should implement torch.mod and make torch.remainder an alias for it? NumPy has fmod, mod, and remainder; I think PyTorch currently has just fmod and remainder.

The documentation of torch.fmod will also have to be updated.

Contributor Author

I have updated the doc for torch.remainder, here is the rendered version: https://14874396-65600975-gh.circle-artifacts.com/0/docs/generated/torch.remainder.html#torch.remainder. PTAL, @mruberry whenever you find time.

I agree that PyTorch should have a torch.mod function, with remainder as an alias for it. Please correct me if I'm wrong: implementation-wise we don't need to change anything (w.r.t. kernels etc.) - all we need to do is add a torch.mod entry and alias it to remainder? Or do we want to rename everything from remainder to mod and then add a torch.remainder entry that aliases torch.mod? Just confirming before implementing, thanks!

The documentation of torch.fmod will also have to be updated.

AFAIK, torch.fmod is consistent with std::fmod since it itself calls std::fmod (for floating types): https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp#L813.

I'm not sure why it needs any updates - is there anything I'm missing? The current documentation looks good to me: https://14874396-65600975-gh.circle-artifacts.com/0/docs/generated/torch.fmod.html (we updated it recently in the OpInfo PR for fmod and remainder, to document the difference between torch.remainder and torch.fmod).
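For reference, the sign conventions at issue can be checked without PyTorch: Python's math.fmod mirrors C++'s std::fmod (truncated division, result takes the dividend's sign), while Python's % operator, which torch.remainder follows, uses floored division:

```python
import math

# math.fmod mirrors std::fmod: truncated division, so the result
# takes the sign of the first operand (the dividend).
assert math.fmod(-5.0, 3.0) == -2.0   # trunc(-5/3) = -1, so -5 - 3*(-1) = -2
assert math.fmod(5.0, -3.0) == 2.0    # trunc(5/-3) = -1, so 5 - (-3)*(-1) = 2

# Python's %, which torch.remainder follows: floored division, so the
# result takes the sign of the second operand (the divisor).
assert -5.0 % 3.0 == 1.0
assert 5.0 % -3.0 == -1.0
```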

Collaborator

You are correct: we would just add mod and, internally, mod could be an alias for remainder (or the other way around).

I think you're right that torch.fmod doesn't need any updates.

Contributor Author

Thanks @mruberry for the clarification.

See :func:`torch.fmod` for how division by zero is handled.
.. note::
This is different from `math.remainder <https://docs.python.org/dev/library/math.html#math.remainder>`_
Collaborator

Slight grammar tweak:

"This op, like NumPy's remainder, is equivalent to Python's modulus operation, and different from Python's math.remainder and C++'s std::remainder, which implement the IEEE remainder."

The references for both Python and C++ are great so let's keep them, too

The second paragraph is interesting but I think this first paragraph tells readers everything they need to know, so we can remove the second paragraph

Contributor Author

Sounds good, @mruberry - thanks! I've pushed the changes; I'll wait for the CI to build the docs and check the rendered link just to be sure.

Once I think it's good to go, I'll ping you here. Thank you!

P.S: I added NumPy's remainder reference as well, hope that sounds fine?

Collaborator

@mruberry mruberry left a comment

Nice work, @krshrimali, and thank you for the careful investigation

I have one minor grammar suggestion; would you just ping me when this is ready to merge?

@krshrimali
Contributor Author

krshrimali commented Jul 22, 2021

Hi @mruberry,

Here is the final rendered doc for torch.remainder: https://14902792-65600975-gh.circle-artifacts.com/0/docs/generated/torch.remainder.html. I've also updated the description of this PR, in case someone lands on this PR with the same question we had (why is torch.remainder inconsistent with std::remainder?).

I think it's ready to be merged, the failures don't look related to this PR.

Thanks!

@facebook-github-bot
Contributor

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@mruberry merged this pull request in cb47d1f.


Labels

cla signed, Merged, module: docs (Related to our documentation, both in docs/ and docblocks), module: testing (Issues related to the torch.testing module (not tests)), open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
