Fix operator // for integer inputs by lezcano · Pull Request #1120 · pytorch/torchdynamo · GitHub
This repository was archived by the owner on Aug 1, 2025. It is now read-only.

Conversation

@lezcano
Copy link
Contributor

@lezcano lezcano commented Sep 2, 2022

Before this was returning floats rather than ints.

[ghstack-poisoned]
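For context (plain Python, not the Dynamo-traced code): `/` on integers always produces a float, while `//` must stay integral. The bug fixed here was that the traced graph effectively used true division for `//`, so integer inputs came back as floats.

```python
# Python's integer division semantics, which the traced graph must match:
# / (true division) always yields a float; // (floor division) keeps ints.
a, b = 7, 2

true_div = a / b     # float result
floor_div = a // b   # int result

print(type(true_div).__name__, type(floor_div).__name__)
```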
@jansel
Copy link
Contributor

jansel commented Sep 2, 2022

Ick... why don't we just have truediv/floordiv prims that are explicit?

This behavior seems kind of broken.

cc @ezyang @ngimel @Chillee

@lezcano
Copy link
Contributor Author

lezcano commented Sep 2, 2022

What I've seen in PrimTorch is that all the other divs can be implemented (sometimes in fairly non-trivial ways) in terms of this one. The castings necessary to achieve this are quite a pain to follow, though; it took me a fair amount of time to work out what was dispatching to what in that jungle of decorators.

cc @mruberry

@ngimel
Copy link

ngimel commented Sep 2, 2022

Given that we already have codegen for floordiv, I suggest we just add an Inductor-specific decomposition instead:

# assuming register_decomposition from torch._decomp
@register_decomposition([aten.floor_divide.default])
def floordiv(a, b):
    return aten.div.Tensor_mode(a, b, rounding_mode="floor")

(It doesn't require signbit and fmod, and it doesn't pull in the buggy prim.div. We could still add signbit and fmod, because why not.)
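The `rounding_mode="floor"` semantics above match Python's `//`: the true quotient is rounded toward negative infinity, not toward zero. A pure-Python sketch of the two rounding modes (not the Inductor code itself):

```python
import math

def div_floor(a: int, b: int) -> int:
    """rounding_mode='floor' semantics: round the true quotient
    toward negative infinity. Matches Python's // operator."""
    return math.floor(a / b)

def div_trunc(a: int, b: int) -> int:
    """rounding_mode='trunc' semantics: round toward zero."""
    return math.trunc(a / b)

# The two only differ when the operands have opposite signs and the
# division is inexact:
print(div_floor(7, -2), div_trunc(7, -2))  # -4 -3
```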

@lezcano
Copy link
Contributor Author

lezcano commented Sep 5, 2022

Cool, done. I'm leaving the fix for prim.div for correctness, in case it pops up in any other decomposition.

@ngimel
Copy link

ngimel commented Sep 5, 2022

Can you add a test for // to test_torchinductor?
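Such a test essentially asserts that compiling a function using `//` preserves both the value and the integer dtype. The core property, sketched in plain Python (the actual test lives in test_torchinductor and operates on tensors):

```python
import math

def check_floordiv(a: int, b: int) -> None:
    got = a // b
    # Value: floor of the exact quotient.
    assert got == math.floor(a / b), (a, b, got)
    # Dtype: integer inputs must yield an integer, not a float.
    assert isinstance(got, int), (a, b, type(got))

# Exercise sign combinations, since floor vs. trunc only differ
# when operand signs differ and the division is inexact.
for a in (-7, -6, 0, 6, 7):
    for b in (-3, -2, 2, 3):
        check_floordiv(a, b)
print("ok")
```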

@lezcano
Copy link
Contributor Author

lezcano commented Sep 6, 2022

Added the tests. Feel free to merge if they look fine. I also caught an early return in one of the tests (see the last diff). I guess mypy is not run on the tests? @jansel

@ngimel ngimel merged commit a79c23d into gh/Lezcano/1/base Sep 6, 2022
@ngimel
Copy link

ngimel commented Sep 6, 2022

Hm, I'm not sure ghstack PRs merge correctly, can you redo it without ghstack?

@Chillee
Copy link
Contributor

Chillee commented Sep 6, 2022

@ngimel You have to use ghstack land <pr>

@lezcano
Copy link
Contributor Author

lezcano commented Sep 6, 2022

Do I need to do anything?

@desertfire
Copy link
Contributor

I ran ghstack land https://github.com/pytorch/torchdynamo/pull/1120, but it doesn't seem to land on master. @ezyang, anything I'm missing?

ezyang pushed a commit that referenced this pull request Sep 7, 2022
Before this was returning floats rather than ints.

ghstack-source-id: ce5a548
Pull Request resolved: #1120
@ezyang
Copy link
Contributor

ezyang commented Sep 7, 2022

looks like it's on master now to me

@desertfire
Copy link
Contributor


Mind landing https://github.com/pytorch/torchdynamo/pull/1121 as well?

SS-JIA added a commit to pytorch/pytorch that referenced this pull request Sep 26, 2023
## Context

Introduce a core decomposition for `aten.floor_divide` into other `aten` ops, and add it to the core ATen decomposition table.

This replaces the decomposition of `floor_divide` that was used by Inductor. I noticed there was a note on that decomposition

```
# TorchInductor-only decomposition. It should not be taken to core.
# See pytorch/torchdynamo#1120
```

but couldn't discern why this is the case. cc: @lezcano


cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 aakhundov

[ghstack-poisoned]
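As a rough illustration of why `signbit` and `fmod` come up in the earlier discussion: floor division can be decomposed into truncating division plus a sign correction. A pure-Python sketch (the real decomposition operates on tensors with aten ops):

```python
import math

def floor_divide_decomp(a: int, b: int) -> int:
    """Decompose floor division into truncating division plus a
    correction: subtract 1 when the division is inexact and the
    operands have opposite signs. These are the roles an fmod-like
    remainder and a signbit-like check play at the tensor level."""
    q = math.trunc(a / b)               # truncating division
    r = a - q * b                       # trunc-style remainder (fmod-like)
    if r != 0 and (a < 0) != (b < 0):   # signbit-like opposite-sign check
        q -= 1
    return q

print(floor_divide_decomp(7, -2))  # -4, same as 7 // -2
```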
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Sep 26, 2023
Pull Request resolved: #110046
Approved by: https://github.com/peterbell10