[dynamic shapes] prims_common non_overlapping_and_dense #160462
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160462. Note: links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit 5670291 with merge base 16ada80. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
This pull request was exported from Phabricator. Differential Revision: D80120333
torch/_prims_common/__init__.py (outdated diff)

            continue
-        if guard_size_oblivious(stride != expected_stride):
+        if not stride_eq(stride, expected_stride):
Instead of defining this stride_eq function, would it make sense to put this loop in a utility function that is also called from contiguous, and use the same approach we use there? Which is:
# helpers from torch / torch.fx.experimental.symbolic_shapes, as used elsewhere in _prims_common
from torch import sym_max
from torch.fx.experimental.symbolic_shapes import guard_or_false, guard_or_true, is_nested_int

def check_contiguity(sizes, strides):
    expected_stride = 1
    expected_stride_max = 1
    for x, y in reversed(tuple(zip(sizes, strides))):
        # Skips checking strides when a dimension has length 1.
        if guard_or_false(x == 1):
            continue
        if guard_or_true(y != expected_stride) and guard_or_true(
            y != expected_stride_max
        ):
            return False
        # We symbolically check both paths to maximize the cases where this function
        # returns True. This is because make_contiguous_strides_for adds the max
        # symbolically, and in some other situations the max might not be there.
        # And we want to ensure we return True in both cases.
        expected_stride_max *= x if is_nested_int(x) else sym_max(x, 1)  # type: ignore[assignment]
        expected_stride *= x
    return True
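For a sense of the semantics, here is a minimal usage sketch with an unbacked size (the shape_env/u0 names are illustrative and not part of the PR): guard_or_false treats the undecidable u0 == 1 as False, and the stride checks stay decidable, so no data-dependent error is raised.

from torch.fx.experimental.symbolic_shapes import ShapeEnv

shape_env = ShapeEnv()
u0 = shape_env.create_unbacked_symint()  # size whose value is unknown at trace time
# Contiguous layout for sizes (u0, 4) has strides (4, 1); the helper should
# return True here without guarding on u0.
print(check_contiguity((u0, 4), (4, 1)))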
torch/_prims_common/__init__.py (outdated diff)

-def is_non_overlapping_and_dense(a: Tensor) -> bool:
+def _is_non_overlapping_and_dense(sizes, strides) -> bool:
Call the function _is_non_overlapping_and_dense_or_false, and document it along the lines of:
"""May return False instead of throwing a DDE at a data dependency when the result is unknown."""
+    def test_prims_non_overlapping_and_dense(self):
+        shape_env = ShapeEnv()
+        cf = torch._prims_common.is_non_overlapping_and_dense
is_non_overlapping_and_dense -> is_non_overlapping_and_dense_or_false?
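As a rough illustration of what such a test could exercise (a hedged sketch: the _is_non_overlapping_and_dense name follows the outdated diff above and the final name was still under discussion; the sizes/strides values are made up):

import torch
from torch.fx.experimental.symbolic_shapes import ShapeEnv

shape_env = ShapeEnv()
u0 = shape_env.create_unbacked_symint()  # unbacked size
# The sizes/strides helper should answer without raising a data-dependent
# error; when the answer cannot be proven it may conservatively return False.
result = torch._prims_common._is_non_overlapping_and_dense((u0, 4), (4, 1))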
    for length, stride in lengths_and_strides:
        if guard_or_false(length == 1):
            continue

def is_non_overlapping_and_dense(a: Tensor) -> bool:
Is this for BC compatibility? In my mind, calling is_non_overlapping_and_dense should throw a DDE, and calling _is_non_overlapping_and_dense_or_false should not. Now this is sort of an alias for _is_non_overlapping_and_dense_or_false, meaning the unbacked semantics are implicit for users calling is_non_overlapping_and_dense. Maybe that's OK.
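For reference, the backward-compatible alias being discussed would look roughly like this (a sketch assuming the sizes/strides helper from the diff above; not necessarily the PR's exact wrapper):

import torch._prims_common as prims_common
from torch import Tensor

def is_non_overlapping_and_dense(a: Tensor) -> bool:
    # Keeps the old Tensor-based entry point and delegates to the new
    # sizes/strides helper, so unbacked shapes resolve to False rather
    # than raising a data-dependent error.
    return prims_common._is_non_overlapping_and_dense(a.shape, a.stride())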
    # for backed symbols, this is practically a < operation
    # for unbacked, we return True if < is statically known,
    # then try to answer this symbolically, with stride ordering semantics
    # (e.g. u0 < u0 is False, u0 < u1 is False with no axioms, u0 < 2 * u0 is True)
Please add a comment explaining why it's OK if, in the worst case for unbacked symbols, we pick a wrong order.
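A simplified sketch of the ordering semantics those comments describe (illustrative only: the stride_lt name and the use of statically_known_true are assumptions, and the symbolic stride-ordering fallback mentioned in the comment is omitted):

from torch.fx.experimental.symbolic_shapes import statically_known_true

def stride_lt(x, y):
    # Backed symbols: effectively evaluates x < y.
    # Unbacked symbols: only returns True when x < y can be proven without
    # guarding, so in the worst case an arbitrary (but consistent) order is
    # picked; the dense/non-overlapping check must tolerate that.
    return statically_known_true(x < y)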
torch/_prims_common/__init__.py (outdated diff)

-        def __ge__(self, other):
-            return guard_size_oblivious(self.stride >= other.stride)
+    # verify that sorted order was imposed (checks the "non-overlapping" condition)
+    for i, j in zip(lengths_and_strides[:-1], lengths_and_strides[1:]):
why do you need this?
Does anything fail if you remove this? I don't see why we would need it.
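For concreteness, a hypothetical illustration of what the questioned pairwise check verifies (the strides_sorted name and the (length, stride) tuple layout are assumptions, not the PR's code):

from torch.fx.experimental.symbolic_shapes import guard_or_true

def strides_sorted(lengths_and_strides):
    # After sorting by stride, every adjacent (length, stride) pair should
    # have non-decreasing strides; guard_or_true keeps this DDE-free by
    # treating an unprovable comparison as a possible violation, i.e. an
    # overall False result.
    return not any(
        guard_or_true(prev[1] > cur[1])
        for prev, cur in zip(lengths_and_strides[:-1], lengths_and_strides[1:])
    )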
Summary: Pull Request resolved: #160462. Test Plan: test_dynamic_shapes. Reviewed By: laithsakka. Differential Revision: D80120333
@pytorchbot merge -i (Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)
Merge started. Your change will be merged while ignoring the following check: pull / linux-jammy-py3.13-clang12 / test (default, 5, 5, linux.4xlarge). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Differential Revision: D80120333. Pull Request resolved: pytorch#160462. Approved by: https://github.com/laithsakka
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv