[dynamic shapes] prims_common non_overlapping_and_dense by pianpwk · Pull Request #160462 · pytorch/pytorch · GitHub

Conversation


@pianpwk pianpwk commented Aug 12, 2025

Differential Revision: D80120333

cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv


pytorch-bot bot commented Aug 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160462

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 5670291 with merge base 16ada80:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.


This pull request was exported from Phabricator. Differential Revision: D80120333

@pianpwk pianpwk marked this pull request as draft August 12, 2025 21:10
facebook-github-bot pushed a commit that referenced this pull request Aug 13, 2025
Summary:



Rollback Plan:

Differential Revision: D80120333


continue

if guard_size_oblivious(stride != expected_stride):
if not stride_eq(stride, expected_stride):
@laithsakka laithsakka Aug 13, 2025


Instead of defining this function stride_eq, would it make sense to move this loop into a utility function that is also called from contiguous, and use the same approach we use there? Which is:

def check_contiguity(sizes, strides):
    expected_stride = 1
    expected_stride_max = 1

    for x, y in reversed(tuple(zip(sizes, strides))):
        # Skips checking strides when a dimension has length 1.
        if guard_or_false(x == 1):
            continue

        if guard_or_true(y != expected_stride) and guard_or_true(
            y != expected_stride_max
        ):
            return False

        #  We symbolically check both paths to maximize the cases where this function
        #  returns true. This is because make_contiguous_strides_for adds the max
        #  symbolically, and in some other situations the max might not be there.
        #  And we want to ensure we return true in both cases.
        expected_stride_max *= x if is_nested_int(x) else sym_max(x, 1)  # type:ignore[assignment]

        expected_stride *= x

    return True
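For intuition, here is a hedged, self-contained sketch of the same two-accumulator check with backed (concrete) integer sizes and strides, where guard_or_false / guard_or_true reduce to plain comparisons. The name check_contiguity_concrete is made up for illustration; this is not a PyTorch API:

```python
def check_contiguity_concrete(sizes, strides):
    """Return True if (sizes, strides) describe a C-contiguous layout.

    Mirrors the reviewer's sketch with two accumulators: the plain
    running product of sizes, and the product clamped with max(x, 1).
    With concrete ints the two accumulators coincide; they only
    diverge symbolically.
    """
    expected_stride = 1
    expected_stride_max = 1
    for x, y in reversed(tuple(zip(sizes, strides))):
        if x == 1:
            continue  # strides of length-1 dims are irrelevant
        if y != expected_stride and y != expected_stride_max:
            return False
        expected_stride_max *= max(x, 1)
        expected_stride *= x
    return True
```

With concrete inputs, a (2, 3, 4) tensor with strides (12, 4, 1) passes, while strides (12, 3, 1) fail the check at the middle dimension.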



def is_non_overlapping_and_dense(a: Tensor) -> bool:
def _is_non_overlapping_and_dense(sizes, strides) -> bool:
@laithsakka laithsakka Aug 13, 2025


Call the function _is_non_overlapping_and_dense_or_false, with a docstring like:
"""May return False instead of throwing a DDE at a data dependency when the result is unknown."""


facebook-github-bot pushed a commit that referenced this pull request Aug 15, 2025
Summary: Pull Request resolved: #160462

Test Plan:
test_dynamic_shapes

Rollback Plan:

Differential Revision: D80120333

@pianpwk pianpwk changed the title [WIP][dynamic shapes] prims_common non_overlapping_and_dense [dynamic shapes] prims_common non_overlapping_and_dense Aug 15, 2025
@pianpwk pianpwk requested a review from laithsakka August 15, 2025 18:41
@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 15, 2025
@pianpwk pianpwk marked this pull request as ready for review August 15, 2025 18:41

def test_prims_non_overlapping_and_dense(self):
    shape_env = ShapeEnv()
    cf = torch._prims_common.is_non_overlapping_and_dense

is_non_overlapping_and_dense -> is_non_overlapping_and_dense_or_false?

for length, stride in lengths_and_strides:
    if guard_or_false(length == 1):
        continue
def is_non_overlapping_and_dense(a: Tensor) -> bool:

Is this for BC compatibility? In my mind, calling is_non_overlapping_and_dense should throw a DDE, while calling _is_non_overlapping_and_dense_or_false should not. Now this is sort of an alias for _is_non_overlapping_and_dense_or_false, meaning unbacked semantics are implicit for users calling is_non_overlapping_and_dense.

Maybe it's OK.
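The naming convention under discussion can be modeled in plain Python. This is a sketch with made-up names, not PyTorch's actual classes: the strict variant raises on an undecidable (unbacked) query, while the _or_false variant converts that failure into a conservative False:

```python
class DataDependentError(Exception):
    """Stand-in for the data-dependent guard error (hypothetical)."""


def is_dense_strict(result):
    # Strict variant: raises when the answer cannot be decided
    # (undecidability is modeled here by result being None).
    if result is None:
        raise DataDependentError("cannot decide for unbacked symbols")
    return result


def is_dense_or_false(result):
    # "_or_false" variant: an undecidable query collapses to False,
    # so callers always get an answer, at the cost of false negatives.
    try:
        return is_dense_strict(result)
    except DataDependentError:
        return False
```

Under this model, aliasing the public name to the _or_false variant silently changes "throws on unknown" into "returns False on unknown" for existing callers, which is exactly the BC question raised above.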

# for backed symbols, this is practically a < operation
# for unbacked, we return True if < is statically known,
# then try to answer this symbolically, with stride ordering semantics
# (e.g. u0 < u0 is False, u0 < u1 is False with no axioms, u0 < 2 * u0 is True)

Please add a comment explaining why it's OK if, in the worst case for unbacked symbols, we pick a wrong order.

def __ge__(self, other):
    return guard_size_oblivious(self.stride >= other.stride)
# verify that sorted order was imposed (checks the "non-overlapping condition")
for i, j in zip(lengths_and_strides[:-1], lengths_and_strides[1:]):

why do you need this?

def __ge__(self, other):
    return guard_size_oblivious(self.stride >= other.stride)
# verify that sorted order was imposed (checks the "non-overlapping condition")
for i, j in zip(lengths_and_strides[:-1], lengths_and_strides[1:]):

does anything fail if you remove this? I do not see why we would need it?
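For reference, the check these two comments discuss can be sketched for concrete values using the classic algorithm: drop length-1 dims, sort the rest by stride, and require each stride to equal the product of the sizes of all smaller-stride dims. This is a simplified sketch under that assumption, not the PR's implementation:

```python
def is_non_overlapping_and_dense_concrete(sizes, strides):
    """Classic concrete-value check: sort dims by stride, then verify
    that each stride equals the accumulated extent of the dims before it.
    """
    # Length-1 dims can have arbitrary strides; ignore them.
    dims = [(s, st) for s, st in zip(sizes, strides) if s != 1]
    # Impose stride order; the subsequent loop checks it was valid.
    dims.sort(key=lambda p: p[1])
    expected = 1
    for size, stride in dims:
        if stride != expected:
            return False
        expected *= size
    return True
```

Note that a transposed layout like sizes (3, 2) with strides (1, 3) is dense and non-overlapping without being contiguous, which is why the sort step exists at all.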

facebook-github-bot pushed a commit that referenced this pull request Aug 15, 2025
Summary: Pull Request resolved: #160462

Test Plan:
test_dynamic_shapes

Rollback Plan:

Reviewed By: laithsakka

Differential Revision: D80120333

pianpwk added a commit that referenced this pull request Aug 15, 2025
facebook-github-bot pushed a commit that referenced this pull request Aug 15, 2025

facebook-github-bot pushed a commit that referenced this pull request Aug 18, 2025


@facebook-github-bot

@pytorchbot merge -i

(Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)

@pytorchmergebot

Merge started

Your change will be merged while ignoring the following check: pull / linux-jammy-py3.13-clang12 / test (default, 5, 5, linux.4xlarge)

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


can-gaa-hou pushed a commit to can-gaa-hou/pytorch that referenced this pull request Aug 22, 2025
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025
@github-actions github-actions bot deleted the export-D80120333 branch September 18, 2025 02:08

Labels

ciflow/inductor ciflow/trunk Trigger trunk jobs on your pull request fb-exported fx Merged release notes: fx release notes category
