Implement split_with_sizes backward for NT by jbschlosser · Pull Request #110647 · pytorch/pytorch · GitHub

Conversation

@jbschlosser
Contributor

@jbschlosser jbschlosser commented Oct 5, 2023

Stack from ghstack (oldest at bottom):

Needed internally. Note that `split_with_sizes()` for NT is currently supported only on `dim=-1`.
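For reference, a minimal usage sketch (not taken from this PR or its tests) of what the new backward enables, assuming the strided nested tensor constructor, `split_with_sizes()` forward support on `dim=-1`, and `unbind()` behave as documented:

```python
import torch

# Minimal sketch (not from this PR): exercise split_with_sizes() on a nested
# tensor and backprop through the split. Assumes forward support for dim=-1,
# which this PR's backward builds on.
nt = torch.nested.nested_tensor(
    [torch.randn(2, 6), torch.randn(3, 6)], requires_grad=True
)

# Split the last (regular) dim of size 6 into pieces of width 2 and 4.
left, right = nt.split_with_sizes([2, 4], dim=-1)

# Build a scalar loss from the pieces; the backward of split_with_sizes()
# concatenates the incoming gradients back along dim=-1.
loss = sum(t.sum() for piece in (left, right) for t in piece.unbind())
loss.backward()

# nt.grad is a nested tensor of ones with the same nested structure as nt.
print(nt.grad)
```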

@pytorch-bot

pytorch-bot bot commented Oct 5, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110647

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit b20d4cb with merge base a3e5ec4:

UNSTABLE - The following jobs failed but were likely due to flakiness present on trunk and have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@jbschlosser jbschlosser requested a review from cpuhrsch October 5, 2023 21:02
Contributor

@cpuhrsch cpuhrsch left a comment


Stamp

@jbschlosser jbschlosser added the release notes: nested tensor and topic: improvements labels Oct 5, 2023
@jbschlosser
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label Oct 5, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

const Tensor& nt_sizes,
const at::TensorOptions& options) {
// add 1 to account for batch dim
dim = at::maybe_wrap_dim(dim, static_cast<int64_t>(nt_sizes.size(1)) + 1);
Contributor

@soulitzer soulitzer Oct 5, 2023


nit: maybe we don't need (some of) the static casts?
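For context on the snippet above: `at::maybe_wrap_dim` maps a negative dim into `[0, ndim)`, and the `+ 1` accounts for the batch dim, which is not stored in `nt_sizes`. A standalone sketch of the same arithmetic (plain Python with a hypothetical `wrap_dim` helper, not a PyTorch API):

```python
# Standalone illustration (hypothetical helper, not a PyTorch API) of the
# wrapping done by at::maybe_wrap_dim in the snippet above.
def wrap_dim(dim: int, ndim: int) -> int:
    # Map a negative dim such as -1 into the range [0, ndim).
    if not -ndim <= dim < ndim:
        raise IndexError(f"dim {dim} out of range for ndim {ndim}")
    return dim + ndim if dim < 0 else dim

# nt_sizes has shape (batch, regular_ndim); add 1 for the implicit batch dim.
regular_ndim = 2                  # e.g. components of shape (len_i, 6)
ndim = regular_ndim + 1
assert wrap_dim(-1, ndim) == 2    # dim=-1 resolves to the last regular dim
```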

@pytorchmergebot
Collaborator

The merge job was canceled. If you believe this is a mistake, then you can re-trigger it through pytorch-bot.

@kit1980
Contributor

kit1980 commented Oct 5, 2023

I've cancelled the merge as #110404 is likely to be reverted.

Needed internally. Note that `split_with_sizes()` for NT is currently supported only on `dim=-1`.

[ghstack-poisoned]
jbschlosser added a commit that referenced this pull request Oct 6, 2023
ghstack-source-id: 17a6669
Pull Request resolved: #110647
@jbschlosser
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@facebook-github-bot facebook-github-bot deleted the gh/jbschlosser/94/head branch October 10, 2023 14:25

Labels

ciflow/trunk, Merged, release notes: nested tensor, topic: improvements

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants