pin_memory support for NT by jbschlosser · Pull Request #110404 · pytorch/pytorch · GitHub

Conversation

@jbschlosser
Contributor

@jbschlosser commented Oct 2, 2023

[ghstack-poisoned]
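For context on the feature named in the title, here is a minimal, hypothetical usage sketch of pinning a CPU nested tensor and then copying it to the GPU asynchronously; the construction and flow below are illustrative assumptions, not code from this PR.

```python
import torch

# Illustrative sketch only (not from this PR): pin a CPU nested tensor so the
# host-to-device copy can be issued asynchronously from page-locked memory.
nt = torch.nested.nested_tensor([torch.randn(3, 5), torch.randn(4, 5)])

if torch.cuda.is_available():
    pinned = nt.pin_memory()                        # what this PR adds for nested tensors
    nt_cuda = pinned.to("cuda", non_blocking=True)  # async copy from pinned host memory
```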
@pytorch-bot

pytorch-bot bot commented Oct 2, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110404

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 91ee32c with merge base 46a5558:

UNSTABLE - The following jobs failed, but the failures were likely due to flakiness present on trunk and have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

jbschlosser added a commit that referenced this pull request Oct 2, 2023
ghstack-source-id: cb9b19e
Pull Request resolved: #110404
@jbschlosser requested a review from cpuhrsch on October 2, 2023 at 19:27
@pytorch-bot bot added the release notes: dataloader label on Oct 4, 2023
jbschlosser added a commit that referenced this pull request Oct 4, 2023
ghstack-source-id: 1700b82
Pull Request resolved: #110404
Collaborator

@albanD left a comment


Okok

@jbschlosser added the topic: improvements and release notes: nested tensor labels and removed the release notes: dataloader label on Oct 4, 2023
@jbschlosser
Contributor Author

@pytorchbot merge

@pytorch-bot bot added the ciflow/trunk label on Oct 4, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@albanD
Collaborator

albanD commented Oct 4, 2023

Don't forget the constexpr change ;) (can be a follow up if this already landed)

@jbschlosser
Contributor Author

Don't forget the constexpr change ;)

crap I made it locally and didn't push yet :p

@jbschlosser
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

The merge job was canceled. If you believe this is a mistake, you can re-trigger it through pytorch-bot.

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@jbschlosser force-pushed the gh/jbschlosser/92/head branch from 6328316 to 88eace9 on October 4, 2023 at 21:43
jbschlosser added a commit that referenced this pull request Oct 4, 2023
ghstack-source-id: fcc16e3
Pull Request resolved: #110404
@pytorchmergebot
Collaborator

Merge failed

Reason: New commits were pushed while merging. Please rerun the merge command.

Details for Dev Infra team: raised by workflow job.

@jbschlosser
Contributor Author

@pytorchbot merge -f "ignore spurious failures"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f only as a last resort; instead, consider -i/--ignore-current to continue the merge while ignoring current failures. This allows currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@kit1980
Contributor

kit1980 commented Oct 5, 2023

@albanD @jbschlosser Looks like this PR or the previous one in the stack broke slow tests: https://github.com/pytorch/pytorch/actions/runs/6424291531/job/17445721752

RuntimeError: CUDA driver API confirmed a leak in main.TestDataLoaderDeviceTypeCUDA.test_nested_tensor_multiprocessing_context_forkserver_cuda! Caching allocator allocated memory was 5120 and is now reported as 10240 on device 0. CUDA driver allocated memory was 340459520 and is now 342556672.

I'm verifying which PR exactly, but likely we'll need to revert this.
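For readers unfamiliar with that test name, a rough, hypothetical sketch of the kind of setup it suggests follows; the dataset, sizes, and flow are assumptions for illustration, not the actual test code. The point is that nested-tensor samples produced by forkserver workers pass through the DataLoader's pin-memory path, which is where a leak in pinned-buffer handling would surface.

```python
import torch
from torch.utils.data import DataLoader, Dataset


# Illustrative only: variable-length samples returned as nested tensors.
class JaggedDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, i):
        return torch.nested.nested_tensor(
            [torch.randn(i + 1, 4), torch.randn(i + 2, 4)]
        )


if __name__ == "__main__":
    loader = DataLoader(
        JaggedDataset(),
        batch_size=None,                       # yield samples as-is, no collation
        num_workers=2,
        multiprocessing_context="forkserver",  # the context named in the failing test
        pin_memory=True,                       # exercises nested-tensor pin_memory
    )
    for nt in loader:
        if torch.cuda.is_available():
            nt_cuda = nt.to("cuda", non_blocking=True)  # copy from pinned host memory
```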

@kit1980
Contributor

kit1980 commented Oct 6, 2023

@pytorchbot revert -m "Previous PR in the stack caused CUDA memory leaks" -c nosignal

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

@pytorchmergebot
Collaborator

@jbschlosser your PR has been successfully reverted.

pytorchmergebot added a commit that referenced this pull request Oct 6, 2023
This reverts commit 3597325.

Reverted #110404 on behalf of https://github.com/kit1980 due to "Previous PR in the stack caused CUDA memory leaks" (see comment on #110404)
@jbschlosser reopened this on Oct 6, 2023
jbschlosser added a commit that referenced this pull request Oct 10, 2023
ghstack-source-id: 83880e7
Pull Request resolved: #110404
@jbschlosser
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.


Labels

ciflow/trunk (Trigger trunk jobs on your pull request)
Merged
release notes: nested tensor (Changes that have a direct impact on nested tensors)
Reverted
topic: improvements (topic category)
