[c10d] Fix data corruption bug after CUDAEventCache is enabled by fduwjj · Pull Request #138040 · pytorch/pytorch · GitHub

Conversation

@fduwjj
Contributor

@fduwjj fduwjj commented Oct 16, 2024

Stack from ghstack (oldest at bottom):

Here is why enabling `CUDAEventCache` causes crashes and data corruption:

  1. The deleter is doing its job and appends the returned event back onto the stack.
  2. In `create`, instead of getting a reference, we get a copy of `eventsArray_[i]` (which is a `std::vector`). This is bad because we never actually remove the element from the stack: we think we popped the last one, but it is still there, so we end up reusing the same event again and again. What's worse, since we keep adding new events to the stack, the stack eventually blows up and we crash.

The fix is easy: just take a reference (see the sketch below). A local torchtitan run now shows a non-NaN loss.

We also want to use a deque instead of a stack and refactor the code a bit to make it more readable (in a separate PR).
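A minimal sketch of the buggy pattern and the fix (simplified, hypothetical types; `Event` stands in for `at::cuda::CUDAEvent` and the two slots mimic the timing / non-timing buckets selected by `i` in `eventsArray_[i]` — this is not the actual ProcessGroupNCCL code):

```cpp
#include <array>
#include <memory>
#include <vector>

struct Event {};

struct EventCacheSketch {
  std::array<std::vector<std::shared_ptr<Event>>, 2> eventsArray_;

  std::shared_ptr<Event> create(bool timing) {
    const size_t i = timing ? 1 : 0;
    // BUG: `auto events = eventsArray_[i];` copies the vector, so the
    // pop_back() below only shrinks the copy; the member keeps the event
    // and hands it out again and again while new events pile up.
    // FIX: bind a reference so the pop mutates the member itself.
    auto& events = eventsArray_[i]; // was: auto events = eventsArray_[i];
    if (!events.empty()) {
      auto event = events.back();
      events.pop_back();
      return event;
    }
    return std::make_shared<Event>();
  }
};
```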

cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o

@pytorch-bot

pytorch-bot bot commented Oct 16, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/138040

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 31b6143 with merge base a77bb85 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added oncall: distributed Add this issue/PR to distributed oncall triage queue release notes: distributed (c10d) release notes category labels Oct 16, 2024
Contributor

@kwen2501 kwen2501 left a comment


Good finding!
I guess in a follow-up PR we can turn on the cache?

@kwen2501 kwen2501 changed the title [c10d] Fix the bug inside CUDAEventCache::create [c10d] Fix data corruption when CUDAEventCache is turned on Oct 16, 2024
@kwen2501 kwen2501 changed the title [c10d] Fix data corruption when CUDAEventCache is turned on [c10d] Fix data corruption when CUDAEventCache is enabled Oct 16, 2024
@fduwjj fduwjj changed the title [c10d] Fix data corruption when CUDAEventCache is enabled [c10d] Fix data corruption bug after CUDAEventCache is enabled Oct 16, 2024
@fduwjj fduwjj added the ciflow/trunk Trigger trunk jobs on your pull request label Oct 16, 2024
fduwjj added a commit that referenced this pull request Oct 16, 2024
ghstack-source-id: 5eb67bb
Pull Request resolved: #138040
@fduwjj
Contributor Author

fduwjj commented Oct 16, 2024

I separated the fix from the refactor so that it is easier for reviewers.

@fduwjj
Contributor Author

fduwjj commented Oct 16, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@wconstab
Contributor

You should probably add a unit test for the cache; that would have caught this.

@fduwjj
Contributor Author

fduwjj commented Oct 16, 2024

@wconstab that is a good point. Do you have any suggestions on how I can capture this in the unit test added in #137798? I tried to repro the error in a unit test, but it still passes. I even tried out the composability test. My guess is that this bug needs a large scale to hit?

@wconstab
Contributor

What I meant by unit test is literally a unit test of the cache. Put 3 events in, then get them back out and make sure you get the expected events and the cache is empty at the end.

@fduwjj
Contributor Author

fduwjj commented Oct 16, 2024

@wconstab Ahh, I see. Sure, let me write one.
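For reference, a minimal sketch of the kind of test @wconstab describes, written against a toy cache (hypothetical names; this is not the actual CUDAEventCache API, nor the test that was eventually added in #138059):

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Event {};

// Toy stand-in for the event cache: events handed back by the deleter are
// pushed onto the vector, and create() pops one off (or allocates anew).
struct ToyEventCache {
  std::vector<std::shared_ptr<Event>> cache_;

  void giveBack(std::shared_ptr<Event> e) { cache_.push_back(std::move(e)); }

  std::shared_ptr<Event> create() {
    auto& events = cache_; // reference, not a copy (the fix in this PR)
    if (!events.empty()) {
      auto event = events.back();
      events.pop_back();
      return event;
    }
    return std::make_shared<Event>();
  }
};

int main() {
  ToyEventCache cache;
  auto e1 = std::make_shared<Event>();
  auto e2 = std::make_shared<Event>();
  auto e3 = std::make_shared<Event>();
  cache.giveBack(e1);
  cache.giveBack(e2);
  cache.giveBack(e3);

  // Put 3 events in, get them back out (LIFO here), and make sure the
  // cache is empty at the end; a fourth create() must allocate fresh.
  assert(cache.create() == e3);
  assert(cache.create() == e2);
  assert(cache.create() == e1);
  assert(cache.cache_.empty());
  assert(cache.create() != e1);
  return 0;
}
```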

pytorchmergebot pushed a commit that referenced this pull request Oct 16, 2024
…138048)

We used a LIFO stack to store the CUDAEvents in the cache. Somehow we like a FIFO deque better, so aside from improving the readability of the code, we now use a deque instead. As @wconstab pointed out, both methods are equally correct because the moment we put the event into the stack/deque, the event is already ready for reuse; this change is mostly a preference, not a fix.

Pull Request resolved: #138048
Approved by: https://github.com/kwen2501
ghstack dependencies: #138040
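As a rough illustration of the FIFO behavior described above (assumed shape only, not the real ProcessGroupNCCL code): returned events go to the back of the deque and create() reuses the oldest one from the front.

```cpp
#include <deque>
#include <memory>

struct Event {};

// Sketch of the FIFO variant: the deleter pushes to the back, create() pops
// from the front, so the event that has been sitting longest is reused first.
struct DequeEventCache {
  std::deque<std::shared_ptr<Event>> events_;

  void giveBack(std::shared_ptr<Event> e) { events_.push_back(std::move(e)); }

  std::shared_ptr<Event> create() {
    if (!events_.empty()) {
      auto event = events_.front();
      events_.pop_front();
      return event;
    }
    return std::make_shared<Event>();
  }
};
```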
pytorchmergebot pushed a commit that referenced this pull request Oct 16, 2024
…138059)

We created a simple test to validate that the cache is indeed working and what happens when the cache is used up. I reverted the fix in #138040 and the test indeed failed.

Pull Request resolved: #138059
Approved by: https://github.com/kwen2501
ghstack dependencies: #138040, #138048
pytorchmergebot pushed a commit that referenced this pull request Oct 16, 2024
Address @kwen2501's feedback in #138048 and add more inline comments to the code.

Pull Request resolved: #138079
Approved by: https://github.com/kwen2501
ghstack dependencies: #138040, #138048, #138059
@github-actions github-actions bot deleted the gh/fduwjj/145/head branch November 16, 2024 02:07
