[c10d] Fix data corruption bug after CUDAEventCache is enabled #138040
Conversation
[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/138040
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit 31b6143 with merge base a77bb85.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Good finding!
I guess in a follow-up PR we can turn on the cache?
Here is why using `CUDAEventCache` causes crashes and data corruption.

1. The deleter is doing its job and appends the event back onto the stack.
2. In `create`, instead of taking a reference, we take a copy of `eventsArray_[i]` (which is a `std::vector`). This is bad because the element is never actually removed from the stack: we think we popped the last one, but it is still there, so we keep reusing the same event again and again. What's worse, since we keep adding new events to the stack, it eventually grows without bound and a crash happens.

The fix is easy: just take a reference. A local torchtitan run now sees a non-NaN loss.

Also, we want to use a deque instead of a stack and refactor the code a bit to make it more readable.

cc XilunWu H-Huang awgu kwen2501 wanchaol fegin wz337 wconstab d4l3k c-p-i-o [ghstack-poisoned]
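For illustration, here is a minimal C++ sketch of the copy-vs-reference mistake described above. The names (`Event`, `EventCache`, `eventsArray_`, `create`) are simplified stand-ins, not the actual `CUDAEventCache` code in ProcessGroupNCCL.

```cpp
#include <memory>
#include <vector>

struct Event {}; // stand-in for at::cuda::CUDAEvent

class EventCache {
 public:
  std::shared_ptr<Event> create() {
    // BUG: `auto events = eventsArray_;` copies the vector, so the
    // pop_back() below only shrinks the copy. The cached pointer is never
    // removed from the real cache and is handed out again on every call,
    // while the deleter keeps appending, growing the vector without bound.
    auto& events = eventsArray_; // FIX: take a reference instead of a copy
    Event* event = nullptr;
    if (!events.empty()) {
      event = events.back();
      events.pop_back();
    } else {
      event = new Event();
    }
    // The custom deleter returns the event to the cache instead of freeing
    // it (this sketch ignores cleanup of cached events at destruction).
    return std::shared_ptr<Event>(
        event, [this](Event* e) { eventsArray_.push_back(e); });
  }

 private:
  std::vector<Event*> eventsArray_;
};
```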
…bled" Here is why we see using `CUDAEventCache` cause crash and data corruption. 1. The deleter is doing its job and append the job the stack. 2. In create, instead of getting a reference, we are getting a copy of eventsArray_[i] (which is a std::vector). This is bad because we didn't really remove the element from the stack. While we thought we already pop up the last one from the stack, but it turns out the last one is still in the stack; we end up reusing the same event again and again. What's worse, since we keep adding new events to the stack, this will eventually explode the stack and a crash happens. Fix is easy, just get a reference. Local torchtitan run see a non-Nan loss. Also we want to use a deque instead of a stack, and refactor the code a bit to make it more readable. (in a separate PR) cc XilunWu H-Huang awgu kwen2501 wanchaol fegin wz337 wconstab d4l3k c-p-i-o [ghstack-poisoned]
|
Separated the fix and the refactor so that it is easier for reviewers.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
You should probably add a unit test for the cache; that would have caught this.
What I meant by unit test is literally a unit test of the cache. Put 3 events in, then get them back out, and make sure you get the expected events and the cache is empty at the end.
@wconstab Ahh, I see. Sure, let me write one.
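For reference, here is a rough sketch of the kind of test being suggested, written against the hypothetical `EventCache` from the sketch above; it is not the actual test that was added, nor the real `CUDAEventCache` API.

```cpp
#include <cassert>
#include <memory>
#include <set>
#include <vector>

void testEventCacheReuse() {
  EventCache cache;

  // Hand out three events, remember their addresses, then drop the handles
  // so the deleter returns all three to the cache.
  std::set<Event*> original;
  {
    std::vector<std::shared_ptr<Event>> handles;
    for (int i = 0; i < 3; ++i) {
      handles.push_back(cache.create());
      original.insert(handles.back().get());
    }
  } // all three events are now back in the cache

  // Taking three events back out must yield exactly the cached ones, each
  // exactly once (with the copy bug, the same event would repeat). Keeping
  // the handles alive during the loop prevents early recycling.
  std::vector<std::shared_ptr<Event>> reused;
  for (int i = 0; i < 3; ++i) {
    reused.push_back(cache.create());
    assert(original.count(reused.back().get()) == 1);
    original.erase(reused.back().get());
  }
  // Every cached event was handed out exactly once, i.e. the cache is empty.
  assert(original.empty());
}
```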
…138048) We used a LIFO stack to store the CUDAEvents in the cache. We like a FIFO deque better, so aside from improving the readability of the code, we now use a deque instead. As @wconstab pointed out, both approaches are equally correct: by the time we put an event into the stack/deque it is already ready for reuse, so this change is mostly a matter of preference and is not trying to fix anything. Pull Request resolved: #138048 Approved by: https://github.com/kwen2501 ghstack dependencies: #138040
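As a side note, here is a small sketch of the LIFO-vs-FIFO distinction discussed here, again using the illustrative types from the sketches above rather than the real code; either retrieval order is correct, since an event is only placed back in the container once it is ready for reuse.

```cpp
#include <deque>
#include <vector>

// LIFO: a vector used as a stack hands back the most recently returned event.
// Callers are assumed to have checked for emptiness first, as in create().
Event* takeLifo(std::vector<Event*>& events) {
  Event* e = events.back();
  events.pop_back();
  return e;
}

// FIFO: a deque hands back the event that has been idle the longest.
Event* takeFifo(std::deque<Event*>& events) {
  Event* e = events.front();
  events.pop_front();
  return e;
}
```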
Stack from ghstack (oldest at bottom):
Here is why using `CUDAEventCache` causes crashes and data corruption: in `create`, we took a copy of `eventsArray_[i]` instead of a reference, so cached events were never actually removed and were reused again and again. The fix is easy: just take a reference. A local torchtitan run now sees a non-NaN loss.
Also, we want to use a deque instead of a stack and refactor the code a bit to make it more readable. (in a separate PR)
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o