Need a test to make sure the autograd engine inserts proper leaf stream syncs for stolen gradients #59846

Description

@mcarilli

If a leaf tensor's .grad is None before backward, its AccumulateGrad function may steal a reference to the incoming gradient from whatever backward op produced it (instead of accumulating it into an existing .grad). @mruberry, @ngimel, and I are semi-confident that AccumulateGrad functions and the autograd engine insert the right leaf stream syncs (such that ops following backward() can safely use stolen .grads immediately), but I should double-check the code and PR a dedicated test. Filing this so I don't forget.
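A minimal sketch of the kind of test this issue asks for (not the actual test; it assumes a CUDA device is available, and the tensor size and ops are placeholders). The forward pass runs on a side stream, so the backward op that produces the incoming gradient runs on that stream too, and AccumulateGrad steals it because leaf.grad starts out as None. If the engine inserts the right leaf stream syncs, consuming .grad on the default stream immediately after backward() must observe the final values:

```python
import torch

def check_stolen_grad_sync():
    if not torch.cuda.is_available():
        return

    # .grad starts as None, so AccumulateGrad may steal the incoming gradient
    # instead of accumulating it into an existing .grad buffer.
    leaf = torch.ones(1 << 22, device="cuda", requires_grad=True)
    assert leaf.grad is None

    # Run the forward pass on a side stream; the backward op that produces
    # the incoming gradient will run on the same stream.
    side_stream = torch.cuda.Stream()
    with torch.cuda.stream(side_stream):
        loss = (leaf * 2).sum()

    loss.backward()

    # Use the stolen .grad right away on the default stream, with no manual
    # stream synchronization. A missing leaf stream sync would show up here
    # as stale or garbage values.
    torch.testing.assert_close(leaf.grad, torch.full_like(leaf, 2.0))

check_stolen_grad_sync()
```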

cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @lezcano @ngimel

Labels

module: autograd (Related to torch.autograd, and the autograd engine in general)
module: cuda (Related to torch.cuda, and CUDA support in general)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
