
Internal uses of torch.load are missing weights_only and raise FutureWarning #130658

Description

@awaelchli

🐛 Describe the bug

The PyTorch code base contains several internal uses of torch.load, but not all of them set weights_only, which in PyTorch 2.4 raises a FutureWarning. Since these calls are internal, the user sees the warning but has no way to act on it. PyTorch should set weights_only in all of its internal uses.
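
A minimal sketch reproducing the warning (assuming PyTorch 2.4, where any torch.load call without an explicit weights_only emits it):

```python
import io
import torch

# Save and reload a plain tensor in memory.
buffer = io.BytesIO()
torch.save(torch.ones(2), buffer)
buffer.seek(0)

# On torch 2.4 this emits:
# FutureWarning: You are using `torch.load` with `weights_only=False` ...
obj = torch.load(buffer)

# Passing the argument explicitly silences the warning.
buffer.seek(0)
obj = torch.load(buffer, weights_only=True)
```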

Some examples (but there are more if you search the code base):

torch_state_dict = torch.load(self.checkpoint_id, map_location="cpu")

obj = torch.load(buffer, map_location=device)
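
For illustration, a hedged sketch of what pinning the argument at these call sites could look like; whether weights_only=True is appropriate depends on what each site actually loads, since True restricts unpickling to tensors and primitive containers by default:

```python
# Assuming the checkpoint is a plain state dict of tensors:
torch_state_dict = torch.load(
    self.checkpoint_id, map_location="cpu", weights_only=True
)

# If arbitrary pickled Python objects must round-trip, the current default
# behavior has to be pinned explicitly instead:
obj = torch.load(buffer, map_location=device, weights_only=False)
```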

A recent PR resolved one of these important cases:
#130242

These warnings will show up in PyTorch 2.4. This surfaced in our test suite (PyTorch Lightning), which we have configured to fail whenever a FutureWarning is raised.
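
For context, one common way to enforce this in a pytest suite (illustrative, not necessarily Lightning's exact configuration) is a filterwarnings entry that escalates FutureWarning to an error:

```ini
# pytest.ini
[pytest]
filterwarnings =
    error::FutureWarning
```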

If desired, I can send a PR to fix this.

Versions

torch=2.4.0

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC

Labels

high priority, oncall: distributed, oncall: distributed checkpointing, triage review, triaged
