🐛 Bug
There's an error in the docstring of the SWALR class in torch.optim.swa_utils.
The last_epoch argument is typed as (int), but the default shown in the docstring is 'cos'; the actual default in the code is -1, so the docstring should read (default: -1).
If this is indeed an error, I'd like to contribute a fix.
To Reproduce
Steps to reproduce the behavior:
from torch.optim.swa_utils import SWALR
help(SWALR)
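As a sanity check, the default baked into the code can be compared against the rendered docstring using only the standard inspect module (a minimal sketch):

import inspect
from torch.optim.swa_utils import SWALR
# The signature reports last_epoch=-1, while the docstring claims 'cos'
print(inspect.signature(SWALR.__init__))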
Expected behavior
class SWALR(torch.optim.lr_scheduler._LRScheduler)
| SWALR(optimizer, swa_lr, anneal_epochs=10, anneal_strategy='cos', last_epoch=-1)
|
| Anneals the learning rate in each parameter group to a fixed value.
|
| This learning rate scheduler is meant to be used with Stochastic Weight
| Averaging (SWA) method (see `torch.optim.swa_utils.AveragedModel`).
|
| Args:
| optimizer (torch.optim.Optimizer): wrapped optimizer
| swa_lr (float or list): the learning rate value for all param groups
| together or separately for each group.
| anneal_epochs (int): number of epochs in the annealing phase
| (default: 10)
| anneal_strategy (str): "cos" or "linear"; specifies the annealing
| strategy: "cos" for cosine annealing, "linear" for linear annealing
| (default: "cos")
| last_epoch (int): the index of the last epoch (default: -1)
|
| The :class:`SWALR` scheduler can be used together with other
| schedulers to switch to a constant learning rate late in training,
| as in the example below.
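For context, the usage the last paragraph refers to looks roughly like the following sketch, built on the public torch.optim.swa_utils API. The model, epoch counts, learning rates, and swa_start are placeholder values; the training step is elided:

import torch
from torch.optim.swa_utils import AveragedModel, SWALR
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(10, 2)                        # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = CosineAnnealingLR(optimizer, T_max=100)   # regular schedule early on
swa_model = AveragedModel(model)
swa_scheduler = SWALR(optimizer, swa_lr=0.05)         # anneal to a constant SWA LR
swa_start = 75                                        # epoch at which to switch

for epoch in range(100):
    # ... run one training epoch here ...
    if epoch >= swa_start:
        swa_model.update_parameters(model)  # accumulate the weight average
        swa_scheduler.step()                # anneal toward swa_lr, then hold it
    else:
        scheduler.step()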
Environment
- PyTorch Version: 1.9.0+cu102
- OS: Ubuntu 20.04.1 LTS (x86_64)
- How you installed PyTorch: conda
- Python version: 3.8.3 (default, Jul 2 2020, 16:21:59) [GCC 7.3.0] (64-bit runtime)