Fix no_trainer examples to properly calculate the number of samples #17046
Fix number of samples for `no_trainer` scripts

What does this add?
This PR fixes all of the `no_trainer` scripts so they use the right number of training steps after the length of the dataloader is changed by `accelerator.prepare`.

Why is it needed?
Currently, in a multi-process setup the progress bar still shows the original number of steps, and the break condition that ends training is still set to the original amount, even though the length of each dataloader changed after `accelerator.prepare`.
Simplified example:
If the dataloader starts with 128 batches and 2 GPUs are used, each dataloader ends up with 64 batches. As a result the progress bar should use 64, and the break condition also needs to know there are only 64 steps; both currently still use 128. A sketch of this effect follows.
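To make the arithmetic concrete, here is a hedged sketch (the toy dataset and variable names are stand-ins for the real script code, not the exact diff) of how `accelerator.prepare` changes the per-process dataloader length:

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset

accelerator = Accelerator()

# Toy stand-in for a real training dataloader: 128 batches of size 1.
train_dataloader = DataLoader(TensorDataset(torch.randn(128, 4)), batch_size=1)
print(len(train_dataloader))  # 128, regardless of how many processes run

train_dataloader = accelerator.prepare(train_dataloader)

# prepare() shards the batches across processes, so with 2 GPUs each
# process now sees only 64 batches; the progress bar total and the
# break condition should both use this new length.
print(len(train_dataloader))  # 64 per process when 2 processes are launched
```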
What parts of the API does this impact?
User-facing:
All scripts now recalculate `max_train_steps` after `accelerator.prepare`.

Basic Usage Example(s):
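A minimal sketch of the recalculation pattern, assuming toy values in place of the scripts' argparse arguments (this mirrors the shape of the fix, not the exact diff):

```python
import math

import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset
from tqdm.auto import tqdm

accelerator = Accelerator()
train_dataloader = DataLoader(TensorDataset(torch.randn(128, 4)), batch_size=1)

# Stand-ins for the values the scripts read from argparse `args`.
gradient_accumulation_steps = 1
num_train_epochs = 3
max_train_steps = None  # None means "derive it from the epoch count"

train_dataloader = accelerator.prepare(train_dataloader)

# Recalculate the totals, since len(train_dataloader) may have changed
# inside prepare() (e.g. halved when 2 processes are used).
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / gradient_accumulation_steps)
if max_train_steps is None:
    max_train_steps = num_train_epochs * num_update_steps_per_epoch
num_train_epochs = math.ceil(max_train_steps / num_update_steps_per_epoch)

# The progress bar is built from the recalculated step count.
progress_bar = tqdm(range(max_train_steps), disable=not accelerator.is_local_main_process)
```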
When would I use it, and when wouldn't I?
While the recalculation always runs, it only changes anything when training with more than one process (for example, multiple GPUs or multiple nodes).
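For instance, `Accelerator.num_processes` reports how many processes the run was launched with; on a single process the recalculated values simply equal the originals, so running the recalculation unconditionally is harmless:

```python
from accelerate import Accelerator

accelerator = Accelerator()
# 1 for a plain single-GPU/CPU run, >1 under a distributed launch,
# in which case len(train_dataloader) shrinks after prepare().
print(accelerator.num_processes)
```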