[doc][hackathon] To add Adagrad Optimizer to the documentation by iramazanli · Pull Request #63254 · pytorch/pytorch · GitHub

Conversation

@iramazanli
Contributor

@iramazanli iramazanli commented Aug 13, 2021

It has been discussed before that adding descriptions of optimization algorithms to the PyTorch core documentation may result in a nice optimization research tutorial. The tracking issue #63236 lists all the necessary algorithms along with links to the originally published papers.

In this PR we add a description of Adagrad to the documentation. For more details, we refer to the paper:
http://jmlr.org/papers/v12/duchi11a.html

[Image: AdaGrad algorithm pseudocode]
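For reference, a rough sketch of the update rule being documented (my own paraphrase of the algorithm from the paper, omitting weight decay and the sparse-gradient path; \gamma is the learning rate, \eta the lr_decay, \epsilon a small stabilizing constant):

\tilde{\gamma}_t = \frac{\gamma}{1 + (t - 1)\,\eta}, \qquad
state\_sum_t = state\_sum_{t-1} + g_t \odot g_t, \qquad
\theta_t = \theta_{t-1} - \tilde{\gamma}_t \, \frac{g_t}{\sqrt{state\_sum_t} + \epsilon}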

cc @vincentqb @iramazanli

@facebook-github-bot
Contributor

facebook-github-bot commented Aug 13, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 464a9fa (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (1/2)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .circleci/scripts/binary_macos_build.sh
Auto-merging .circleci/scripts/binary_macos_build.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
CONFLICT (add/add): Merge conflict in .circleci/cimodel/data/pytorch_build_definitions.py
Auto-merging .circleci/cimodel/data/pytorch_build_definitions.py
CONFLICT (add/add): Merge conflict in .circleci/cimodel/data/pytorch_build_data.py
Auto-merging .circleci/cimodel/data/pytorch_build_data.py
CONFLICT (add/add): Merge conflict in .bazelrc
Auto-merging .bazelrc
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (2/2)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .circleci/scripts/binary_macos_build.sh
Auto-merging .circleci/scripts/binary_macos_build.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
CONFLICT (add/add): Merge conflict in .circleci/cimodel/data/pytorch_build_definitions.py
Auto-merging .circleci/cimodel/data/pytorch_build_definitions.py
CONFLICT (add/add): Merge conflict in .circleci/cimodel/data/pytorch_build_data.py
Auto-merging .circleci/cimodel/data/pytorch_build_data.py
CONFLICT (add/add): Merge conflict in .bazelrc
Auto-merging .bazelrc
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1


This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@iramazanli iramazanli changed the title To add Adagrad Optimizer to the documentation [doc][hackathon] To add Adagrad Optimizer to the documentation Aug 15, 2021
@iramazanli iramazanli force-pushed the adagrad_algorithm_doc branch 4 times, most recently from 348a347 to 2d807b3 Compare August 25, 2021 13:44
@codecov

codecov bot commented Aug 25, 2021

Codecov Report

Merging #63254 (4564618) into master (49b782b) will decrease coverage by 0.40%.
The diff coverage is n/a.

❗ Current head 4564618 differs from pull request most recent head 464a9fa. Consider uploading reports for the commit 464a9fa to get more accurate results.

@@            Coverage Diff             @@
##           master   #63254      +/-   ##
==========================================
- Coverage   66.73%   66.33%   -0.41%     
==========================================
  Files         695      703       +8     
  Lines       90833    92223    +1390     
==========================================
+ Hits        60618    61175     +557     
- Misses      30215    31048     +833     

@iramazanli iramazanli force-pushed the adagrad_algorithm_doc branch 8 times, most recently from 46d58f3 to a3b12d1 Compare August 27, 2021 20:51
@iramazanli iramazanli requested a review from albanD August 30, 2021 14:10
@albanD
Collaborator

albanD commented Aug 30, 2021

I don't see initial_accumulator_value and lr_decay?

Also, should we mention what the behavior is when the gradient is sparse?

@iramazanli
Contributor Author

I don't see initial_accumulator_value and lr_decay?

Also, should we mention what the behavior is when the gradient is sparse?

  • The reason initial_accumulator_value is skipped is that it is also skipped in the Args section here:

    which I tried to keep consistent with the pseudocode.

  • The reason I didn't add lr_decay is a little bit conceptual. Any optimization algorithm can be combined with any kind of learning rate scheduler, and for each type of scheduler the pseudocode documented here would look different. In particular, I chose \gamma instead of \gamma_t in all these algorithms to keep the focus on the optimization algorithm rather than on the learning rate / scheduler.

Let me know if you have feedback regarding these two points.

Regarding the sparse behavior, I would be happy to follow your suggestion.
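As a minimal usage sketch of the sparse case (my own example, assuming a sparse embedding as the source of the sparse gradient):

import torch
import torch.nn as nn

# Adagrad is one of the optimizers that accepts sparse gradients.
emb = nn.Embedding(1000, 16, sparse=True)
opt = torch.optim.Adagrad(emb.parameters(), lr=0.1)

idx = torch.tensor([1, 5, 7])
loss = emb(idx).sum()
loss.backward()   # emb.weight.grad is a sparse tensor
opt.step()        # only the rows touched by idx contribute to the accumulator update
opt.zero_grad()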

@albanD
Collaborator

albanD commented Aug 30, 2021

The reason initial_accumulator_value is skipped is that it is also skipped in the Args section here:

Is that done on purpose? It looks like the args doc needs to be updated.

In any case, in the rendered version, the argument is present in the signature, so I think it should be documented properly.

The reason I didn't add lr_decay is a little bit conceptual.

I agree with your point that we might want to discourage people from using this argument.
But I feel like the goal of this pseudocode is to explain what the implementation is doing, so it should reflect all the arguments.
If we want to ask people not to use this (or even deprecate it), that can be mentioned in the args doc below.
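For what it's worth, a rough paraphrase in code of where the two arguments enter a single dense step (illustrative names and a simplified signature, not the actual implementation):

import torch

def adagrad_step(param, grad, state_sum, step, lr=1e-2, lr_decay=0.0, eps=1e-10):
    # lr_decay shrinks the effective step size as training progresses
    clr = lr / (1 + (step - 1) * lr_decay)
    # state_sum starts at initial_accumulator_value and accumulates squared gradients
    state_sum += grad * grad
    param -= clr * grad / (state_sum.sqrt() + eps)
    return param, state_sum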

@iramazanli iramazanli force-pushed the adagrad_algorithm_doc branch 2 times, most recently from f3e7b0e to 4564618 Compare August 30, 2021 23:20
@iramazanli
Contributor Author

The reason initial_accumulator_value is skipped is that it is also skipped in the Args section here:

Is that done on purpose? It looks like the args doc needs to be updated.

In any case, in the rendered version, the argument is present in the signature, so I think it should be documented properly.

The reason I didn't add lr_decay is a little bit conceptual.

I agree with your point that we might want to discourage people from using this argument.
But I feel like the goal of this pseudocode is to explain what the implementation is doing, so it should reflect all the arguments.
If we want to ask people not to use this (or even deprecate it), that can be mentioned in the args doc below.

It's done! Thanks for the comments.

Collaborator

Nit: you don't want to assign to gamma here, right? The new value should not be used by the next step, right?

Contributor Author

Update is done.

Collaborator

Isn't this g_t^2?
Oh, I guess that works with your definition of G.
But why not just use g_t^2 and remove the diag on the line below?
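For reference, with the definition G_t = \sum_{\tau \le t} g_\tau g_\tau^\top used in the draft, the two forms coincide elementwise, which is why accumulating squared gradients is enough:

\operatorname{diag}(G_t)_i \;=\; \sum_{\tau \le t} g_{\tau, i}^2
\qquad\Longrightarrow\qquad
state\_sum_t \;=\; state\_sum_{t-1} + g_t \odot g_t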

Contributor Author

Updated to the shape you recommended.

Collaborator

Nit: missing epsilon.

Contributor Author

Added it.

@iramazanli iramazanli force-pushed the adagrad_algorithm_doc branch from 4564618 to 9cc53de Compare September 9, 2021 18:39
@iramazanli iramazanli force-pushed the adagrad_algorithm_doc branch from 9cc53de to 464a9fa Compare September 9, 2021 19:20
@facebook-github-bot
Contributor

@iramazanli has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@iramazanli merged this pull request in d4b09db.
