Type hint complete Albert model file. by karthikrangasai · Pull Request #16682 · huggingface/transformers

Conversation

@karthikrangasai
Contributor

What does this PR do?

Adds type hints to the Albert model file for the PyTorch model.

Part of #16059
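
A rough sketch of the kind of annotation this adds; the parameter names below are illustrative, not the exact diff:

from typing import Optional

import torch


# Before: an untyped forward signature.
# def forward(self, input_ids=None, attention_mask=None, token_type_ids=None):
#     ...

# After: inputs annotated as optional tensors.
def forward(
    self,
    input_ids: Optional[torch.Tensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    token_type_ids: Optional[torch.Tensor] = None,
):
    ...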

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Rocketknight1

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Apr 9, 2022

The documentation is not available anymore as the PR was closed or merged.

Contributor

@kumar-abhishek kumar-abhishek left a comment


Looks good to me.

@Rocketknight1
Member

Hi, I'm sorry for the delay with this one! We really appreciate it, and I'm trying to get a chance to look through it all ASAP!

Comment on lines 824 to 839
AlbertForPreTrainingOutput,
Tuple[torch.Tensor, Optional[torch.Tensor]],
Tuple[torch.Tensor, Optional[torch.Tensor], Tuple[torch.Tensor, ...]],
Tuple[torch.Tensor, Optional[torch.Tensor], Tuple[Tuple[torch.Tensor, ...], ...]],
Tuple[torch.Tensor, Optional[torch.Tensor], Tuple[torch.Tensor, ...], Tuple[Tuple[torch.Tensor, ...], ...]],
# if total_loss is not None
Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]],
Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor], Tuple[torch.Tensor, ...]],
Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor], Tuple[Tuple[torch.Tensor, ...], ...]],
Tuple[
    torch.Tensor,
    torch.Tensor,
    Optional[torch.Tensor],
    Tuple[torch.Tensor, ...],
    Tuple[Tuple[torch.Tensor, ...], ...],
],
Member


This is wild! I would have absolutely been lazy and just left it as Union[AlbertForPreTrainingOutput, Tuple], but honestly good job.

Member

@Rocketknight1 Rocketknight1 left a comment


This is great, thank you! The return type annotations are incredibly complete - I'm going to check with the team if that's okay; it might become very difficult to maintain that if we ever change the function signature.

It'll pain me to see it go if we do decide to just use Tuple, though, because it was a thing of beauty.

@karthikrangasai
Contributor Author

Thanks a lot, I'm glad you liked the work.
More of these will be coming in a few days for other models, so it would be great to know what level of detail is needed for the type annotations.

@Rocketknight1
Member

@karthikrangasai I spoke to the team and the conclusion was that we should just use Union[AlbertForPreTrainingOutput, Tuple] - as well as being easier for you, it'll be a lot more readable when these type hints are copied into the documentation.

That said, I respect the dedication and precision that went into making it, so I'm a little sad to see it go.
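
Roughly what the agreed-on annotation looks like in place of the exhaustive Union above; a sketch under assumed parameter names, not the exact merged diff:

from typing import Optional, Tuple, Union

import torch

from transformers.models.albert.modeling_albert import AlbertForPreTrainingOutput


# Sketch of the simplified return annotation; the real forward takes more arguments.
def forward(
    self,
    input_ids: Optional[torch.Tensor] = None,
    labels: Optional[torch.Tensor] = None,
    sentence_order_label: Optional[torch.Tensor] = None,
    return_dict: Optional[bool] = None,
) -> Union[AlbertForPreTrainingOutput, Tuple]:
    ...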

@karthikrangasai
Contributor Author

Hello @Rocketknight1 ,

Sure, no worries.
I can make the changes and update the Pull Request in a while.

Although I am not sure what the reason is for having a Tuple return type, a suggestion would be to remove the "Tuple" return type and use the respective output type for all models. This would give keyword-based output values like

model = AlbertModel(config)
output = model(**inputs)

output.last_hidden_state
output.pooler_output

and this might make for a cleaner API.
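
A minimal sketch of the two calling conventions at play here, assuming the standard return_dict switch in transformers and the albert-base-v2 checkpoint:

from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello world", return_tensors="pt")

# Default: a ModelOutput with keyword access.
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
print(outputs.pooler_output.shape)

# With return_dict=False the same call returns a plain tuple,
# which is why Tuple still appears in the return annotation.
sequence_output, pooled_output = model(**inputs, return_dict=False)[:2]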

Member

@Rocketknight1 Rocketknight1 left a comment


Overall this looks great, thank you! And sorry for the slow reply. I made one change, but other than that we should be good to go!

@Rocketknight1
Member

Seeing some code quality issues - I'm guessing it's because we changed the versions of our code formatting tools, and you might need to rebase. Let me check!

@Rocketknight1
Member

It seems like you'll need to pull commits from our repo to the main branch of your repo, then rebase your branch, then force push to update. After that, the files should be updated and the error should go away!

@karthikrangasai
Contributor Author

Hello @Rocketknight1, I have updated the PR with the main branch.
All tests are passing.

@Rocketknight1
Member

Great job. Thank you for this PR, it's much appreciated!

@Rocketknight1 Rocketknight1 merged commit 9c5ae87 into huggingface:main May 4, 2022
elusenji pushed a commit to elusenji/transformers that referenced this pull request Jun 12, 2022
* Type hint complete Albert model file.

* Update typing.

* Update src/transformers/models/albert/modeling_albert.py

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>