Enable bf16 output in TBE CPU kernel for other input types #1851
Conversation
Summary: Enable bf16 output support in the TBE (table-batched embedding) CPU kernel when the input weight type is int8, fp8, fp16, or fp32.

Reviewed By: sryap

Differential Revision: D47028021
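In practice, bf16 output means the pooling kernel converts its fp32 accumulator to bf16 when writing results back, whatever the stored weight type is. The sketch below is illustrative only and is not FBGEMM's implementation; `float_to_bf16_bits` and `pool_rows_to_bf16` are hypothetical names, and the conversion is simple truncation for brevity:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Truncate an fp32 value to bf16 bits (round-to-nearest-even omitted for brevity).
inline uint16_t float_to_bf16_bits(float x) {
  uint32_t bits;
  std::memcpy(&bits, &x, sizeof(bits));
  return static_cast<uint16_t>(bits >> 16);
}

// Pool a bag of embedding rows (already dequantized to fp32) and write the
// pooled result as bf16. A real TBE kernel fuses dequantization of the
// int8/fp8/fp16/fp32 weights with this accumulation; here the rows are plain fp32.
void pool_rows_to_bf16(const std::vector<const float*>& rows,
                       int64_t dim,
                       uint16_t* out_bf16) {
  std::vector<float> acc(dim, 0.0f);  // accumulate in fp32
  for (const float* row : rows) {
    for (int64_t d = 0; d < dim; ++d) {
      acc[d] += row[d];
    }
  }
  for (int64_t d = 0; d < dim; ++d) {  // single conversion step on output
    out_bf16[d] = float_to_bf16_bits(acc[d]);
  }
}
```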
This pull request was exported from Phabricator. Differential Revision: D47028021
This pull request has been merged in ed4fe6e.
Downstream, this was picked up in PyTorch by updating the fbgemm submodule and adding a regression test (which can probably be limited to CPU, since the reproducer only triggers when num_threads is 1). Call sites of `fbgemm::GenerateEmbeddingSpMDM` were also updated to pass `isbf16` twice, to match the API changes introduced in pytorch/FBGEMM#1851. Fixes #111189 and #111710. Pull Request resolved: #111672. Approved by: https://github.com/Skylion007
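"Pass `isbf16` twice" refers to the kernel generator now taking separate flags for bf16 input weights and bf16 output, with PyTorch call sites supplying their existing `isbf16` value for both. The sketch below is a hypothetical illustration of that calling convention, not the actual `GenerateEmbeddingSpMDM` signature; the struct and function names are made up for clarity:

```cpp
// Hypothetical illustration only -- not the real fbgemm::GenerateEmbeddingSpMDM
// signature. It shows why the bf16 flag now appears twice at a call site:
// one flag describes the stored weight type, the other the requested output type.
struct SpMDMBf16Flags {
  bool is_bf16_in;   // weights are stored as bf16 (hypothetical field)
  bool is_bf16_out;  // pooled output should be written as bf16 (hypothetical field)
};

// A call site that tracks a single `isbf16` value would pass it for both roles:
//   SpMDMBf16Flags flags = makeBf16Flags(/*is_bf16_in=*/isbf16, /*is_bf16_out=*/isbf16);
inline SpMDMBf16Flags makeBf16Flags(bool is_bf16_in, bool is_bf16_out) {
  return SpMDMBf16Flags{is_bf16_in, is_bf16_out};
}
```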