### 🐛 Describe the bug
The output of `torch.arange` with bfloat16 is not correct. Use the code below to reproduce:

```python
import torch

torch.arange(241, 273, dtype=torch.bfloat16)
```

Output:

```
tensor([241., 242., 243., 244., 245., 246., 247., 248., 249., 250., 251., 252., 253., 254., 255., 256., 256., 256., 258., 260., 260., 260., 262., 264., 264., 264., 266., 268., 268., 268., 270., 272.], dtype=torch.bfloat16)
```

Here, after 255, the results are [256., 256., 256.], which is wrong: in float32 the corresponding values are [256, 257, 258]. Since 257 is not representable in bfloat16 (above 256 the spacing between consecutive bfloat16 values is 2), the correctly rounded output would be [256., 256., 258.].
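A minimal sketch (not part of the original report) that contrasts the direct bfloat16 `arange` with a float32 `arange` rounded to bfloat16; the second tensor shows the expected behavior, where only the non-representable 257 collapses to its nearest bfloat16 neighbor:

```python
import torch

# Buggy path: arange computed directly in bfloat16; rounding error accumulates.
bf16_direct = torch.arange(241, 273, dtype=torch.bfloat16)

# Reference path: compute in float32, then round each element to bfloat16.
# Only values that bfloat16 cannot represent (e.g. 257) change here.
bf16_from_fp32 = torch.arange(241, 273, dtype=torch.float32).to(torch.bfloat16)

print(bf16_direct)
print(bf16_from_fp32)

# 257 itself is not representable in bfloat16: it rounds (ties-to-even) to 256.
print(torch.tensor(257.0).to(torch.bfloat16))  # tensor(256., dtype=torch.bfloat16)
```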
### Versions
torch==2.4.1
### Tasks
cc @svekars @brycebortree @sekyondaMeta @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @albanD