
Implement torch.pow for float16 and bfloat16 on CPU #50789

@kurtamohler

Description

🚀 Feature

Add support for torch.pow with float16 and bfloat16 on CPU

Motivation

Currently, these dtypes are not supported on CPU:

>>> torch.rand(10, dtype=torch.float16).pow(1.5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: "pow" not implemented for 'Half'
>>> torch.rand(10, dtype=torch.bfloat16).pow(1.5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: "pow" not implemented for 'BFloat16'

float16 and bfloat16 are, however, already supported on CUDA:

>>> torch.rand(10, dtype=torch.float16, device='cuda').pow(1.5)
tensor([0.4597, 0.6592, 0.6777, 0.0105, 0.7349, 0.0492, 0.5186, 0.1809, 0.4202,
        0.3423], device='cuda:0', dtype=torch.float16)
>>> torch.rand(10, dtype=torch.bfloat16, device='cuda').pow(1.5)
tensor([5.7861e-02, 1.5234e-01, 8.7500e-01, 6.4373e-05, 7.0703e-01, 2.5977e-01,
        3.6133e-01, 6.7578e-01, 1.4648e-01, 8.4839e-03], device='cuda:0',
       dtype=torch.bfloat16)
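Until a native CPU kernel exists, a possible interim workaround (a minimal sketch, not something proposed in this issue; the helper name is hypothetical) is to upcast to float32, apply pow there, and cast the result back. This costs an extra copy, and the result is rounded back to the reduced-precision dtype, so it may differ slightly from what a native kernel would produce:

import torch

def pow_reduced_precision_cpu(t, exponent):
    # Hypothetical workaround helper: compute pow in float32, then round
    # the result back to the tensor's original reduced-precision dtype.
    return t.float().pow(exponent).to(t.dtype)

x = torch.rand(10, dtype=torch.float16)
y = pow_reduced_precision_cpu(x, 1.5)  # result stays torch.float16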

Metadata

    Labels

    function request (a request for a new function or the addition of new arguments/modes to an existing function)
    module: half (related to float16 half-precision floats)
    triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
