
Documentation on torch.library.register_autograd and torch.library.register_kernel #141618

@Hprairie

📚 The doc issue

Currently, the interaction between torch.library.register_autograd and torch.library.register_kernel is not well documented. For example, suppose I want to define a forward and backward pass for a C++ CPU extension, and also a forward and backward pass for a CUDA extension, all using torch.library.register_autograd and torch.library.register_kernel. Specifically, if I want to write an add function with different forward/backward implementations per device, it is not clear how I should do that with custom ops all registered under the same name (e.g., "mylib::add").
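
For illustration, here is a minimal sketch of the pattern I would expect the docs to spell out, assuming PyTorch >= 2.4 (where the Python custom-op API is available); the trivial Python bodies stand in for the actual C++/CUDA extension kernels:

```python
import torch
from torch import Tensor

# Define the op once; the decorated function is registered as the CPU kernel.
@torch.library.custom_op("mylib::add", mutates_args=(), device_types="cpu")
def add(a: Tensor, b: Tensor) -> Tensor:
    return torch.add(a, b)  # stand-in for the C++ CPU extension

# Register a second kernel under the same op name, dispatched on CUDA inputs.
@torch.library.register_kernel("mylib::add", "cuda")
def _(a: Tensor, b: Tensor) -> Tensor:
    return torch.add(a, b)  # stand-in for the CUDA extension

# The autograd formula is attached once per op, not once per device.
def backward(ctx, grad: Tensor):
    return grad, grad  # gradient of add w.r.t. both inputs is the identity

torch.library.register_autograd("mylib::add", backward)

# Usage: dispatch picks the kernel matching the input device.
x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)
torch.ops.mylib.add(x, y).sum().backward()
```

What I could not figure out from the docs is the device-specific part of the backward pass, since register_autograd does not take a device_types argument.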

Suggest a potential alternative/fix

Better documentation on the usage of torch.library.custom_op, torch.library.register_autograd, and torch.library.register_kernel would be amazing and would clear up the confusion. Thanks!
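
If it helps, one pattern that would be worth documenting is making the backward pass device-specific by giving it its own custom op with per-device kernels, since the autograd formula itself is registered once per op. A sketch, building on the mylib::add registration above and replacing its backward registration (the "mylib::add_backward" name is hypothetical):

```python
import torch
from torch import Tensor

# Give the backward computation its own custom op so each device can
# supply its own kernel; "mylib::add_backward" is a hypothetical name.
@torch.library.custom_op("mylib::add_backward", mutates_args=(), device_types="cpu")
def add_backward(grad: Tensor) -> Tensor:
    return grad.clone()  # custom ops may not return an input alias, hence clone()

@torch.library.register_kernel("mylib::add_backward", "cuda")
def _(grad: Tensor) -> Tensor:
    return grad.clone()  # stand-in for the CUDA backward kernel

# The single autograd formula routes through the device-dispatched backward op.
def backward(ctx, grad: Tensor):
    g = torch.ops.mylib.add_backward(grad)
    return g, g

torch.library.register_autograd("mylib::add", backward)
```

Confirming whether this is the intended pattern, and documenting it, is exactly what I am asking for.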

cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh @yf225


Labels: module: custom-operators, module: docs, module: library, triaged
