📚 The doc issue
Currently, the interaction between torch.library.register_autograd and torch.library.register_kernel is not well documented. For example, it is unclear how to define a forward and backward pass for a C++ CPU extension, and then a forward and backward pass for a CUDA GPU extension, all using torch.library.register_autograd and torch.library.register_kernel. Specifically, if I want to make an add function with different forward/backward implementations for different devices, it is not clear how to do that with custom ops all under the same name (e.g., "mylib::add").
Suggest a potential alternative/fix
Better documentation on the usage of torch.library.custom_op, torch.library.register_autograd, torch.library.register_kernel would be amazing and would clear up confusion. Thanks!
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh @yf225