
[ONNX] decomp does not preserve custom CompositeImplicitAutograd ops #150367

Description

@borisfom

🚀 The feature, motivation and pitch

I was recently forced to implement some composite custom ops (registered as CompositeImplicitAutograd) on top of another, more general custom op, for ONNX/TRT export purposes (the original op was used for both forward and backward and therefore had a sequence output type that neither onnxruntime-extensions nor TRT can handle).
The idea was to pass those composite ops to custom_translation_table for export.
However, it now looks like I would need to implement additional machinery based on DecompSkip to make that work, because the composite ops are decomposed during ONNX export before the translation table is consulted.
Can that be handled automatically? If I do specify custom_translation_table, it should be safe to assume the best course of action is to exclude the custom ops listed in the table from decomposition, so that the translation would actually happen.
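For reference, a minimal sketch of the setup, assuming PyTorch 2.6+ and the dynamo-based exporter; mylib, base_op, composite_op, and composite_op_onnx are placeholder names, not the actual ops from my project:

```python
import torch
from onnxscript import opset18 as op

lib = torch.library.Library("mylib", "DEF")

# Stand-in for the more general custom op (opaque to decomposition).
lib.define("base_op(Tensor x) -> Tensor")
lib.impl("base_op", lambda x: x * 2, "CompositeExplicitAutograd")

# Composite op built on top of it. CompositeImplicitAutograd means the
# exporter is free to decompose it back into base_op.
lib.define("composite_op(Tensor x) -> Tensor")
lib.impl(
    "composite_op",
    lambda x: torch.ops.mylib.base_op(x) + 1,
    "CompositeImplicitAutograd",
)

# Hypothetical ONNX translation for the composite op.
def composite_op_onnx(x):
    two = op.Constant(value_float=2.0)
    one = op.Constant(value_float=1.0)
    return op.Add(op.Mul(x, two), one)

class M(torch.nn.Module):
    def forward(self, x):
        return torch.ops.mylib.composite_op(x)

# Reported behavior: composite_op is decomposed during export, so the
# translation table entry below is never used.
onnx_program = torch.onnx.export(
    M(),
    (torch.randn(3),),
    dynamo=True,
    custom_translation_table={
        torch.ops.mylib.composite_op.default: composite_op_onnx
    },
)
```

If the exporter treated the keys of custom_translation_table as a decomposition skip-list, the table entry above would be hit without any extra DecompSkip machinery on the user side.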

@xadupre @justinchuby

Alternatives

No response

Additional context

No response

cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4

Labels
module: onnx (Related to torch.onnx)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
