[aotinductor] support at::convolution for AOTInductor by chenyang78 · Pull Request #114961 · pytorch/pytorch · GitHub

Conversation

chenyang78 (Contributor) commented Dec 1, 2023

This PR adds support for at::convolution to AOTInductor.

[ghstack-poisoned]
pytorch-bot (bot) commented Dec 1, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/114961

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 938b8c6 with merge base af5a3bd:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

});
}

AOTI_TORCH_EXPORT AOTITorchError aoti_torch_convolution(
Contributor

Would it make sense to codegen these?

chenyang78 (Author)

Yeah, I think we rely on both codegen and the at::convolution extern call, depending on the profiling results:

https://github.com/pytorch/pytorch/blob/main/torch/_inductor/kernel/conv.py#L411-L413

To support the extern-call case, we need to include this in the C shim.
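For context on what "include this in the C shim" involves: a shim entry is a plain C-ABI function that unpacks opaque tensor handles, forwards to the ATen call, and converts any C++ exception into an AOTITorchError status code (the `});` / `}` lines in the diff context above are the tail of exactly such a wrapper). Below is a minimal sketch of the idea. The helper names (AtenTensorHandle, tensor_handle_to_tensor_pointer, tensor_pointer_to_tensor_handle, the CONVERT macro) follow aoti_torch conventions but are assumptions here, illustrative only, not the PR's verbatim code:

```cpp
// Sketch of a C-shim entry for at::convolution. The parameter list mirrors
// the pointer/length style visible in the diff hunks in this thread.
AOTI_TORCH_EXPORT AOTITorchError aoti_torch_convolution(
    AtenTensorHandle input,
    AtenTensorHandle weight,
    AtenTensorHandle bias,  // may be null when the conv has no bias
    int64_t* stride_ptr,
    int64_t stride_size,
    int64_t* padding_ptr,
    int64_t padding_size,
    int64_t* dilation_ptr,
    int64_t dilation_size,
    int transposed,
    int64_t* output_padding_ptr,
    int64_t output_padding_size,
    int64_t groups,
    AtenTensorHandle* ret) {
  AOTI_TORCH_CONVERT_EXCEPTION_TO_ERROR_CODE({
    at::Tensor* input_t = tensor_handle_to_tensor_pointer(input);
    at::Tensor* weight_t = tensor_handle_to_tensor_pointer(weight);
    at::Tensor* bias_t = tensor_handle_to_tensor_pointer(bias);
    c10::optional<at::Tensor> bias_opt;
    if (bias_t) {
      bias_opt = *bias_t;
    }
    // Pointer/length pairs stand in for c10::IntArrayRef across the C ABI.
    at::Tensor out = at::convolution(
        *input_t,
        *weight_t,
        bias_opt,
        c10::IntArrayRef(stride_ptr, stride_size),
        c10::IntArrayRef(padding_ptr, padding_size),
        c10::IntArrayRef(dilation_ptr, dilation_size),
        static_cast<bool>(transposed),
        c10::IntArrayRef(output_padding_ptr, output_padding_size),
        groups);
    // Transfer ownership of the result back through the opaque handle.
    *ret = tensor_pointer_to_tensor_handle(new at::Tensor(std::move(out)));
  });
}
```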

Contributor

I think @eellison meant auto-generating the shim functions. Not many interfaces are needed initially; hopefully, with more complete decomposition coverage, MM and Conv will be the only cases we need in the C shim layer.

chenyang78 (Author)

I see. Yeah, we would definitely go in that direction if we end up with lots of shim functions.

chenyang78 added a commit that referenced this pull request Dec 1, 2023
This PR adds support for at::convolution to AOTInductor

ghstack-source-id: 4afa503
Pull Request resolved: #114961
self.writeline(f"AOTI_TORCH_ERROR_CODE_CHECK({shim_fn}({', '.join(args)}));")

def generate_c_shim_extern_kernel_alloc_call(self, extern_kernel, args):
def generate_c_shim_extern_kernel_alloc(self, extern_kernel, args):
Contributor

We could use some refactoring for ir.ExternKernelAlloc and ir.FallbackKernel at some point.

chenyang78 (Author)

> We could use some refactoring for ir.ExternKernelAlloc and ir.FallbackKernel at some point.

Yeah, agreed.
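To make the codegen path concrete: the writeline above prints one checked shim call into the AOT-compiled C++ wrapper. Here is a sketch of the kind of line it could emit for this convolution; every buffer and argument name below is made up for illustration:

```cpp
// Hypothetical wrapper output for one extern convolution call.
// AOTI_TORCH_ERROR_CODE_CHECK turns a non-zero AOTITorchError from the
// shim into a hard failure at the call site.
int64_t stride[2] = {1, 1};           // illustrative values
int64_t padding[2] = {0, 0};
int64_t dilation[2] = {1, 1};
int64_t output_padding[2] = {0, 0};
AtenTensorHandle buf0;                // allocated by the shim call below
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_convolution(
    arg0_1,            // input handle
    arg1_1,            // weight handle
    arg2_1,            // bias handle (may be null)
    stride, 2,
    padding, 2,
    dilation, 2,
    0,                 // transposed = false
    output_padding, 2,
    1,                 // groups
    &buf0));
```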

int64_t padding_size,
int64_t* dilation_ptr,
int64_t dilation_size,
bool transposed,
Contributor

Use int instead.

chenyang78 added a commit that referenced this pull request Dec 2, 2023
This PR adds support for at::convolution to AOTInductor

ghstack-source-id: b8c519f
Pull Request resolved: #114961
chenyang78 (Author)

@pytorchbot merge

pytorch-bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) on Dec 3, 2023
pytorchmergebot (Collaborator)

Merge failed

Reason: This PR needs a `release notes:` label.
If your changes are user-facing and intended to be part of the release notes, please use a label starting with `release notes:`.

If not, please add the `topic: not user facing` label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


chenyang78 (Author)

@pytorchbot merge

pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


Comment on lines +240 to +247
int64_t* stride_ptr,
int64_t stride_size,
int64_t* padding_ptr,
int64_t padding_size,
int64_t* dilation_ptr,
int64_t dilation_size,
int transposed,
int64_t* output_padding_ptr,
Contributor

I think all these pointers need to be decorated with const. Piggybacking the fix onto #113577.
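With const applied, the hunk quoted above would read roughly as follows (a sketch of the suggested change, not the diff that actually landed):

```cpp
// const-qualified array parameters: the shim only reads these buffers,
// so the signature should advertise that.
const int64_t* stride_ptr,
int64_t stride_size,
const int64_t* padding_ptr,
int64_t padding_size,
const int64_t* dilation_ptr,
int64_t dilation_size,
int transposed,
const int64_t* output_padding_ptr,
```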

facebook-github-bot deleted the gh/chenyang78/7/head branch on December 6, 2023 at 15:28
dmenig pushed a commit to dmenig/pytorch that referenced this pull request Dec 21, 2023
This PR adds support for at::convolution to AOTInductor

Pull Request resolved: pytorch#114961
Approved by: https://github.com/desertfire