Fix "expand: SymIntArrayRef expected to contain only concrete integers" in AOTInductor by ezyang · Pull Request #135933 · pytorch/pytorch · GitHub

Conversation

@ezyang
Contributor

@ezyang ezyang commented Sep 13, 2024

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Sep 13, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135933

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b33d98d (merge base could not be retrieved; please contact dev infra):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ezyang added a commit that referenced this pull request Sep 13, 2024
…s" in AOTInductor

Internal xref:
https://fb.workplace.com/groups/1075192433118967/permalink/1501860707118802/

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 46fee3b
Pull Request resolved: #135933
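
For orientation (not part of the PR itself), here is a minimal, hedged sketch of the AOTInductor export path on which errors like "expand: SymIntArrayRef expected to contain only concrete integers" were reported; the module, dimension name, shapes, and `dynamic_shapes` spec below are illustrative assumptions, not the original repro.

```python
import torch
from torch.export import Dim


class M(torch.nn.Module):
    def forward(self, x):
        # expand() is fed a size derived from a symbolic (dynamic) dimension;
        # inside AOTInductor this is the kind of call site where the
        # "SymIntArrayRef expected to contain only concrete integers" error
        # was observed before the fix.
        return x.expand(x.shape[0], 4)


example = (torch.randn(3, 1),)
# Compile with a dynamic batch dimension so sizes stay symbolic during export.
so_path = torch._export.aot_compile(
    M(), example, dynamic_shapes={"x": {0: Dim("batch", min=2, max=64)}}
)
runner = torch._export.aot_load(so_path, "cpu")
print(runner(torch.randn(5, 1)).shape)  # expected: torch.Size([5, 4])
```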
@ezyang
Contributor Author

ezyang commented Sep 13, 2024

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Sep 13, 2024
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

lucylq added a commit to pytorch/executorch that referenced this pull request Sep 17, 2024
Summary:
- Following https://pytorch.org/executorch/stable/kernel-library-custom-aten-kernel.html, use WRAP_TO_ATEN to register the preprocess op in PyTorch.
- Create a separate `op_tile_crop_aot.py` that registers the C++ AOT library into Python (see the sketch below). Inside export_preprocess, use `op_tile_crop_aot.py` instead of `preprocess_custom_ops.py`, which is the pure Python lib. Otherwise, we end up loading the C++ library when the Python one already exists.
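
As a rough illustration of the second bullet, here is a hedged sketch of what an `op_tile_crop_aot.py`-style module could look like; the library path and op namespace below are assumptions, not the actual ExecuTorch file.

```python
# Hypothetical sketch: expose the C++ AOT kernel library (registered with
# WRAP_TO_ATEN on the C++ side) to Python so torch.export / AOTI can see the op.
import torch

# Assumption: path to the built shared library containing the ATen registration.
_LIB_PATH = "path/to/libop_tile_crop_aot_lib.so"

torch.ops.load_library(_LIB_PATH)

# After loading, the registered op is reachable through torch.ops, e.g.
# torch.ops.preprocess.tile_crop(...) if "preprocess::tile_crop" is the
# registered schema name (assumed here for illustration).
```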

Note: include these PyTorch changes for AOTI export:
pytorch/pytorch#135933

Pull Request resolved: #5350

Test Plan:
```
>>> import torch
>>> from executorch.extension.llm.custom_ops import sdpa_with_kv_cache  # noqa # usort: skip
>>> x = torch._export.aot_load("/home/lfq/local/executorch/aoti_preprocess.so", "cpu")
>>> img = torch.ones([3, 600, 800])
>>> canvas_size = torch.tensor([448, 448])
>>> target_size = torch.tensor([336, 448])
>>> res = x(img, target_size, canvas_size)
>>> res[0].shape
torch.Size([4, 3, 224, 224])
>>> res[1]
tensor([2, 2])
>>>
```
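
For completeness, a hedged sketch of the export side that could produce an `aoti_preprocess.so` like the one loaded above; the stand-in module, example inputs, and output-path option are assumptions, not the real `export_preprocess` code.

```python
import torch

# Assumption: the AOT registration module (see the sketch in the summary above)
# has been imported first so the C++-registered custom op is visible to export.


class PreprocessStandIn(torch.nn.Module):
    """Stand-in for the real preprocess module; only illustrates the export call."""

    def forward(self, img, target_size, canvas_size):
        # The real model would resize/pad and call the custom tile-crop op;
        # these are placeholder computations with matching input signatures.
        tiles = img.unsqueeze(0)
        aspect_ratio = canvas_size // target_size
        return tiles, aspect_ratio


example_inputs = (
    torch.ones([3, 600, 800]),
    torch.tensor([336, 448]),
    torch.tensor([448, 448]),
)

# "aot_inductor.output_path" names the emitted .so; treat the exact option key
# as an assumption if your PyTorch version differs.
so_path = torch._export.aot_compile(
    PreprocessStandIn(),
    example_inputs,
    options={"aot_inductor.output_path": "aoti_preprocess.so"},
)
runner = torch._export.aot_load(so_path, "cpu")
```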

Reviewed By: larryliu0820

Differential Revision: D62651605

Pulled By: lucylq

fbshipit-source-id: bdf5b46033ebbd73d10307ab58219743a73fd6fd
Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Sep 20, 2024
@github-actions github-actions bot deleted the gh/ezyang/2930/head branch October 14, 2024 06:24