Upgrade to DLPack 1.0. by ysiraichi · Pull Request #145000 · pytorch/pytorch · GitHub

Conversation

@ysiraichi
Collaborator

@ysiraichi ysiraichi commented Jan 16, 2025

Stack from ghstack (oldest at bottom):

This PR makes the necessary changes in order to upgrade PyTorch DLPack
support to version 1.0. In summary, we add support for the following:

  • Support both DLManagedTensor and DLManagedTensorVersioned when
    producing and consuming DLPack capsules
  • New parameter for __dlpack__ method: max_version
  • Version checks (see the sketch at the end of this description):
    • Fall back to the old implementation if max_version is absent or the
      requested version is lower than 1.0
    • Check that the to-be-consumed capsule has a version no higher than 1.X

In order to accommodate these new specifications, this PR adds the
following main changes:

  • torch._C._to_dlpack_versioned Python API (Module.cpp): new Python
    API for creating a versioned DLPack capsule (called by __dlpack__
    method)
  • DLPackTraits<T> class (DLConvertor.h): select the correct
    traits (e.g. capsule name, conversion functions) depending on which
    DLPack tensor class is being used
  • toDLPackImpl<T> function (DLConvertor.cpp): populates the
    common fields of both classes
  • fromDLPackImpl<T> function (DLConvertor.cpp): constructs a tensor
    from a DLPack capsule
  • fillVersion<T> function (DLConvertor.cpp): populates the version
    field for DLManagedTensorVersioned (no-op for DLManagedTensor)
  • tensor_fromDLPackImpl<T> function (tensor_new.cpp): outer function
    for constructing a tensor out of a DLPack capsule that also marks the
    capsule as used
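
For illustration, the producer-side fallback boils down to something like the following sketch (simplified; the actual implementation also handles streams and error checking):

# Simplified sketch of the producer-side version negotiation.
def __dlpack__(self, stream=None, max_version=None):
    if max_version is None or max_version[0] < 1:
        # Consumer did not ask for (or cannot handle) DLPack 1.0:
        # keep producing the old, unversioned DLManagedTensor capsule.
        return torch._C._to_dlpack(self)
    # Consumer supports 1.X: produce a DLManagedTensorVersioned capsule.
    return torch._C._to_dlpack_versioned(self)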

cc @ezyang @gchanan

@pytorch-bot

pytorch-bot bot commented Jan 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/145000

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Pending

As of commit c95743c with merge base 2dfc0e3:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ysiraichi added a commit that referenced this pull request Jan 16, 2025
ghstack-source-id: 3ca1169
Pull Request resolved: #145000
@ysiraichi ysiraichi added module: dlpack release notes: python_frontend python frontend release notes category labels Feb 1, 2025
ysiraichi added a commit that referenced this pull request Feb 1, 2025
ghstack-source-id: e58ba67
Pull Request resolved: #145000
@ysiraichi ysiraichi requested review from albanD and rgommers February 2, 2025 16:12
ysiraichi added a commit that referenced this pull request Feb 2, 2025
ghstack-source-id: 063107b
Pull Request resolved: #145000
Collaborator

@albanD albanD left a comment


Change sounds OK, but what are the expectations in terms of BC when interacting with libraries that haven't upgraded to the latest DLPack yet? Is it OK for all Tensors to be of the new version?

TORCH_API Tensor fromDLPack(DLManagedTensor* src);
TORCH_API Tensor
fromDLPack(DLManagedTensor* src, std::function<void(void*)> deleter);
TORCH_API DLManagedTensorVersioned* toDLPack(const Tensor& src);
Collaborator


Is this API used by C++ libraries? This would be a BC-breaking change for these users right?

Collaborator Author


I think it's being used by PyTorch/XLA. Yes, it's definitely BC-breaking. I was thinking that, since DLPack 1.0 should be the new default, the old version should have the name with a suffix. However, now that you brought this up, not being BC-breaking sounds more important.

In summary, I will change the names so that we are not BC-breaking.

TORCH_API DLManagedTensor* toDLPackUnversioned(const Tensor& src);
TORCH_API Tensor fromDLPack(
DLManagedTensorVersioned* src,
std::optional<std::function<void(void*)>> deleter = std::nullopt);
Collaborator


std::function is optional by default and doesn't require std::optional right?
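
A default-constructed std::function is empty and converts to false, so a plain default argument can encode "no deleter". A minimal sketch (hypothetical function name, not the actual signature):

#include <functional>

// An empty std::function converts to false, so callers may omit the deleter.
void consumeSketch(void* opaque, std::function<void(void*)> deleter = {}) {
  if (deleter) {
    deleter(opaque);  // only runs when a deleter was actually supplied
  }
}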

solve,
)
from torch.utils.dlpack import from_dlpack, to_dlpack
from torch.utils.dlpack import from_dlpack, to_dlpack, to_dlpack_unversioned
Collaborator


While I understand this matches current code, I don't think we want to have this as a new top level API? Being part of torch.utils.dlpack is enough?

Collaborator


That comment sounds right - I don't think it should be needed. For the introduction strategy to a versioned protocol, see the explanation and prototype code (if max_version is None: ....) at https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__dlpack__.html

Collaborator


If it's useful/needed to have a Python function to use here so that can be called from within Tensor.__dlpack__, then making it a private function by prepending an underscore should be fine I think.

METH_NOARGS,
nullptr},
{"_to_dlpack", THPModule_toDLPack, METH_O, nullptr},
{"_to_dlpack_unversioned", THPModule_toDLPackUnversioned, METH_O, nullptr},
Collaborator


Why do we still need this one?

Collaborator


This does look necessary to me. For DLPack to change the ABI once, there's a dance that needs doing to be not-super-disruptive: continue returning the old (0.8) version, unless the consumer indicates it can handle the new (1.X) version by passing in max_version=(1, 0) (or (1, x) in the future).

try:
# Try running __dlpack__ while specifying `max_version` argument.
dlpack = ext_tensor.__dlpack__(**kwargs)
except TypeError:
Collaborator


Is it guaranteed that they will fail with TypeError?

Collaborator


That should be the case - it's the standard error in Python for using a non-existing keyword.
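
A minimal illustration:

class Legacy:
    def __dlpack__(self, stream=None):  # pre-1.0 signature, no max_version
        ...

Legacy().__dlpack__(max_version=(1, 0))
# TypeError: __dlpack__() got an unexpected keyword argument 'max_version'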

Collaborator

@rgommers rgommers left a comment


Thanks for tackling this @ysiraichi! This looks like a good start. A few high-level comments:

  • As noted inline, __dlpack__ gained 3 new keywords, so far this adds only max_version. I suspect it's safer to add all at once, because other libraries are probably going to assume that if max_version is present, the 1.0 support is complete.
  • It would be good to have new Python-level tests in test/test_dlpack.py (a sketch follows this list). That will also make the changes in logic easier to review.
    • There's one testing TODO for a very old numpy version there that may be nice to take along:
      # TODO: add interchange tests once NumPy 1.22 (dlpack support) is required
  • I think this is inactionable right now, but adding for completeness: DLPack gained a new DLPACK_FLAG_BITMASK_READ_ONLY field, which in the future can feed into PyTorch's copy-on-write (COW) feature.
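
For example, a round-trip test along these lines (a sketch assuming the versioned-capsule path works end to end; the test name is illustrative):

import torch

def test_max_version_roundtrip():
    # Request a DLPack 1.0 capsule from the producer, rebuild a tensor
    # from it, and check that the exchange was zero-copy.
    x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
    capsule = x.__dlpack__(max_version=(1, 0))
    y = torch.from_dlpack(capsule)
    torch.testing.assert_close(x, y)
    assert y.data_ptr() == x.data_ptr()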



torch/_tensor.py Outdated
max_version (tuple[int, int] or None): An optional Python tuple with
2 integers, representing the maximum version the caller supports. If
None is passed, then PyTorch will fallback to DLPack 0.X, where versions
Collaborator


I'd rephrase as "If None (default), PyTorch will use DLPack 0.8".


torch/_tensor.py Outdated
__torch_dispatch__ = _C._disabled_torch_dispatch_impl

def __dlpack__(self, stream=None):
def __dlpack__(self, stream=None, max_version=None):
Collaborator


Note that there are also new dl_device and copy keywords (API docs).
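
Per the array API standard, the full signature would look roughly like this (a sketch; the standard makes all of these keyword-only):

def __dlpack__(
    self,
    *,
    stream=None,
    max_version=None,  # tuple[int, int] | None: max DLPack version the caller supports
    dl_device=None,    # (device_type, device_id) | None: requested target device
    copy=None,         # bool | None: force (True), forbid (False), or allow a copy
):
    ...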


@ysiraichi
Collaborator Author

@albanD @rgommers Since you have already reviewed this PR, I was thinking about adding the new __dlpack__ keywords + tests in a new PR stacked on top of this one. There are 2 reasons for that:

  • It would be easier to review the new changes
  • We could merge the whole stack at once when that PR is ready

Let me know if that works for you.

@rgommers
Collaborator

rgommers commented Feb 7, 2025

@ysiraichi a stack with two separate PRs, with the dl_device and copy keywords in the second PR, sounds perfectly fine to me. Merging the changes in one go will be nice, but functionally those two keywords don't interact with max_version, so it'll be a clean split.

@albanD
Collaborator

albanD commented Feb 7, 2025

Separate PR on the stack sounds ok.
Note that you won't be able to land both together if the bottom PR's CI is red, though; they must all pass CI.

ysiraichi added a commit that referenced this pull request Feb 7, 2025
ghstack-source-id: 2b774ce
Pull Request resolved: #145000
@leofang
Contributor

leofang commented Mar 11, 2025

Gentle ping @ysiraichi, any chance we can get this work wrapped up in the near future? Note that we just tagged DLPack v1.1: https://github.com/dmlc/dlpack/releases/tag/v1.1, which added a few more dtype enums.

@ysiraichi
Collaborator Author

Yeah. I'm still working on this. Have 2 more PRs to be opened.
Haven't had the time to get back to them, though. Will do so soon.

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

pytorchmergebot added a commit that referenced this pull request Jun 20, 2025
This reverts commit 6e185c5.

Reverted #145000 on behalf of https://github.com/atalman due to failing internal tests.
@pytorchmergebot
Collaborator

@ysiraichi your PR has been successfully reverted.

@pytorchmergebot pytorchmergebot added Reverted ci-no-td Do not run TD on this PR labels Jun 20, 2025
@ysiraichi
Collaborator Author

@atalman Could you share the internal errors?

@atalman
Contributor

atalman commented Jun 20, 2025

I believe this needs to be imported internally and landed internally, since we are missing some internal changes. The error is during compilation; I believe there is code that's using older constructs:

error: unknown type name 'DLManagedTensorVersioned'
   15 | TORCH_API DLManagedTensorVersioned* toDLPackVersioned(const Tensor& src);
...
: error: unknown type name 'DLManagedTensorVersioned'
   19 |     DLManagedTensorVersioned* src,
      |     ^
....
error: use of undeclared identifier 'DLManagedTensorVersioned'
   52 | struct DLPackTraits<DLManagedTensorVersioned> {
      |      

@ysiraichi
Collaborator Author

I see. DLManagedTensorVersioned is actually a struct that was introduced in this PR, and lives inside dlpack.h. If there's anything I can do on my side, let me know.

@albanD
Collaborator

albanD commented Jun 20, 2025

This is an executorch build issue for some reason.
@mergennachin do you know what is causing this?

@ezyang
Contributor

ezyang commented Jun 23, 2025

LLM proposes

diff --git a/fbcode/caffe2/aten/src/ATen/dlpack.h b/fbcode/caffe2/aten/src/ATen/dlpack.h
--- a/fbcode/caffe2/aten/src/ATen/dlpack.h
+++ b/fbcode/caffe2/aten/src/ATen/dlpack.h
@@ -292,7 +292,7 @@
  *
  * \note This is the current standard DLPack exchange data structure.
  */
-struct DLManagedTensorVersioned {
+typedef struct DLManagedTensorVersioned {
   /*!
    * \brief The API and ABI version of the current managed Tensor
    */
@@ -326,7 +326,7 @@
   uint64_t flags;
   /*! \brief DLTensor which is being memory managed */
   DLTensor dl_tensor;
-};
+} DLManagedTensorVersioned;
 
 #ifdef __cplusplus
 }  // DLPACK_EXTERN_C

Which is kind of sus, not sure if this fixes the problem.

NARRATOR: It did not work.

TORCH_API ScalarType toScalarType(const DLDataType& dtype);
TORCH_API DLManagedTensor* toDLPack(const Tensor& src);
TORCH_API Tensor fromDLPack(DLManagedTensor* src);
TORCH_API DLManagedTensorVersioned* toDLPackVersioned(const Tensor& src);
Contributor


This fixes internal failures:

Suggested change
TORCH_API DLManagedTensorVersioned* toDLPackVersioned(const Tensor& src);
TORCH_API struct DLManagedTensorVersioned* toDLPackVersioned(const Tensor& src);
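
For reference, this works because an elaborated type specifier (struct X) can itself declare the type at its point of use, so the declaration no longer depends on dlpack.h having been included first. A standalone illustration (hypothetical names):

// Foo has not been declared anywhere above.
struct Foo* make_foo();  // OK: the elaborated specifier declares struct Foo
// Foo* broken();        // error: unknown type name 'Foo'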

Contributor


cc @ysiraichi could you please try the suggested change. I can help import this PR internally and make sure the signal is green before merging.

Collaborator Author


Done.

Collaborator

@albanD albanD left a comment


Thanks!
I don't have a devserver on hand, so trying to go with the regular land!

@albanD
Collaborator

albanD commented Jun 30, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / win-vs2022-cpu-py3 / test (default, 3, 3, windows.4xlarge.nonephemeral)


@albanD
Collaborator

albanD commented Jun 30, 2025

@pytorchbot merge -i

@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 1 checks: trunk / win-vs2022-cpu-py3 / test (default, 3, 3, windows.4xlarge.nonephemeral)

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@albanD
Collaborator

albanD commented Jul 3, 2025

This landed smoothly, thanks @ysiraichi for the change. We can merge the rest of the stack now!

@github-actions github-actions bot deleted the gh/ysiraichi/80/head branch August 3, 2025 02:20
@tqchen
Contributor

tqchen commented Aug 29, 2025

Just want to follow up on #145000 (comment): we now have DLPack 1.1, which comes with F8 and F4 data types. It would be great to get help landing support for them, hopefully as a matter of updating (a sketch follows the excerpt):

  • // TODO(#146647): use macro here instead of spelling out each shell dtype
    case ScalarType::Float8_e5m2:
    case ScalarType::Float8_e5m2fnuz:
    case ScalarType::Float8_e4m3fn:
    case ScalarType::Float8_e4m3fnuz:
    case ScalarType::Float8_e8m0fnu:
    TORCH_CHECK_BUFFER(false, "float8 types are not supported by dlpack");
    break;
    case ScalarType::Float4_e2m1fn_x2:
    TORCH_CHECK_BUFFER(false, "float4 types are not supported by dlpack");
    break;
    case ScalarType::QInt8:
    case ScalarType::QUInt8:
    case ScalarType::QInt32:
    case ScalarType::QUInt4x2:
    case ScalarType::QUInt2x4:
    TORCH_CHECK_BUFFER(false, "QUInt/QInt types are not supported by dlpack");
    break;
    case ScalarType::Bits1x8:
    case ScalarType::Bits2x4:
    case ScalarType::Bits4x2:
    case ScalarType::Bits8:
    case ScalarType::Bits16:
    TORCH_CHECK_BUFFER(false, "Bit types are not supported by dlpack");
    break;
  • to map to https://github.com/dmlc/dlpack/blob/main/include/dlpack/dlpack.h#L162
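
For instance, a hedged sketch of what the float8 branch could turn into, assuming a dlpack.h >= 1.1 that exposes kDLFloat8_e5m2-style DLDataTypeCode values (the exact enum names should be checked against the header):

// Hypothetical sketch, not actual PR code: map float8 scalar types to
// the DLPack 1.1 type codes instead of raising an error.
case ScalarType::Float8_e5m2:
  dtype.code = DLDataTypeCode::kDLFloat8_e5m2;
  dtype.bits = 8;
  dtype.lanes = 1;
  break;
case ScalarType::Float8_e4m3fn:
  dtype.code = DLDataTypeCode::kDLFloat8_e4m3fn;
  dtype.bits = 8;
  dtype.lanes = 1;
  break;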

@ysiraichi
Collaborator Author

I'm sorry, but I'm not able to work on this anymore. Feel free to pick this up.


Labels

ci-no-td (Do not run TD on this PR), ciflow/trunk (Trigger trunk jobs on your pull request), Merged, module: bc-breaking (Related to a BC-breaking change), module: dlpack, open source, release notes: python_frontend (python frontend release notes category), Reverted, topic: bc breaking (topic category)
