Add auto-detection for Intel GPU on Windows #16280
Conversation
Force-pushed from 1048bee to 2931e50.
@charliermarsh and @geofft, could you please help review this PR when you have time?
Force-pushed from a6fe51f to f3ee202.
@charliermarsh and @geofft, may I ask if you could help review this PR?
Please be patient with the uv team; we only have limited resources for reviewing complex PRs.
Totally understand, thanks for taking the time to review! Really appreciate the efforts from the uv team.
crates/uv-torch/src/accelerator.rs
Outdated
use uv_pep440::Version;
use uv_static::EnvVars;

#[cfg(target_os = "windows")]
nit: this should be #[cfg(windows)]
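For readers unfamiliar with the distinction, a minimal standalone sketch (not code from this PR) of the two attribute forms; both gate compilation to Windows targets, and `#[cfg(windows)]` is simply the shorter, conventional spelling:

```rust
// Sketch only: both attributes compile their item on Windows targets.

// Compiled only when the target operating system is Windows.
#[cfg(target_os = "windows")]
fn os_gated() {}

// Compiled for the Windows target family; the form suggested in the nit above.
#[cfg(windows)]
fn family_gated() {}

fn main() {
    // On non-Windows targets this whole block is compiled out.
    #[cfg(windows)]
    {
        os_gated();
        family_gated();
    }
}
```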
Thanks, the code looks good to me! As discussed in #14386 (comment), we will grab the XPU version on machines with an Intel iGPU that isn't actually useful for GPGPU work, but we said that's probably okay because this version still has CPU support, and detecting other types of GPUs takes precedence.

I manually tested this a little bit and the behavior is what I expect. (I tested on two EC2 VMs by modifying the code to match on Microsoft's PCI ID instead of Intel's; the one without an additional GPU downloads the XPU version, and the one with an NVIDIA T4 downloads the CUDA version.)

If anyone in the future feels we should not download the XPU version on machines with insufficiently powerful iGPUs, we can definitely iterate on the behavior; please report a new issue and tag me.
@geofft Thank you so much for the thorough testing and clear explanation! Really appreciate your detailed validation and feedback. Please feel free to tag me if there's anything I can help with.
Thanks @guangyey, appreciate the contribution!
@charliermarsh Thanks! Happy to contribute. It's been an interesting and rewarding experience.
Cross-referenced by a Renovate Bot update MR bumping astral-sh/uv from 0.9.3 to 0.9.5; the v0.9.4 release notes list this change under Enhancements: "Add auto-detection for Intel GPU on Windows (#16280)".
Summary
This PR enables `--torch-backend=auto` to automatically detect Intel GPUs. It follows up on #14386. On Windows, detection is implemented by querying the `Win32_VideoController` WMI class via the `wmi` crate. Currently, Intel GPUs (XPU) do not depend on specific driver or toolkit versions to determine which PyTorch wheel to use.
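For readers curious about the mechanism, here is a rough, standalone sketch of WMI-based detection using the `wmi` and `serde` crates. It is not the code merged in this PR; the struct layout, the use of `PNPDeviceID`, and the `VEN_8086` vendor-ID check are illustrative assumptions (uv's actual matching logic may differ), and it only compiles on Windows:

```rust
// Sketch only: query Win32_VideoController via WMI and look for an Intel
// vendor ID. Requires the `wmi` and `serde` crates; Windows-only.
use serde::Deserialize;
use wmi::{COMLibrary, WMIConnection};

// Deserialized subset of the Win32_VideoController WMI class.
#[derive(Deserialize, Debug)]
#[serde(rename = "Win32_VideoController", rename_all = "PascalCase")]
struct VideoController {
    name: Option<String>,
    #[serde(rename = "PNPDeviceID")]
    pnp_device_id: Option<String>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let com = COMLibrary::new()?;
    let conn = WMIConnection::new(com)?;

    // Fetch every video controller known to WMI.
    let controllers: Vec<VideoController> = conn.query()?;

    // Intel's PCI vendor ID is 0x8086; PNP device IDs embed it as "VEN_8086".
    let has_intel_gpu = controllers.iter().any(|c| {
        c.pnp_device_id
            .as_deref()
            .map_or(false, |id| id.to_ascii_uppercase().contains("VEN_8086"))
    });

    for c in &controllers {
        println!("found adapter: {:?}", c.name);
    }
    println!("Intel GPU detected: {has_intel_gpu}");
    Ok(())
}
```

As the review discussion above notes, a vendor-ID match alone cannot tell an integrated Intel GPU from a discrete one, so XPU selection currently errs on the side of matching any Intel adapter, with other GPU types taking precedence.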
Test Plan
Running `uv pip install torch --torch-backend=auto` successfully installs the XPU build of torch when an Intel GPU is detected.