Build vLLM nightly wheels #162000
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Build vLLM nightly wheels #162000
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/162000
Note: Links to docs will display an error until the docs builds have been completed.
As of commit 754cd75 with merge base 8ec551b: ⏳ 1 pending, 2 unrelated failures (FLAKY: failed jobs were likely due to flakiness present on trunk).
This comment was automatically generated by Dr. CI and updates every 15 minutes.
cc @atalman The upload looks fine, I think: the vLLM cu12[89] wheels go to https://download.pytorch.org/whl/nightly/cu12[89] accordingly, see https://github.com/pytorch/pytorch/actions/runs/17456147314/job/49575617824#step:10:23
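To double-check from the client side, here is a minimal sketch that lists what the index actually serves, assuming the nightly index follows the usual PEP 503 simple-index layout with a per-package page at `/vllm/`:

```
# Sketch: list the vLLM wheel filenames published on the cu129 nightly index.
# Assumes the index exposes a per-package HTML page at /vllm/ (PEP 503 style).
curl -fsSL https://download.pytorch.org/whl/nightly/cu129/vllm/ \
  | grep -oE 'vllm-[^"<]+\.whl' | sort -u
```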
Per the comment from @atalman, let's include a build for CUDA 13.0. We can keep the CUDA 12.9 build for now and remove it later.
A note here for future reference when adding CUDA 13.0: building vLLM with CUDA 13.0 is currently failing, see https://github.com/pytorch/pytorch/actions/runs/17510237984/job/49740804047. The error indicates that vLLM needs to follow https://nvidia.github.io/cccl/cccl/3.0_migration_guide.html to address some breaking changes in CUDA 13.0, similar to what #153373 does for PyTorch. cc @atalman
@pytorchbot drci
@pytorchbot merge -f 'Ready to land'
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This uses the same approach as building the Triton wheel, where we publish a nightly wheel for vLLM whenever its pinned commit is updated. The key change is to use `pytorch/manylinux2_28-builder` as the base image to build vLLM, which requires a couple of changes to the vLLM Dockerfile used by lumen_cli:

1. `pytorch/manylinux2_28-builder` is RedHat-based instead of Debian-based, so there is no apt-get
2. Fix a bug in `.github/actions/build-external-packages/action.yml` where `CUDA_VERSION` was not set correctly, preventing the CUDA 12.9 build
3. Fix a bug in `.github/actions/build-external-packages/action.yml` where `TORCH_WHEELS_PATH` was not set correctly and always defaulted to `dist`
4. In the vLLM Dockerfile, use the correct index for the selected CUDA version, i.e. https://download.pytorch.org/whl/nightly/cu12[89] for CUDA 12.[89]
5. Install torch, vision, and audio in one command. Unlike the CI image `pytorch-linux-jammy-cuda12.8-cudnn9-py3.12-gcc11-vllm`, `pytorch/manylinux2_28-builder` doesn't have any torch dependencies preinstalled
6. Bump the xformers version to 0.0.32.post2 now that PyTorch 2.8.0 has landed on vLLM

We need to prepare 3 wheels: vLLM, xformers, and flashinfer-python. I rename them following the same convention as PyTorch nightlies, `MAJOR.MINOR.PATCH.devYYYYMMDD`, so that vLLM nightlies work with torch nightlies from the same date (a sketch of this versioning follows the usage examples below).

### Usage

* Install the latest nightlies

```
pip install --pre torch torchvision torchaudio vllm xformers flashinfer_python \
  --index-url https://download.pytorch.org/whl/nightly/cu129
```

* Install a specific version

```
pip install --pre torch==2.9.0.dev20250903 torchvision torchaudio \
  vllm==1.0.0.dev20250903 \
  xformers==0.0.33.dev20250903 \
  flashinfer_python==0.2.14.dev20250903 \
  --index-url https://download.pytorch.org/whl/nightly/cu129
```

Pull Request resolved: pytorch#162000
Approved by: https://github.com/atalman
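For reference, a minimal sketch of the `MAJOR.MINOR.PATCH.devYYYYMMDD` versioning described above; the base versions are taken from the usage example, and keying the date to the UTC build date is an assumption for illustration, not the exact CI script:

```
# Sketch only: derive PyTorch-nightly-style version strings for the three wheels.
# Base versions come from the usage example above; the date is assumed to be the
# UTC build date so the wheels line up with the torch nightly from the same day.
NIGHTLY_DATE=$(date -u +%Y%m%d)
echo "vllm==1.0.0.dev${NIGHTLY_DATE}"
echo "xformers==0.0.33.dev${NIGHTLY_DATE}"
echo "flashinfer_python==0.2.14.dev${NIGHTLY_DATE}"
```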
A follow-up after pytorch/pytorch#162000 to surface these wheels on the PyTorch index.
Signed-off-by: Huy Do <huydhn@gmail.com>
I suspected that I would need to repack the vLLM wheels from #162000 because I renamed the wheel, and it turned out to be true. The error is as follows:

```
$ uv pip install --pre xformers --index-url https://download.pytorch.org/whl/nightly/cu129
Using Python 3.12.11+meta environment at: venv/py3.12
Resolved 28 packages in 759ms
error: Failed to install: xformers-0.0.33.dev20250901+cu129-cp39-abi3-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (xformers==0.0.33.dev20250901+cu129)
  Caused by: Wheel version does not match filename: 0.0.33+5d4b92a5.d20250907 != 0.0.33.dev20250901+cu129
```

Pull Request resolved: #162371
Approved by: https://github.com/atalman
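The underlying issue is that renaming a wheel file alone leaves the `Version` recorded in its `.dist-info/METADATA` unchanged, so installers reject the mismatch. A minimal sketch of one way to repack with the `wheel` CLI follows; the filenames, versions, and platform tag are illustrative placeholders, not the exact fix in #162371:

```
# Sketch: repack a wheel so its internal metadata matches the renamed version.
# OLD/NEW versions and the platform tag below are illustrative placeholders.
OLD_VERSION="0.0.33+5d4b92a5.d20250907"
NEW_VERSION="0.0.33.dev20250901"
WHEEL_FILE="xformers-${OLD_VERSION}-cp39-abi3-linux_x86_64.whl"

pip install wheel                # provides the `wheel unpack` / `wheel pack` CLI
wheel unpack "${WHEEL_FILE}" -d unpacked
cd "unpacked/xformers-${OLD_VERSION}"

# Rewrite the version in METADATA and rename the dist-info directory to match
# (GNU sed syntax; macOS sed would need `sed -i ''`).
sed -i "s/^Version: .*/Version: ${NEW_VERSION}/" \
  "xformers-${OLD_VERSION}.dist-info/METADATA"
mv "xformers-${OLD_VERSION}.dist-info" "xformers-${NEW_VERSION}.dist-info"

# `wheel pack` regenerates RECORD and derives the new filename from the metadata.
cd .. && wheel pack "xformers-${OLD_VERSION}"
```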