[TRTLLM-5826][feat] Support pytorch LoRA adapter eviction #5616
Conversation
c86896a to 709ff70 (Compare)
/bot run
PR_Github #10850 [ run ] triggered by Bot
PR_Github #10850 [ run ] completed with state
/bot run --gpu-type A100X --disable-multi-gpu-test --post-merge
PR_Github #11046 [ run ] triggered by Bot
PR_Github #11046 [ run ] completed with state
495f319 to 523efb7 (Compare)
/bot run
PR_Github #11059 [ run ] triggered by Bot
PR_Github #11059 [ run ] completed with state
beeb0a3 to 45fb302 (Compare)
/bot run
PR_Github #11117 [ run ] triggered by Bot
/bot run --post-merge
PR_Github #11148 [ run ] triggered by Bot
PR_Github #11117 [ run ] completed with state
PR_Github #11148 [ run ] completed with state
AutoDeploy change LGTM
/bot run
Description
Changes:

1. Changed `BindCapacityScheduler` to pass `peft_cache_manager` to the CPP binding.
   1.1. Fixed `BindCapacityScheduler` constructions accordingly.
2. Changed `PeftCacheManager.free_resources` to call `mark_request_done`.
3. Made the optimization of skipping LoRA weights for previously loaded adapters aware of the CPU cache state in `LoraManager` (see the sketch after this list):
   3.1. Removed support for this optimization for non-torch flow.
   3.2. Added `LoraManager.is_adapter_in_cpu_cache` method.
   3.3. Added optional `cpp_peft_cache_manager` argument to the `LoraManager` constructor, used by its newly added `is_adapter_in_cpu_cache` method.
   3.4. Changed `GenerationExecutorWorker`, in torch flow only, to get the CPP peft cache manager and pass it to the `LoraManager` constructor.
4. Added the `test_llama_7b_multi_lora_evict_load_new_adapters` test to `test_llm.py` and to `test_llm_pytorch.py`.
5. Moved test code shared by `test_llm.py` and `test_llm_pytorch.py` to a separate file, saving duplication.
6. Improved the error raised when a request arrives without LoRA weights and its adapter is not found in the cache:
   6.1. For pytorch flow - Changed `PeftCacheManager::determineNumPages` to throw a `PeftTaskNotCachedException` with a detailed "not supported note" when the request has no LoRA weights, no LoRA config, and its LoRA adapter is not found in cache.
   6.2. For TRT flow - Appended a detailed "not supported note" to the error message thrown in `PeftCacheManager::addRequestPeft` when the request has no LoRA weights or no LoRA config and its LoRA adapter is not found in cache. REVERTED, as the optimization was disabled for non-torch flow.
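A minimal Python sketch of how the `LoraManager` pieces from change 3 could fit together. Apart from the names `LoraManager`, `is_adapter_in_cpu_cache`, and `cpp_peft_cache_manager`, which are taken from the list above, everything below is a simplified stand-in; in particular the binding call `is_task_cached` and the helper `should_send_weights` are assumptions, not the actual TensorRT-LLM implementation.

```python
# Hypothetical sketch, not the actual TensorRT-LLM code: illustrates how an
# optional handle to the CPP peft cache manager lets LoraManager report
# whether an adapter is still resident in the LoRA CPU cache, so callers can
# decide whether a request must carry the adapter weights again.
from typing import Optional


class LoraManager:
    def __init__(self, cpp_peft_cache_manager: Optional[object] = None):
        # Changes 3.3/3.4: only the torch-flow GenerationExecutorWorker is
        # expected to pass this handle; elsewhere it stays None.
        self._cpp_peft_cache_manager = cpp_peft_cache_manager

    def is_adapter_in_cpu_cache(self, adapter_uid: int) -> bool:
        """Return True if the adapter is still in the LoRA CPU cache.

        The binding call name `is_task_cached` is an assumption made for this
        sketch. Without a cache handle, conservatively report "not cached" so
        the caller always attaches the adapter weights.
        """
        if self._cpp_peft_cache_manager is None:
            return False
        return self._cpp_peft_cache_manager.is_task_cached(adapter_uid)

    def should_send_weights(self, adapter_uid: int) -> bool:
        # Hypothetical helper: skip resending weights only when the adapter
        # has not been evicted from the CPU cache.
        return not self.is_adapter_in_cpu_cache(adapter_uid)


# Without a CPP cache handle (e.g. non-torch flow), weights are always sent.
manager = LoraManager()
assert manager.should_send_weights(adapter_uid=7)
```

Per changes 3.1 and 3.4, only the torch flow wires in the cache handle, so constructing the manager without it effectively disables the optimization and falls back to always sending the weights.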
Test Coverage

- `test_llm.py::test_llama_7b_multi_lora_evict_load_new_adapters`
- `test_llm_pytorch.py::test_llama_7b_multi_lora_evict_load_new_adapters`
- `test_llm_pytorch.py::test_llama_7b_multi_lora_load_previously_cpu_cache_evicted_adapter_fails`
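These can be run locally through pytest's Python entry point; a minimal sketch, assuming the working directory contains the two test files (their repository paths are not given above):

```python
# Sketch only: run the coverage listed above. Assumes the current working
# directory contains test_llm.py and test_llm_pytorch.py.
import pytest

pytest.main([
    "test_llm.py::test_llama_7b_multi_lora_evict_load_new_adapters",
    "test_llm_pytorch.py::test_llama_7b_multi_lora_evict_load_new_adapters",
    "test_llm_pytorch.py::test_llama_7b_multi_lora_load_previously_cpu_cache_evicted_adapter_fails",
])
```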
GitHub Bot Help

`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
run

`run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-[Post-Merge]-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.

kill
`kill`

Kill all running builds associated with pull request.
skip
`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
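For quick reference, a few example comment invocations composed only from the subcommands and options documented above (plain `/bot run` and `/bot run --post-merge` also appear in the conversation on this PR):

```
/bot run
/bot run --gpu-type "A30, H100_PCIe" --disable-multi-gpu-test
/bot run --stage-list "A10-1" --disable-fail-fast
/bot run --post-merge
/bot kill
/bot skip --comment "Reason for skipping build/test"
/bot reuse-pipeline
```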