Add **quantization_kwargs to FrozenNF4Linear, LoRALinear, and DoRALinear #1987
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1987
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 1196f8a with merge base ac4f88e.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

@@            Coverage Diff             @@
##              main    #1987       +/-   ##
===========================================
+ Coverage    24.79%   64.96%   +40.16%
===========================================
  Files          318      318
  Lines        17597    17610       +13
===========================================
+ Hits          4364    11440     +7076
+ Misses       13233     6170     -7063

☔ View full report in Codecov by Sentry.
"""Test that passing in non-default quantization kwargs works as expected.""" | ||
quantization_kwargs = { | ||
"block_size": 16, | ||
"scaler_block_size": 256, |
Sad my spelling mistake is now surfaced lol
Add ``**quantization_kwargs`` to ``FrozenNF4Linear`` and ``LoRALinear`` and ``DoRALinear`` (#1987)
Context
What is the purpose of this PR?
The original error that sparked this investigation: I was able to run Llama3.2V QLoRA on 4 GPUs, but it failed on 8 GPUs.
Weird, right? Digging further, @pbontrager figured out that the default block_size and scaler_block_size used when converting the weights to NF4 were the wrong size once the model was sharded across 8 GPUs. After consulting with @drisspg, it seems relatively harmless to modify the scaler_block_size when quantizing these weights. Therefore, the fix here is to expose quantization_kwargs on FrozenNF4Linear, LoRALinear, and DoRALinear. A follow-up PR will actually land the changes in the model builders to resolve the initial error.
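For reference, here is a minimal sketch (illustrative only, not code from this PR) of the torchao call these kwargs ultimately feed into. Both sizes must evenly divide the number of elements in the (possibly sharded) weight, which is why the defaults can stop working once the weight is split across 8 GPUs:

```python
# Illustrative sketch of torchao's NF4 conversion; the block sizes below are
# example values, not the ones chosen by the eventual fix.
import torch
from torchao.dtypes.nf4tensor import to_nf4

weight = torch.randn(256, 512, dtype=torch.bfloat16)

# Defaults: block_size=64, scaler_block_size=256
nf4_default = to_nf4(weight)

# Non-default kwargs, e.g. a smaller scaler_block_size that still divides
# the sharded weight evenly
nf4_custom = to_nf4(weight, block_size=64, scaler_block_size=128)
```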
Changelog
What are the changes made in this PR? (A rough sketch of the plumbing is shown below.)
- Add quantization_kwargs to FrozenNF4Linear
- Add quantization_kwargs to LoRALinear
- Add quantization_kwargs to DoRALinear
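A rough sketch of that plumbing (the class below is hypothetical; the actual torchtune implementation may differ in details such as bias handling and initialization), showing how the new **quantization_kwargs can simply be forwarded to the NF4 conversion:

```python
# Hypothetical sketch of forwarding **quantization_kwargs through a frozen
# NF4 linear layer; to_nf4 and linear_nf4 are real torchao APIs, the class
# itself is illustrative.
import torch
from torch import nn
from torchao.dtypes.nf4tensor import linear_nf4, to_nf4


class FrozenNF4LinearSketch(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, **quantization_kwargs):
        super().__init__()
        weight = torch.randn(out_dim, in_dim, dtype=torch.bfloat16)
        # Caller-supplied block_size / scaler_block_size override torchao's
        # defaults; the quantized weight stays frozen.
        self.weight = nn.Parameter(
            to_nf4(weight, **quantization_kwargs), requires_grad=False
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Matmul against the NF4 weight without materializing it in full.
        return linear_nf4(input=x, weight=self.weight)
```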
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
- pre-commit install
- pytest tests
- pytest tests -m integration_test
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.
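As a dummy example of the new user experience (signatures assumed from the PR title and the test snippet above; the exact API that landed may differ):

```python
# Hypothetical user-facing example: passing non-default NF4 quantization
# kwargs when building LoRA/DoRA layers with a quantized base weight.
from torchtune.modules.peft import DoRALinear, LoRALinear

quantization_kwargs = {"block_size": 16, "scaler_block_size": 256}

lora = LoRALinear(
    in_dim=512,
    out_dim=512,
    rank=8,
    alpha=16.0,
    quantize_base=True,      # base weight is converted to NF4
    **quantization_kwargs,   # forwarded to the NF4 conversion
)

dora = DoRALinear(
    in_dim=512,
    out_dim=512,
    rank=8,
    alpha=16.0,
    quantize_base=True,
    **quantization_kwargs,
)
```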