
Update validation, remove slicing logic from classes #1660

Merged: 4 commits merged into pytorch:main on Sep 24, 2024

Conversation

joecummings (Contributor)

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?
*

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any of these, just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.

  • I did not change any public API
  • I have added an example to docs or docstrings


pytorch-bot bot commented Sep 23, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1660

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3dfd1dd with merge base bf93806:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Sep 23, 2024.
@felipemello1 felipemello1 (Contributor) left a comment

Minor nits. I worry about the logic for checking whether the cache is enabled; my intuition is that it's not robust enough.

I would like to see a test confirming that these errors are properly raised. Do you think that's possible?
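
A minimal sketch of what such a test could look like, using pytest.raises. This is not torchtune's actual test suite; DummyLayer and DummyDecoder are hypothetical stand-ins that mirror the validation pattern under review:

import pytest

class DummyLayer:
    """Hypothetical stand-in for a transformer layer with a KV-cache flag."""

    def __init__(self):
        self.cache_enabled = False

class DummyDecoder:
    """Hypothetical stand-in mirroring the validation pattern in this PR."""

    def __init__(self, num_layers: int = 2):
        self.layers = [DummyLayer() for _ in range(num_layers)]

    def setup_caches(self) -> None:
        for layer in self.layers:
            layer.cache_enabled = True

    def caches_are_enabled(self) -> bool:
        # Same layer-0 check as the snippet under review.
        return self.layers[0].cache_enabled

    def forward(self, tokens, mask=None):
        if self.caches_are_enabled() and mask is None:
            raise ValueError(
                "KV-caches for self-attention layers are setup for inference mode."
                " Use the `mask` arg to provide a causal mask."
            )
        return tokens

def test_forward_raises_without_mask_when_caches_enabled():
    model = DummyDecoder()
    model.setup_caches()
    with pytest.raises(ValueError, match="KV-caches"):
        model.forward(tokens=[1, 2, 3])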

    return self.decoder_max_cache_seq_len is not None

def caches_are_enabled(self) -> bool:
    """Check if the key value caches are setup."""
    return self.layers[0].cache_enabled
Contributor:

I see that in the old logic we checked encoder/decoder separately. It seems that this is not necessary, and we can always check layer 0.

Is it enough? It makes sense to me, just double-checking.
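
If the layer-0 check ever proves too fragile (e.g. for models whose layers are configured heterogeneously), a more defensive variant could inspect every layer. A sketch under that assumption, not the actual torchtune code:

def caches_are_enabled(self) -> bool:
    """Check that key value caches are set up on every layer, not just layer 0."""
    # `all` short-circuits on the first layer without a cache.
    return all(layer.cache_enabled for layer in self.layers)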

    layers which have cache enabled."""
    return self.decoder_max_cache_seq_len is not None

def caches_are_enabled(self) -> bool:
    """Check if the key value caches are setup."""
Contributor:

nit: fine as it is, but maybe add context about how/when this is used, something like "useful during inference to xyz". Feel free to ignore.

"KV-caches for cross-attention/fusion layers are setup for inference mode, causal masks must be provided!"
" Use the `encoder_mask` arg to provide a causal mask."
"KV-caches for cross-attention/fusion layers are setup for inference mode and you seem to be using"
" encoder_input, causal masks must be provided! Use the `encoder_mask` arg to provide a causal mask."
Contributor:

nit: a period instead of a comma would probably be better.

if mask is None:
    raise ValueError(
        "KV-caches for self-attention layers are setup for inference mode, causal masks must be provided!"
        " Use the `mask` arg to provide a causal mask."
    )

if self.encoder_caches_are_enabled():
    if encoder_mask is None:

if encoder_input is None and encoder_mask:
Contributor:

Is there some good default for encoder_mask, like a causal mask? If not, just mark as resolved.
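
If a default were ever desired, a boolean lower-triangular causal mask is easy to construct with PyTorch. A hypothetical sketch; `default_causal_mask` and `seq_len` are illustrative names, not part of this PR:

import torch

def default_causal_mask(seq_len: int) -> torch.Tensor:
    # Position i may attend to positions <= i (standard causal masking).
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))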

Contributor Author (joecummings):

I plan on removing / re-working most of this validation logic to have better defaults, but that will be a follow-up PR.

@felipemello1 felipemello1 (Contributor) left a comment

Stamping to unblock, but IMO adding a small test before merging would be much better.

@joecummings joecummings merged commit b4fea32 into pytorch:main Sep 24, 2024
17 checks passed
@joecummings joecummings deleted the remove-fusion-valdiation branch September 24, 2024 02:26