
[BUG] Include system prompt in Phi3 by default #1778

Merged 1 commit into pytorch:main on Oct 9, 2024

Conversation

@joecummings (Contributor) commented Oct 9, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Closes #1774. After the discussion on HF, the Microsoft team reversed their decision and included the system prompt in Phi3 by default, but we never updated our defaults to match. This PR now sets ignore_system_prompt=False by default; the setting is still overridable from the tokenize_messages API.
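As a rough sketch of the resulting behavior (the phi3_mini_tokenizer builder name and the exact keyword plumbing are assumptions based on torchtune's phi3 module; the point is the new default and the per-call override):

from torchtune.data import Message
from torchtune.models.phi3 import phi3_mini_tokenizer

tokenizer = phi3_mini_tokenizer(path="/path/to/phi3/tokenizer.model")
messages = [
    Message(role="system", content="You are a helpful assistant."),
    Message(role="user", content="Hello!"),
]

# New default: the system prompt is included in the tokenized output.
tokens, mask = tokenizer.tokenize_messages(messages)

# The previous behavior remains available as a per-call override.
tokens, mask = tokenizer.tokenize_messages(messages, ignore_system_prompt=True)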

Changelog

What are the changes made in this PR?

  • Set ignore_system_prompt=False by default in the Phi3 tokenizer
  • Add proper tests to ensure the change works (see the sketch below)
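A hedged sketch of what such a test might look like (the fixture and assertion are illustrative, not the exact tests added in this PR):

from torchtune.data import Message

def test_system_prompt_included_by_default(phi3_tokenizer):
    # phi3_tokenizer: hypothetical fixture returning a Phi3MiniTokenizer
    messages = [
        Message(role="system", content="Be concise."),
        Message(role="user", content="Hi"),
    ]
    tokens, _ = phi3_tokenizer.tokenize_messages(messages)
    # With ignore_system_prompt=False, the <|system|> token should appear.
    assert phi3_tokenizer.special_tokens["<|system|>"] in tokens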

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these, just ask and we will happily help. We also have a contributing page for guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings

@pytorch-bot (bot) commented Oct 9, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1778

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c65f4b9 with merge base 89f21c2:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Oct 9, 2024
@joecummings merged commit 60864e3 into pytorch:main on Oct 9, 2024
17 checks passed
@joecummings mentioned this pull request on Oct 9, 2024
@zjost commented Oct 9, 2024

Thanks for this. When looking at their model card and example training script, I noticed a few other things:

  • add_bos_token and add_eos_token are both False
  • pad_token is overwritten to unk_token to "prevent endless generation"
  • Their chat template has an add_generation_prompt option
    • I'm not sure whether we can just use inference: bool to turn this on or off, because I'm not really confident about when it should be on.

I came up with the following implementation, but I'm sure you can improve upon it.

# Imports added for completeness (the original comment omitted them; these
# paths approximate torchtune's layout at the time and may differ slightly):
from typing import Any, Dict, List, Mapping, Optional, Tuple

from torchtune.data import Message, PromptTemplate, truncate
from torchtune.models.phi3 import Phi3MiniTokenizer
from torchtune.models.phi3._tokenizer import PHI3_SPECIAL_TOKENS
from torchtune.modules.tokenizers import SentencePieceBaseTokenizer


class NewTokenizer(Phi3MiniTokenizer):

    def __init__(
        self,
        path: str,
        special_tokens: Optional[Dict[str, int]] = None,
        max_seq_len: Optional[int] = None,
        prompt_template: Optional[PromptTemplate] = None,
    ):
        self._spm_model = SentencePieceBaseTokenizer(path)

        self.special_tokens = (
            special_tokens if special_tokens is not None else PHI3_SPECIAL_TOKENS
        )

        # Use custom EOS and pad ids instead of SentencePiece's
        self.eos_id = self.special_tokens["<|endoftext|>"]
        
        # See: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/sample_finetune.py#L141
        # self.pad_id = self.special_tokens["<|endoftext|>"]
        self.pad_id = self._spm_model.spm_model.unk_id()

        # During generation, stop when eos_id is encountered
        self.stop_tokens = [self.eos_id]

        self.max_seq_len = max_seq_len

        self.prompt_template = prompt_template

    def tokenize_messages(
        self,
        messages: List[Message],
        *,
        add_generation_prompt: bool = False,
    ) -> Tuple[List[int], List[bool]]:

        templated_messages = (
            self.prompt_template(messages)
            if self.prompt_template is not None
            else messages
        )

        tokenized_messages = []
        mask = []

        # The chat template in HF adds a bunch of newlines
        new_line_token_id = self.encode("\n", add_bos=False, add_eos=False, trim_leading_whitespace=True)

        for message in templated_messages:

            # Add special tokens
            if message.role == "user":
                tokenized_messages.append(self.special_tokens["<|user|>"])
                mask.append(message.masked)
            elif message.role == "assistant":
                tokenized_messages.append(self.special_tokens["<|assistant|>"])
                mask.append(message.masked)
            elif message.role == "system":
                tokenized_messages.append(self.special_tokens["<|system|>"])
                mask.append(message.masked)
            else:
                raise ValueError(
                    f"Unknown role '{message.role}' for message: '{message.content}'"
                )

            # Add new line token
            tokenized_messages.extend(new_line_token_id)
            mask.extend([message.masked] * len(new_line_token_id))

            # Tokenize current message, append with masks
            tokens = []
            for item in message.content:
                if item["type"] == "text":
                    tokens = tokens + self.encode(
                        item["content"].rstrip(" "),
                        add_bos=False,
                        add_eos=False,
                        trim_leading_whitespace=True,  # Always trim whitespace (just to match HF tokenizer implementation)
                    )
                else:
                    raise RuntimeError(
                        f"Unsupported message content type: {item['type']}"
                    )

            tokens = tokens + [self.special_tokens["<|end|>"]] + new_line_token_id
            tokenized_messages.extend(tokens)
            mask.extend([message.masked] * len(tokens))

            # Break out early if we reach max_seq_len
            if self.max_seq_len and len(tokenized_messages) >= self.max_seq_len:
                break

        if add_generation_prompt:
            tokenized_messages.append(self.special_tokens["<|assistant|>"])
            mask.append(message.masked)
            tokenized_messages.extend(new_line_token_id)
            mask.extend([message.masked] * len(new_line_token_id))
        else:
            tokenized_messages.append(self.eos_id)
            mask.append(message.masked)

        # Finally, truncate if necessary
        if self.max_seq_len and len(tokenized_messages) >= self.max_seq_len:
            tokenized_messages = truncate(
                tokenized_messages, self.max_seq_len, self.eos_id
            )
            mask = truncate(mask, self.max_seq_len, message.masked)

        return tokenized_messages, mask
    
    def __call__(
        self, sample: Mapping[str, Any], inference: bool = False
    ) -> Mapping[str, Any]:

        messages = sample.pop("messages")
        tokens, mask = self.tokenize_messages(messages, add_generation_prompt=inference)
        sample["tokens"] = tokens
        sample["mask"] = mask
        return sample
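
For context, a minimal usage example of the sketch above (paths and messages are made up; note that __call__ pops "messages", so each call gets its own dict):

tokenizer = NewTokenizer(path="/path/to/tokenizer.model")
messages = [
    Message(role="system", content="You are a helpful assistant."),
    Message(role="user", content="What is 2 + 2?"),
]

# Training-style call: the sequence ends with <|end|>, a newline, and EOS.
train_out = tokenizer({"messages": messages})

# Inference-style call: appends <|assistant|> plus a newline instead of EOS,
# cueing the model to continue as the assistant.
infer_out = tokenizer({"messages": messages}, inference=True)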

@zjost commented Oct 9, 2024

Oh, another thing: it looks like the following line adds extra whitespace tokens, because SentencePiece prepends its whitespace marker "▁", so each newline ends up encoded as "▁\n":

new_line_token_id = self.encode("\n", add_bos=False, add_eos=False)

It seems an easy fix is:

new_line_token_id = self.encode("\n", add_bos=False, add_eos=False, trim_leading_whitespace=True)
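
A quick way to see the difference (assuming a constructed tokenizer with a decode method; exact token ids depend on the model file):

ids_default = tokenizer.encode("\n", add_bos=False, add_eos=False)
ids_trimmed = tokenizer.encode(
    "\n", add_bos=False, add_eos=False, trim_leading_whitespace=True
)

# ids_default decodes back to " \n" because SentencePiece prepends its
# meta-space symbol ("▁"); ids_trimmed decodes to just "\n".
print(tokenizer.decode(ids_default))
print(tokenizer.decode(ids_trimmed))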

mori360 pushed a commit to mori360/torchtune that referenced this pull request Oct 14, 2024
Successfully merging this pull request may close the issue: "Why does Phi3 always ignore the prompt?" (#1774)