[BUG] Include system prompt in Phi3 by default #1778
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1778
Note: Links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit c65f4b9 with merge base 89f21c2.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks for this. When looking at their model card and example training script, I noticed a few other things:
I came up with the following implementation, but I'm sure you can improve upon it.

```python
# Import paths follow torchtune's current layout and may differ slightly between versions.
from typing import Any, Dict, List, Mapping, Optional, Tuple

from torchtune.data import Message, PromptTemplate, truncate
from torchtune.models.phi3._tokenizer import PHI3_SPECIAL_TOKENS, Phi3MiniTokenizer
from torchtune.modules.tokenizers import SentencePieceBaseTokenizer


class NewTokenizer(Phi3MiniTokenizer):
    def __init__(
        self,
        path: str,
        special_tokens: Optional[Dict[str, int]] = None,
        max_seq_len: Optional[int] = None,
        prompt_template: Optional[PromptTemplate] = None,
    ):
        self._spm_model = SentencePieceBaseTokenizer(path)
        self.special_tokens = (
            special_tokens if special_tokens is not None else PHI3_SPECIAL_TOKENS
        )

        # Use custom EOS and pad ids instead of SentencePiece's
        self.eos_id = self.special_tokens["<|endoftext|>"]
        # See: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/sample_finetune.py#L141
        # self.pad_id = self.special_tokens["<|endoftext|>"]
        self.pad_id = self._spm_model.spm_model.unk_id()

        # During generation, stop when eos_id is encountered
        self.stop_tokens = [self.eos_id]

        self.max_seq_len = max_seq_len
        self.prompt_template = prompt_template

    def tokenize_messages(
        self,
        messages: List[Message],
        *,
        add_generation_prompt: bool = False,
    ) -> Tuple[List[int], List[bool]]:
        templated_messages = (
            self.prompt_template(messages)
            if self.prompt_template is not None
            else messages
        )

        tokenized_messages = []
        mask = []

        # The chat template in HF adds a bunch of newlines; trim the leading
        # whitespace that SentencePiece would otherwise prepend.
        new_line_token_id = self.encode(
            "\n", add_bos=False, add_eos=False, trim_leading_whitespace=True
        )

        for message in templated_messages:
            # Add role special tokens
            if message.role == "user":
                tokenized_messages.append(self.special_tokens["<|user|>"])
                mask.append(message.masked)
            elif message.role == "assistant":
                tokenized_messages.append(self.special_tokens["<|assistant|>"])
                mask.append(message.masked)
            elif message.role == "system":
                tokenized_messages.append(self.special_tokens["<|system|>"])
                mask.append(message.masked)
            else:
                raise ValueError(
                    f"Unknown role '{message.role}' for message: '{message.content}'"
                )

            # Add new line token
            tokenized_messages.extend(new_line_token_id)
            mask.extend([message.masked] * len(new_line_token_id))

            # Tokenize current message, append with masks
            tokens = []
            for item in message.content:
                if item["type"] == "text":
                    tokens = tokens + self.encode(
                        item["content"].rstrip(" "),
                        add_bos=False,
                        add_eos=False,
                        trim_leading_whitespace=True,  # Always trim whitespace (just to match HF tokenizer implementation)
                    )
                else:
                    raise RuntimeError(
                        f"Unsupported message content type: {item['type']}"
                    )
            tokens = tokens + [self.special_tokens["<|end|>"]] + new_line_token_id
            tokenized_messages.extend(tokens)
            mask.extend([message.masked] * len(tokens))

            # Break out early if we reach max_seq_len
            if self.max_seq_len and len(tokenized_messages) >= self.max_seq_len:
                break

        if add_generation_prompt:
            # End with "<|assistant|>\n" so the model continues from the assistant turn
            tokenized_messages.append(self.special_tokens["<|assistant|>"])
            mask.append(message.masked)
            tokenized_messages.extend(new_line_token_id)
            mask.extend([message.masked] * len(new_line_token_id))
        else:
            tokenized_messages.append(self.eos_id)
            mask.append(message.masked)

        # Finally, truncate if necessary
        if self.max_seq_len and len(tokenized_messages) >= self.max_seq_len:
            tokenized_messages = truncate(
                tokenized_messages, self.max_seq_len, self.eos_id
            )
            mask = truncate(mask, self.max_seq_len, message.masked)

        return tokenized_messages, mask

    def __call__(
        self, sample: Mapping[str, Any], inference: bool = False
    ) -> Mapping[str, Any]:
        messages = sample.pop("messages")
        tokens, mask = self.tokenize_messages(
            messages, add_generation_prompt=inference
        )
        sample["tokens"] = tokens
        sample["mask"] = mask
        return sample
```
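For reference, here's a minimal usage sketch of the class above. The tokenizer path and the example messages are placeholders of mine, not from this PR, and assume torchtune's `Message` API:

```python
from torchtune.data import Message

tokenizer = NewTokenizer(path="/tmp/phi3/tokenizer.model", max_seq_len=4096)  # placeholder path

messages = [
    Message(role="system", content="You are a helpful assistant."),
    Message(role="user", content="Tell me a joke."),
]

# Training-style call: the assistant turn is included and EOS is appended.
tokens, mask = tokenizer.tokenize_messages(
    messages + [Message(role="assistant", content="Why did the chicken cross the road?")]
)

# Inference-style call: ends with <|assistant|> plus a newline instead of EOS.
gen_tokens, gen_mask = tokenizer.tokenize_messages(messages, add_generation_prompt=True)
```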
Oh, another thing. It looks like the following line adds a bunch of whitespace characters, since SentencePiece prepends one to each newline:

```python
new_line_token_id = self.encode("\n", add_bos=False, add_eos=False)
```

Seems an easy change is:

```python
new_line_token_id = self.encode("\n", add_bos=False, add_eos=False, trim_leading_whitespace=True)
```
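A quick way to check this locally (a sketch; assumes a local Phi-3 `tokenizer.model` and torchtune's `SentencePieceBaseTokenizer`):

```python
from torchtune.modules.tokenizers import SentencePieceBaseTokenizer

spm = SentencePieceBaseTokenizer("/tmp/phi3/tokenizer.model")  # placeholder path

# Without trimming, SentencePiece prepends a whitespace piece before the newline,
# so this typically returns an extra token.
print(spm.encode("\n", add_bos=False, add_eos=False))

# With trimming, only the newline token remains.
print(spm.encode("\n", add_bos=False, add_eos=False, trim_leading_whitespace=True))
```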
Context
What is the purpose of this PR? Is it to add a new feature, fix a bug, or update tests and/or documentation?
Closes #1774. After the discussion on HF, the Microsoft team reversed their decision and included the system prompt in Phi3 by default, but we never updated our defaults accordingly. This PR now sets `ignore_system_prompt=False` by default, which is still overridable from the `tokenize_messages` API.

Changelog
What are the changes made in this PR?
- Set `ignore_system_prompt=False` by default in the Phi3 tokenizer

Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these, just ask and we will happily help. We also have a contributing page for some guidance on contributing.

- run pre-commit hooks and linters (make sure you've first installed them via `pre-commit install`)
- run unit tests via `pytest tests`
- run recipe tests via `pytest tests -m integration_test`
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.
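Since this changes the default behavior of a public API, here is a rough sketch of what the user experience looks like. The checkpoint path and messages are illustrative placeholders; the `phi3_mini_tokenizer` builder and the `ignore_system_prompt` flag follow the existing torchtune API and this PR's change:

```python
from torchtune.data import Message
from torchtune.models.phi3 import phi3_mini_tokenizer

tokenizer = phi3_mini_tokenizer(path="/tmp/phi3/tokenizer.model")  # placeholder path

messages = [
    Message(role="system", content="You are a helpful assistant."),
    Message(role="user", content="What is the capital of France?"),
    Message(role="assistant", content="The capital of France is Paris."),
]

# With this PR, the system prompt is included in the tokenized output by default.
tokens, mask = tokenizer.tokenize_messages(messages)

# The old behavior is still available by overriding the flag.
tokens_no_system, _ = tokenizer.tokenize_messages(messages, ignore_system_prompt=True)
```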