Text embedding code failing for single prompt #85
Comments
Hi, are you passing a bare string to the function? The function expects a list, so you could wrap the string in a list of length 1. Please let us know how you are calling it.
I'm inputting an array of length 1.
With args.batch_size set to 1, this code fails. It works with larger batch sizes.
I can confirm this. I am facing the same issue. Thanks.
#105 should fix this. Are there plans to merge it?
It seems you need more than one text for get_text_embedding to work.
I already made it a list (all_text_list[0:1]), but it still does not work.
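Until a fix lands, a common workaround for this kind of single-item batch bug is to pad the batch with a duplicate prompt and keep only the first embedding. This is a hedged sketch, not the actual CLAP API: `get_text_embedding` below is a stand-in stub that simply returns one dummy vector per input text, and `embed_single` is a hypothetical helper name.

```python
def get_text_embedding(texts):
    # Stand-in stub for clap_model.get_text_embedding:
    # returns one dummy embedding per input text.
    return [[float(len(t)), 0.0] for t in texts]

def embed_single(model_fn, text):
    """Embed one prompt by batching it with a duplicate, then slicing.

    A batch of two sidesteps the single-element shape bug; only the
    first embedding is returned.
    """
    embeddings = model_fn([text, text])
    return embeddings[0]

print(embed_single(get_text_embedding, "a dog barking"))
```

This duplicates a small amount of work per call, but keeps the batch 2-D all the way through the tokenizer.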
When I try to use clap_model.get_text_embedding() on an array with a single prompt in it, the call fails with an error in the Roberta tokenizer. It seems that it's confused about the shape of the array unless there's more than one element in it.
```
File ".../transformers/models/roberta/modeling_roberta.py", line 802, in forward
    batch_size, seq_length = input_shape
ValueError: not enough values to unpack (expected 2, got 1)
```
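The traceback can be reproduced in isolation. The Roberta forward pass unpacks the input shape as `(batch_size, seq_length)`; if the single prompt reaches it as a bare 1-D sequence of token ids rather than a 2-D batch of shape `(1, seq_length)`, the tuple unpacking raises exactly this ValueError. The sketch below is illustrative only (the shapes are hypothetical, not taken from the actual CLAP pipeline):

```python
def unpack_shape(input_shape):
    # Mirrors `batch_size, seq_length = input_shape` from modeling_roberta.py
    batch_size, seq_length = input_shape
    return batch_size, seq_length

# A proper batch of one prompt, shape (1, seq_length): unpacks fine.
print(unpack_shape((1, 77)))  # (1, 77)

# A bare 1-D sequence, shape (seq_length,): fails with the reported error.
try:
    unpack_shape((77,))
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```

So the fix on the caller's side is to make sure the tokenized input keeps a leading batch dimension even when there is only one text.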