Hello,
I am currently encountering an issue during the training process of my model. The error message that I receive is as follows:
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
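For context, here is a minimal standalone snippet that reproduces the same error message, assuming (my guess) that it comes from calling .size(1) on a 1-D tensor:

```python
import torch

# A 1-D tensor only has dimension 0 (or -1), so asking for dimension 1 fails
t = torch.zeros(513)   # same shape as my spec entry: torch.Size([513])
print(t.size(0))       # 513 -- works
print(t.size(1))       # IndexError: Dimension out of range
                       # (expected to be in range of [-1, 0], but got 1)
```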
This error occurs when enumerating over the DataLoader in my training loop:
for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, emo) in enumerate(train_loader):
The batch data seems to be of the correct shape when I print it out just before the loop:
batch[0][0].shape: torch.Size([17])
batch[0][1].shape: torch.Size([513])
batch[0][2].shape: torch.Size([1, 66150])
batch[0][3].shape: torch.Size([1])
batch[0][4].shape: torch.Size([1024])
The problem seems to occur when the collate_fn function of the DataLoader tries to create a LongTensor from the sizes of the batch data:
torch.LongTensor([x[1].size(1) for x in batch])
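If I understand this correctly, the collate function assumes each spec tensor is 2-D (n_freq_bins, n_frames), while my dataset apparently returns a 1-D tensor of shape [513]. A minimal sketch of what I suspect is happening (the collate_fn below is a simplified stand-in for illustration, not the actual project code):

```python
import torch

def collate_fn(batch):
    # The real collate function calls x[1].size(1), i.e. it expects each
    # spec to be 2-D: (n_freq_bins, n_frames). With a 1-D spec of shape
    # [513] there is no dimension 1, hence the IndexError.
    spec_lengths = torch.LongTensor([x[1].size(1) for x in batch])
    return spec_lengths

# What my dataset seems to return: a 1-D tensor (a single spectrogram frame?)
bad_item = (torch.zeros(17), torch.zeros(513))
# collate_fn([bad_item])  # -> IndexError: Dimension out of range ... but got 1

# What the collate function expects: a 2-D spectrogram (513 bins x N frames)
good_item = (torch.zeros(17), torch.zeros(513, 129))
print(collate_fn([good_item]))  # tensor([129])
```

So my guess is that the spectrogram is being squeezed down to one dimension somewhere in the dataset's __getitem__, but I have not been able to confirm where.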
I have been trying to debug this issue, but I am currently stuck. Any help or pointers would be greatly appreciated.
Regards
Same question QQ