Pytorch lightning 1.4 issues #1102

Closed
adamgayoso opened this issue Jul 29, 2021 · 1 comment · Fixed by #1103
adamgayoso commented Jul 29, 2021

1. Data splitting: `DataModule.setup` now warns when it is called more than once:

       /usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/datamodule.py:424: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
         f"DataModule.{name} has already been called, so it will not be called again. "

   It's getting called once in our train runner so that we have access to the train/val/test indices, but then it gets called again during the PyTorch Lightning fit (see the first sketch after this list).

2. Pyro training plan: it's unhappy that the loss is a Python float and not a torch Tensor (see the second sketch after this list).

3. Adversarial training plan has this new issue: Lightning-AI/pytorch-lightning#8603

4. Now we need to set `log_every_n_steps` when there are few training batches (see the third sketch after this list):

       /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/data_loading.py:323: UserWarning: The number of training samples (36) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
         f"The number of training samples ({self.num_training_batches}) is smaller than the logging interval"
adamgayoso added the bug label Jul 29, 2021
adamgayoso changed the title from "Calling setup twice in data splitter" to "Pytorch lightning 1.4 issues" Jul 29, 2021

adamgayoso commented Aug 10, 2021

Let's also check that #1112 has been fixed. Could it also be related to log_every_n_steps?

adamgayoso added this to the 0.14.0 milestone Aug 30, 2021
adamgayoso self-assigned this Aug 30, 2021