/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/datamodule.py:424: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
f"DataModule.{name} has already been called, so it will not be called again. "
DataModule.setup gets called once in our train runner so that we have access to the train/val/test indices, but then it gets called again during the PyTorch Lightning fit.
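A minimal sketch of one way to make the double call harmless, assuming a guard flag inside a hypothetical DataModule (class and attribute names are illustrative, not the actual scvi-tools code):

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset, random_split


class ToyDataModule(pl.LightningDataModule):
    """Hypothetical DataModule: setup() is guarded so a second call is a no-op."""

    def __init__(self, batch_size=128):
        super().__init__()
        self.batch_size = batch_size
        self._setup_done = False  # guard against the double call

    def setup(self, stage=None):
        if self._setup_done:
            return
        data = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
        self.train_ds, self.val_ds, self.test_ds = random_split(data, [800, 100, 100])
        self._setup_done = True

    def train_dataloader(self):
        return DataLoader(self.train_ds, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_ds, batch_size=self.batch_size)


dm = ToyDataModule()
dm.setup()                       # first call from the train runner: builds the splits
train_idx = dm.train_ds.indices  # indices are now available to the runner
dm.setup()                       # second call during trainer.fit(): returns immediately
```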
Pyro training plan -- it's unhappy that the loss is a Python float and not a torch.Tensor.
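A rough sketch of the fix we'd want there, assuming a Lightning module that drives Pyro's SVI manually (class and argument names are illustrative, not the actual training plan): `SVI.step()` returns a plain Python float, so it needs to be wrapped in a tensor before being logged/returned.

```python
import torch
import pytorch_lightning as pl
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam


class PyroSketchPlan(pl.LightningModule):
    """Illustrative training plan wrapping Pyro SVI (not the real scvi-tools class)."""

    def __init__(self, pyro_module):
        super().__init__()
        self.module = pyro_module
        self.svi = SVI(
            self.module.model,
            self.module.guide,
            Adam({"lr": 1e-3}),
            loss=Trace_ELBO(),
        )
        # Pyro's optimizer does the parameter updates, so turn off Lightning's
        # automatic optimization.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        loss = self.svi.step(*batch)   # SVI.step() returns a plain Python float
        loss = torch.as_tensor(loss)   # wrap it so Lightning's logging gets a torch.Tensor
        self.log("elbo_train", loss, prog_bar=True)
        return loss

    def configure_optimizers(self):
        # Optimization happens inside svi.step(); nothing for Lightning to manage here.
        return None
```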
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/data_loading.py:323: UserWarning: The number of training samples (36) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
f"The number of training samples ({self.num_training_batches}) is smaller than the logging interval"
Adversarial training plan has this new issue: Lightning-AI/pytorch-lightning#8603 (log_every_n_steps)