Add predict_epoch_end hook. #9380

Comments
There are a number of issues with prediction right now that are at least blocking FB's usage of
But this unconditionally accesses the attribute: https://github.com/PyTorchLightning/pytorch-lightning/blob/8407238d66df14c4476f880a6e6260b4bfa83b40/pytorch_lightning/loops/epoch/prediction_epoch_loop.py#L163-L164. A quick fix could be to check whether the dataloader has a batch sampler here: https://github.com/PyTorchLightning/pytorch-lightning/blob/8407238d66df14c4476f880a6e6260b4bfa83b40/pytorch_lightning/loops/epoch/prediction_epoch_loop.py#L163-L164
cc @tchaton |
Accumulating predictions is pretty much just boilerplate code in the usual case, and if Lightning can provide it on the fly, then I think |
+1 for deprecating |
So what is the status of this feature? |
+1 on |
I would also benefit from this feature! |
I would also benefit from this feature, curious if there is any update on plans here. Or I'm curious if there are any alternatives/ best practices that people have adopted that I could learn from. My use case is the same as in #9379. I could imagine using Thanks!! |
🚀 Feature
Motivation
See #9379.
Also, I remember it's noted as a TODO somewhere in the codebase.
Pitch
The hook will be similar to {val/test}_epoch_end, but it will return the outputs.

Also, should we update the signature of on_predict_epoch_end to not accept the outputs? Those hooks don't actually return anything, so even if someone modifies the predictions there, it won't have any effect on the original predictions.

Alternatives
Can't think of any.
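For illustration, a hedged sketch of what the proposed hook could look like, using a toy stand-in for the prediction loop. MyModel, run_predict_epoch, and the exact hook semantics are assumptions for this sketch, not the final Lightning API:

```python
class MyModel:
    def predict_step(self, batch, batch_idx):
        # Per-batch prediction; here just a toy transform.
        return [x * 2 for x in batch]

    def predict_epoch_end(self, outputs):
        # `outputs` is the list of per-batch predict_step results.
        # Unlike on_predict_epoch_end, the return value here would replace
        # the predictions handed back by trainer.predict(...).
        return [y for batch_out in outputs for y in batch_out]


def run_predict_epoch(model, dataloader):
    """Toy stand-in for the prediction loop: collect predict_step outputs,
    then let the model post-process them if the hook is overridden."""
    outputs = [model.predict_step(batch, i) for i, batch in enumerate(dataloader)]
    if hasattr(model, "predict_epoch_end"):
        return model.predict_epoch_end(outputs)
    return outputs


preds = run_predict_epoch(MyModel(), [[1, 2], [3, 4]])
print(preds)  # [2, 4, 6, 8]
```

Under this sketch, a user who only wants the raw per-batch outputs simply doesn't override predict_epoch_end.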
Additional context