PSRoiAlign: SymInt support + meta-implem #8058
Conversation
test/optests_failures_dict.json
"TestPSRoIAlign.test_autograd_registration__test_backward[True-mps-0]": {
    "comment": "NotImplementedError: autograd_registration_check: NYI devices other than CPU/CUDA, got {'mps'}",
    "status": "xfail"
CC @zou3519
I just opened pytorch/pytorch#111797 which I believe could be a fix for the problem I'm facing here:
This test (and a bunch of others)
Lines 186 to 189 in 3fb88b3
@pytest.mark.parametrize("seed", range(10))
@pytest.mark.parametrize("device", cpu_and_cuda_and_mps())
@pytest.mark.parametrize("contiguous", (True, False))
def test_backward(self, seed, device, contiguous, deterministic=False):
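As a side note on where the `[True-mps-0]` ID comes from: with stacked `pytest.mark.parametrize` decorators, the decorator closest to the function contributes the first component of the test ID. A hypothetical sketch (not torchvision code, just illustrating the ID composition):

```python
# Hypothetical helper (not part of pytest or torchvision): builds the test ID
# the way pytest composes it for the stacked decorators above, with the
# innermost decorator (contiguous) first, then device, then seed.
def make_test_id(name, contiguous, device, seed):
    return f"{name}[{contiguous}-{device}-{seed}]"

# contiguous=True, device="mps", seed=0 gives the failing parametrization:
print(make_test_id("test_backward", True, "mps", 0))
# -> test_backward[True-mps-0]
```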
is parametrized over CPU, CUDA and MPS, and fails on MPS. I'm trying to xfail the MPS parametrization as above (lines 10-12), but optests complains with:
E RuntimeError: In failures dict, got test name 'TestPSRoIAlign.test_autograd_registration__test_backward[True-mps-0]'. We parsed this as running test 'test_autograd_registration' on 'test_backward[True-mps-0]', but test_backward[True-mps-0] does not exist on the TestCase 'TestPSRoIAlign]. Maybe you need to change the test name?
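The error above suggests optests expects each failures-dict key to encode both the dynamic test and the base test it wraps, separated by a double underscore. A rough sketch of that parsing (hypothetical helper names; the real logic lives in `torch.testing._internal.optests`):

```python
# Hypothetical sketch of the key-parsing behaviour described in the error
# message above; function and variable names here are made up for
# illustration, not taken from the actual optests source.
def parse_failures_dict_key(key: str):
    # Split "TestCase.dynamic_test__base_test[params]" into its three parts.
    test_class, _, qualified = key.partition(".")
    dynamic_test, _, base_test = qualified.partition("__")
    return test_class, dynamic_test, base_test

key = "TestPSRoIAlign.test_autograd_registration__test_backward[True-mps-0]"
print(parse_failures_dict_key(key))
# -> ('TestPSRoIAlign', 'test_autograd_registration', 'test_backward[True-mps-0]')
```

On this reading, optests then checks that `test_backward[True-mps-0]` exists as a test on `TestPSRoIAlign`, which is where the lookup fails.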
The problem is, if I replace TestPSRoIAlign.test_autograd_registration__test_backward[True-mps-0]
with TestPSRoIAlign.test_autograd_registration__test_backward
in line 10, then I'm getting an "unexpected success":
E torch.testing._internal.optests.generate_tests.OpCheckError: generate_opcheck_tests: Unexpected success for operator torchvision::ps_roi_align on test TestPSRoIAlign.test_autograd_registration__test_backward. This may mean that you have fixed this test failure. Please rerun the test with PYTORCH_OPCHECK_ACCEPT=1 to automatically update the test runner or manually remove the expected failure in the failure dict at /home/nicolashug/dev/vision/test/optests_failures_dict.json. For more details, see https://docs.google.com/document/d/1Pj5HRZvdOq3xpFpbEjUZp2hBovhy7Wnxw14m6lF2154/edit
@NicolasHug can the tests be run under unittest, or are they pytest only?
It's pytest only
The MPS checks are going to fail until pytorch/pytorch#112251 is resolved on torch core's side. I don't know how long that will take, so I'm going to remove the opchecks from this PR (and from the other ones) so that we can still have symint / torch.compile support on these ops (albeit temporarily untested).
Stamp
Reviewed By: vmoens
Differential Revision: D50789103
fbshipit-source-id: 5da312242bdd9f01b9ee4d45f791ee564add9f68
Towards #8055
cc @pmeier