refactor: Make hypotest return CLs as 0-d tensor #944
Conversation
Still need to correct PyTorch's behavior of returning a Tensor.

this will likely require a major version bump

you want this to be v1.0.0? or v0.5.0?

meh as long as we're still 0.X we can do minor bumps I guess
As part of this we probably need to fix this as well:

```python
import pyhf
import numpy as np
import jax
import torch
import tensorflow as tf

print("numpy")
print(np.asarray(0.1))
pyhf.set_backend("numpy")
print(pyhf.tensorlib.astensor(0.1))

print("\njax")
print(jax.numpy.asarray(0.1))
pyhf.set_backend("jax")
print(pyhf.tensorlib.astensor(0.1))

print("\ntorch")
print(torch.tensor(0.1))
pyhf.set_backend("pytorch")
print(pyhf.tensorlib.astensor(0.1))

print("\ntensorflow")
print(tf.constant(0.1))
pyhf.set_backend("tensorflow")
print(pyhf.tensorlib.astensor(0.1))
```
Ah, I guess so. I think we did this in the past to intentionally enforce uniformity across all backends, but I guess they're all the same now and so enforcing shape isn't helping.

```python
import pyhf
import numpy as np
import torch

print("numpy")
example = np.asarray(0.1)
print(f"example {example} is a {type(example)} with shape {example.shape}")
pyhf.set_backend("numpy")
example = pyhf.tensorlib.astensor(0.1)
print(f"example {example} is a {type(example)} with shape {example.shape}")

print("\ntorch")
example = torch.tensor(0.1)
print(f"example {example} is a {type(example)} with shape {example.shape}")
pyhf.set_backend("pytorch")
example = pyhf.tensorlib.astensor(0.1)
print(f"example {example} is a {type(example)} with shape {example.shape}")
```

So we should probably just fix this first in a separate PR.
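The shape question above can be illustrated without pyhf at all. A minimal numpy-only sketch of the two behaviors being discussed (the helper names here are hypothetical, not pyhf API): forcing a minimum dimension turns a scalar into a shape `(1,)` tensor, while a shape-preserving conversion keeps it as a 0-d tensor.

```python
import numpy as np


def astensor_forced_1d(data):
    # Old-style behavior: force at least 1-d, so a scalar becomes shape (1,)
    return np.atleast_1d(np.asarray(data, dtype=np.float64))


def astensor_preserve_shape(data):
    # Shape-preserving behavior: a Python float stays a 0-d array, shape ()
    return np.asarray(data, dtype=np.float64)


print(astensor_forced_1d(0.1).shape)  # (1,)
print(astensor_preserve_shape(0.1).shape)  # ()
```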
@lukasheinrich @kratsg Related to Issue #974 and my last question in Issue #714, do we want this to return a
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #944      +/-   ##
==========================================
- Coverage   96.70%   96.70%   -0.01%
==========================================
  Files          59       59
  Lines        3338     3337       -1
  Branches      467      468       +1
==========================================
- Hits         3228     3227       -1
  Misses         69       69
  Partials       41       41
```
The current state of the PR results in the following

```python
import pyhf

model = pyhf.simplemodels.hepdata_like(
    signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
)
data = [51, 48] + model.config.auxdata
test_mu = 1.0
for backend in ["numpy", "jax", "tensorflow", "pytorch"]:
    pyhf.set_backend(backend)
    CLs_obs, CLs_exp = pyhf.infer.hypotest(
        test_mu, data, model, qtilde=True, return_expected=True
    )
    print(
        f"Observed: {CLs_obs} is of type {type(CLs_obs)}, Expected: {CLs_exp} is of type {type(CLs_exp)}"
    )
```

giving

This change in return type should probably result in a release, even though the public API tests still pass.
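For downstream consumers worried about the transition, here is a hedged sketch of a conversion helper (hypothetical, not part of pyhf) that yields a plain Python float whether the CLs value comes back as a 0-d tensor, a shape `(1,)` tensor, or already a float:

```python
import numpy as np


def cls_to_float(cls_value):
    # Flatten to 1-d (a 0-d array reshapes to shape (1,)), take the single
    # element, and convert to a plain Python float for downstream comparisons.
    return float(np.asarray(cls_value).reshape(-1)[0])


print(cls_to_float(np.asarray(0.05)))  # 0-d tensor -> 0.05
print(cls_to_float(np.asarray([0.05])))  # shape (1,) tensor -> 0.05
print(cls_to_float(0.05))  # plain float -> 0.05
```

This keeps code working across pyhf versions on either side of the return-type change.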
@phinate @WolfgangWaltenberger can you give feedback on whether this will be problematic?
@matthewfeickert it's gonna break neos temporarily, but it's the smallest fix, so I say just go for it. I'm in the middle of a refactor, so I can add this as part of it.
If by "this" you mean the change of return types, then no, that's not a problem.
Thanks for the prompt feedback @phinate and @WolfgangWaltenberger. We just wanted to make sure that having the CLs values be 0-d tensors (shape `()`) won't cause problems downstream.
So still an ndarray "tensor", but a scalar tensor. That hurts to write.
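For context on that "scalar tensor" wording, a 0-d numpy array really is still an `ndarray`, just with an empty shape tuple, and it converts cleanly to a plain Python float when needed:

```python
import numpy as np

# A Python float converted without forcing a minimum dimension stays 0-d.
scalar_tensor = np.asarray(0.1)

print(type(scalar_tensor).__name__)  # ndarray
print(scalar_tensor.shape)  # ()
print(scalar_tensor.ndim)  # 0
print(float(scalar_tensor))  # plain Python float for downstream use
```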
Return list of tensors instead of tensor of list
@alexander-held Will this affect

Hypothesis tests are not yet interfaced, so no impact on
Description
Resolves #714
Results in the following behavior for the CLs values
previous behavior was
Checklist Before Requesting Reviewer
Before Merging
For the PR Assignees: