feat: fixed_vals #846
Conversation
Hi @cgottard, thanks a lot for this PR. It's something we've wanted to add for some time. I wonder whether we could somehow streamline the API further. As it is, we do a lot of passing around for these items, and perhaps we should rather be moving them around. What do you think?
@@ -17,6 +24,7 @@ def hypotest(
        pdf (~pyhf.pdf.Model): The HistFactory statistical model
        init_pars (Array or Tensor): The initial parameter values to be used for minimization
        par_bounds (Array or Tensor): The parameter value bounds to be used for minimization
        fixed_vals (list of tuples): Parameters to be held constant and their value
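To make the proposed docstring concrete, here is a minimal sketch of the `fixed_vals` format it describes: a list of `(parameter_index, value)` tuples, where each listed parameter is held constant at the given value. The helper name `apply_fixed_vals` is hypothetical and not part of pyhf; this only illustrates the data structure under discussion.

```python
def apply_fixed_vals(init_pars, fixed_vals):
    """Return a copy of init_pars with each (index, value) pair substituted in.

    Hypothetical illustration of the fixed_vals format: a list of
    (parameter_index, value) tuples held constant during minimization.
    """
    pars = list(init_pars)
    for index, value in fixed_vals:
        pars[index] = value
    return pars

# Fix parameter 0 at 0.5 and parameter 2 at 1.2, leaving parameter 1 free.
pars = apply_fixed_vals([1.0, 1.0, 1.0], fixed_vals=[(0, 0.5), (2, 1.2)])
# pars == [0.5, 1.0, 1.2]
```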
We need to be careful, since this will be confusing given the existing poi_test argument.
It can be confusing because fixed_poi_fit is not strictly necessary anymore. Anyway, in fixed_poi_fit the mu is fixed via fixed_vals, and if additional parameters are set constant they are added to the same list; see https://github.com/cgottard/pyhf/blob/fixed_vals/src/pyhf/infer/mle.py#L56. So there should be no problem nor ambiguity.
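The merging described above can be sketched as follows. This is an illustrative reimplementation, not the code in mle.py; the helper name `merge_poi_into_fixed_vals` is hypothetical. The idea is that the POI constraint and any user-supplied fixed parameters end up in one combined list, with the explicit poi_test value taking precedence if the POI index appears in both.

```python
def merge_poi_into_fixed_vals(poi_index, poi_test, fixed_vals=None):
    """Combine the POI constraint with user-supplied fixed parameters.

    Hypothetical sketch: drop any user entry that clashes with the POI
    index, then append (poi_index, poi_test), so the POI value always wins.
    """
    merged = [(i, v) for i, v in (fixed_vals or []) if i != poi_index]
    merged.append((poi_index, poi_test))
    return sorted(merged)

# POI (index 0) fixed at 1.0, plus an extra parameter fixed by the user.
merge_poi_into_fixed_vals(0, 1.0, fixed_vals=[(3, 0.2)])
# -> [(0, 1.0), (3, 0.2)]
```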
I think we should have this in for 0.6.0, because I don't think trying to keep the same API is a good idea.
@cgottard, would you be up for shepherding a larger change to the API that enables this feature?
Dear all, thanks for the feedback. I was addressing @matthewfeickert's comment about the test, but pytest is not running as it should locally, so I only see the errors from the CI. Anyway, I am OK with changing the API as @lukasheinrich suggested. I agree that it makes everything clearer and more transparent. In the meantime I think we can keep this MR open to discuss the progress; then we'll see if we want to close it and create a new one using a new feature branch name.
I think this PR should go in close to how it is, but with tests and coverage, as part of 0.5.0 -> 0.6.0 (we don't want very large PRs), and you've pointed out this is a relatively quick change. The API is backwards compatible here. Then we change the API in another PR. Very large PRs in general scare me.
That sounds good to me.
OK, I'll ping you when this is ready. I implemented a new background uncertainty in test_backend_consistency, which is then fixed and shifted.
Dear all, I cloned the repo from master and ran the Model locally.
Then I updated my feature branch to master and ran the test with the fixed values:
The discrepancies among tensors are gone, possibly because I am using a conda env correctly configured for CUDA and TF. And yes, I am running from my branch, as I can verify from the stdout and also from the "[none-" in the name of the test. Given the small excess over the 0.01 tolerance, shall we increase the tolerance to 1.5%?

EDIT: actually the tensors are also slightly above the 5 permille tolerance for those two cases.

EDIT 2: different results are obtained for the same backend depending on whether the fit is run on CPU or GPU. GPU: test passed. CI: test failed. Can we open a ticket for this? The backend consistency has little to do with this MR, which simply propagates a function argument.
This pull request introduces 1 alert when merging f610990 into 94b87a8 - view on LGTM.com new alerts:
This pull request introduces 1 alert when merging 41d6b7f into cb4d37b - view on LGTM.com new alerts:
Codecov Report
@@ Coverage Diff @@
## master #846 +/- ##
==========================================
- Coverage 96.64% 96.28% -0.36%
==========================================
Files 59 56 -3
Lines 3279 3180 -99
Branches 454 438 -16
==========================================
- Hits 3169 3062 -107
- Misses 69 75 +6
- Partials 41 43 +2
Flags with carried forward coverage won't be shown.
@cgottard Sorry to have left you hanging here on this. If you have time, can you rebase this so that we can try to get it in? Edit: @kratsg mentioned that he already talked with you, so he'll take care of this PR and we'll shepherd it in. Thank you in advance for your contribution!
fixed_vals argument in infer functions
I propagated to the infer functions the capability of fixing some parameters to a constant value. The optimizer already implemented this functionality, so it was only a matter of interfacing. I tested the changes by running both CLs calculations and MLE fits. The CI succeeds.
Note: before these changes it was possible to perform an MLE fit passing the list of fixed parameters via **kwargs. The same was not true for hypotest. I found that adding an explicit function argument for both cases was appropriate for such a common task.
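The contrast above between a hidden **kwargs path and an explicit argument can be sketched as follows. The function names are illustrative, not the actual pyhf signatures; the point is that an explicit keyword with a default of None is discoverable, documentable, and backwards compatible, whereas a **kwargs-only path requires the caller to know an undocumented key.

```python
def fit_via_kwargs(data, **kwargs):
    """Fixed parameters only reachable through an undocumented kwargs key."""
    fixed = kwargs.get("fixed_vals", [])
    return fixed

def fit_explicit(data, fixed_vals=None):
    """Explicit, documented argument; default None keeps old calls working."""
    return fixed_vals or []

# Both styles accept the same call, but only the explicit one advertises it.
fit_via_kwargs([1, 2], fixed_vals=[(0, 1.0)])  # -> [(0, 1.0)]
fit_explicit([1, 2], fixed_vals=[(0, 1.0)])    # -> [(0, 1.0)]
fit_explicit([1, 2])                           # -> [] (backwards compatible)
```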
@kratsg, we discussed the use case of these changes via e-mail. Could you review this MR?
Checklist Before Requesting Reviewer
Before Merging
For the PR Assignees: