refactor: Optimizer API harmonization, minuit autodifferentiation support #951

Merged
merged 102 commits into master from feat/autogradMinuit on Jul 23, 2020

Conversation

@kratsg (Contributor) commented Jul 16, 2020

Description

As an example of the behavior change:

>>> import pyhf
>>> pyhf.get_backend()
(<pyhf.tensor.numpy_backend.numpy_backend object at 0x7f33e6692dc0>, <pyhf.optimize.scipy_optimizer object at 0x7f33e668d460>)
>>> pyhf.set_backend("numpy", "minuit")
>>> pyhf.get_backend()
(<pyhf.tensor.numpy_backend.numpy_backend object at 0x7f339d705c80>, <pyhf.optimize.minuit_optimizer object at 0x7f339d706410>)
>>> # NumPy doesn't support autodiff
>>> pyhf.tensorlib.default_do_grad
False
>>> model = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10])
>>> data = [125.0] + model.config.auxdata
>>> pyhf.infer.mle.fit(data, model, do_grad=False)
array([0.50004936, 1.00000438])
>>> # This will fail given that pyhf.tensorlib.default_do_grad == False for NumPy
>>> pyhf.infer.mle.fit(data, model, do_grad=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/feickert/Code/GitHub/pyhf/src/pyhf/infer/mle.py", line 61, in fit
    return opt.minimize(twice_nll, data, pdf, init_pars, par_bounds, **kwargs)
  File "/home/feickert/Code/GitHub/pyhf/src/pyhf/optimize/mixins.py", line 133, in minimize
    do_stitch=do_stitch,
  File "/home/feickert/Code/GitHub/pyhf/src/pyhf/optimize/common.py", line 119, in shim
    'do_stitch': do_stitch,
  File "/home/feickert/Code/GitHub/pyhf/src/pyhf/optimize/opt_numpy.py", line 25, in wrap_objective
    raise exceptions.Unsupported("Numpy does not support autodifferentiation.")
pyhf.exceptions.Unsupported: Numpy does not support autodifferentiation.
>>> pyhf.set_backend("pytorch", "minuit")
>>> pyhf.tensorlib.default_do_grad
True
>>> pyhf.infer.mle.fit(data, model) # gradient is passed in by default
tensor([0.5018, 0.9998])
>>> pyhf.infer.mle.fit(data, model, do_grad=True)
tensor([0.5018, 0.9998])
>>> pyhf.infer.mle.fit(data, model, do_grad=False)
tensor([0.9685, 0.9171]) # bad result, but more due to 32b precision
>>> pyhf.set_backend(pyhf.tensor.pytorch_backend(precision="64b"))
>>> pyhf.get_backend()
(<pyhf.tensor.pytorch_backend.pytorch_backend object at 0x7fbe659b9f00>, <pyhf.optimize.scipy_optimizer object at 0x7fbe65951c30>)
>>> pyhf.tensorlib.precision
'64b'
>>> pyhf.infer.mle.fit(data, model)
tensor([0.5000, 1.0000])
>>> pyhf.infer.mle.fit(data, model, do_grad=False)
tensor([0.5000, 1.0000])
>>> pyhf.set_backend("numpy", "scipy")
>>> pyhf.infer.mle.fit(data, model)
array([0.49998815, 0.9999697 ])
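
The harmonized API also accepts backend and optimizer objects in place of the string names used above, mirroring how tensor backends are configured. Below is a minimal sketch of that form, reusing the model from the session above; no constructor keyword arguments are shown, since any particular options would be assumptions:

import pyhf

# Equivalent to pyhf.set_backend("numpy", "minuit"), but passing objects directly
pyhf.set_backend(pyhf.tensor.numpy_backend(), pyhf.optimize.minuit_optimizer())

model = pyhf.simplemodels.hepdata_like([50.0], [100.0], [10.0])
data = [125.0] + model.config.auxdata

# NumPy has no autodiff support, so explicitly request a gradient-free fit
bestfit_pars = pyhf.infer.mle.fit(data, model, do_grad=False)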

Docs are built here: https://pyhf.readthedocs.io/en/feat-autogradminuit/

Checklist Before Requesting Reviewer

  • Tests are passing
  • "WIP" removed from the title of the pull request
  • Selected an Assignee for the PR to be responsible for the log summary

Before Merging

For the PR Assignees:

  • Summarize commit messages into a comprehensive review of the PR:
    * Rewrite/reorganize the optimization functionality
    * Individual tensor backend optimizers are dropped in favor of shims passed to the optimizer object
    * Change the optimizer API to align more closely with how tensor backends are set and used
    * Allow configuring whether fixed-value stitching (do_stitch) and gradients (do_grad) are used
    * Use the from_array_func initialization of Minuit
    * Add an intersphinx_mapping between projects for documentation (see the sketch after this list)
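
As a rough illustration of the intersphinx_mapping item above, the Sphinx configuration in docs/conf.py would gain a mapping along these lines; the specific projects and URLs listed here are assumptions for illustration, not taken from this PR's diff:

# docs/conf.py (sketch): let Sphinx cross-reference other projects' documentation
intersphinx_mapping = {
    "python": ("https://docs.python.org/3", None),
    "numpy": ("https://numpy.org/doc/stable/", None),
    "scipy": ("https://docs.scipy.org/doc/scipy/reference/", None),
    "iminuit": ("https://iminuit.readthedocs.io/en/stable/", None),
}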

@kratsg added the feat/enhancement (New feature or request) label on Jul 16, 2020
@lukasheinrich marked this pull request as ready for review on July 16, 2020 20:35
codecov (bot) commented Jul 16, 2020

Codecov Report

Merging #951 into master will increase coverage by 0.10%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #951      +/-   ##
==========================================
+ Coverage   96.54%   96.65%   +0.10%     
==========================================
  Files          57       59       +2     
  Lines        3215     3284      +69     
  Branches      440      447       +7     
==========================================
+ Hits         3104     3174      +70     
+ Misses         71       69       -2     
- Partials       40       41       +1     
Flag Coverage Δ
#unittests 96.65% <100.00%> (+0.10%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
src/pyhf/__init__.py 98.24% <100.00%> (+0.32%) ⬆️
src/pyhf/cli/infer.py 100.00% <100.00%> (ø)
src/pyhf/exceptions/__init__.py 100.00% <100.00%> (ø)
src/pyhf/optimize/__init__.py 95.45% <100.00%> (-0.11%) ⬇️
src/pyhf/optimize/common.py 100.00% <100.00%> (ø)
src/pyhf/optimize/mixins.py 100.00% <100.00%> (ø)
src/pyhf/optimize/opt_jax.py 100.00% <100.00%> (ø)
src/pyhf/optimize/opt_minuit.py 100.00% <100.00%> (ø)
src/pyhf/optimize/opt_numpy.py 100.00% <100.00%> (ø)
src/pyhf/optimize/opt_pytorch.py 100.00% <100.00%> (ø)
... and 9 more

Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update d35f4b4...30f889c.

@kratsg self-assigned this on Jul 17, 2020
@kratsg added the API (Changes the public API) and refactor (A code change that neither fixes a bug nor adds a feature) labels on Jul 18, 2020
@kratsg changed the title from "feat: minuit autograd" to "refactor: Optimizer API harmonization, minuit autodifferentiation support" on Jul 18, 2020
@kratsg force-pushed the feat/autogradMinuit branch 2 times, most recently from b0d68be to 227494b on July 19, 2020 20:21
Review thread on README.rst (outdated, resolved)
@matthewfeickert (Member) left a comment:

Not finished with the review yet, but these are things we should patch up. I can help on this.

Review threads (all resolved): src/pyhf/optimize/mixins.py (outdated, 3 threads), src/pyhf/optimize/opt_jax.py, src/pyhf/optimize/opt_minuit.py, src/pyhf/optimize/opt_numpy.py, src/pyhf/optimize/opt_scipy.py, src/pyhf/optimize/opt_tflow.py, src/pyhf/__init__.py (outdated), src/pyhf/optimize/common.py (outdated)
@matthewfeickert (Member) left a comment:

Okay, I think @kratsg has heroically fought with this PR for long enough and we're all happy enough with it as is. I'll approve this, and as a reminder to us all: we are intentionally breaking the current release tests on master so that we can get PR #972 in as well before cutting a v0.5.0 minor release and moving on to toys in v0.6.0.

Labels: API (Changes the public API), docs (Documentation related), feat/enhancement (New feature or request), refactor (A code change that neither fixes a bug nor adds a feature)
Projects: None yet
3 participants