
fix: Ensure tensorlib precision used throughout TensorFlow backend #1399

Merged — 3 commits merged into master on Apr 6, 2021

Conversation

@matthewfeickert (Member) commented Apr 6, 2021

Description

While investigating Issue #1398, it was noticed that 64-bit precision was not being used properly throughout the TensorFlow backend. tf.float32 was used as a default instead of the result of self.dtypemap["float"], and if TensorFlow Probability is given Python floats as parameters it defaults to 32-bit precision, so it must instead be given a tensor of the desired precision.
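The shape of the fix can be sketched in plain Python. The names `Backend`, `dtypemap`, and `astensor` mirror the pyhf backend's API, but this stand-in is hypothetical and uses no TensorFlow; it only illustrates resolving the dtype from `dtypemap` once and tagging converted values with it, rather than hard-coding a 32-bit default:

```python
# Hypothetical sketch of the dtypemap pattern; not the actual pyhf implementation.

class Backend:
    def __init__(self, precision="64b"):
        # dtypemap resolves the requested precision to a concrete float dtype,
        # standing in for {"float": tf.float64} in the real backend
        self.dtypemap = {"float": "float64" if precision == "64b" else "float32"}

    def astensor(self, data):
        # The real backend would call tf.convert_to_tensor(data, dtype=dtype);
        # here we just tag the value with the resolved dtype for illustration.
        dtype = self.dtypemap["float"]
        return {"value": data, "dtype": dtype}

backend = Backend(precision="64b")
x = backend.astensor(0.5)
print(x["dtype"])  # the requested 64-bit precision propagates
```

Passing such a tensor (rather than a bare Python float) to TensorFlow Probability operations is what keeps them from silently falling back to 32-bit.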

Checklist Before Requesting Reviewer

  • Tests are passing
  • "WIP" removed from the title of the pull request
  • Selected an Assignee for the PR to be responsible for the log summary

Before Merging

For the PR Assignees:

  • Summarize commit messages into a comprehensive review of the PR
    - Ensure that the precision requested in the set_backend API is used throughout the TensorFlow backend calculations
      - Determine dtype from dtypemap
      - Ensure TensorFlow Probability operations are given a tensor of the correct precision

@matthewfeickert matthewfeickert added the fix A bug fix label Apr 6, 2021
@matthewfeickert matthewfeickert self-assigned this Apr 6, 2021
@codecov bot commented Apr 6, 2021

Codecov Report

Merging #1399 (a06e20e) into master (cbf8d03) will increase coverage by 0.00%.
The diff coverage is 100.00%.

Impacted file tree graph

@@           Coverage Diff           @@
##           master    #1399   +/-   ##
=======================================
  Coverage   97.53%   97.53%           
=======================================
  Files          63       63           
  Lines        3808     3817    +9     
  Branches      538      538           
=======================================
+ Hits         3714     3723    +9     
  Misses         55       55           
  Partials       39       39           
Flag Coverage Δ
contrib 24.12% <0.00%> (-0.06%) ⬇️
unittests 97.53% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Impacted Files Coverage Δ
src/pyhf/tensor/tensorflow_backend.py 97.22% <100.00%> (+0.18%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cbf8d03...a06e20e. Read the comment docs.

@kratsg (Contributor) left a comment


I don't think we should be adding so many self.astensor calls, should we?

@matthewfeickert (Member, Author) commented

> I don't think we should be adding so many self.astensor calls, should we?

What do you propose instead? We had to do something similar in PR #1351, and at the moment on master this is a problem: setting the precision to 64b doesn't propagate to TensorFlow Probability operations if they're given Python floats.

>>> import pyhf
>>> pyhf.set_backend("tensorflow", precision="64b")
>>> pyhf.tensorlib.precision
'64b'
>>> pyhf.tensorlib.normal_logpdf(0.5, 0., 1.)
<tf.Tensor: shape=(), dtype=float32, numpy=-1.0439385>
>>> pyhf.tensorlib.normal(0.5, 0., 1.)
<tf.Tensor: shape=(), dtype=float32, numpy=0.35206532>
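The dtype=float32 in the results above comes from TensorFlow Probability inferring the precision from the bare Python floats. As a reference point for what the correctly propagated 64-bit value should be, the standard normal log-density at these arguments can be computed with the stdlib alone (a sketch, assuming mean 0 and standard deviation 1 as in the call above):

```python
import math

def normal_logpdf(x, mu, sigma):
    # log of the normal density:
    # -((x - mu)^2) / (2 sigma^2) - log(sigma) - 0.5 * log(2 pi)
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

print(normal_logpdf(0.5, 0.0, 1.0))  # ≈ -1.0439385332046727 at 64-bit precision
```

The float32 result -1.0439385 agrees only to the digits that 32-bit precision can carry, which is exactly the loss this PR addresses.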

@matthewfeickert (Member, Author) commented

We also made Issue #1352 based off of these comments on PR #1351.

@kratsg (Contributor) left a comment


Yeah, this is the cleanest way forward for now. We need to check that the fit times for tflow don't change.

@matthewfeickert (Member, Author) commented

> We need to check that the fit times for tflow don't change.

Yeah, from looking at the CI times there isn't any obvious uptick, but we should note this in the Issue as well.

@matthewfeickert matthewfeickert merged commit 34bd7f4 into master Apr 6, 2021
@matthewfeickert matthewfeickert deleted the fix/cast-tensors-correctly-tf branch April 6, 2021 15:45