float32 -> float16 cast consistency across implementations #13857
Conversation
@mxnet-label-bot add [pr-awaiting-review]
Nice addition!
for numerator in range(0, denominator):
    for y in [-1.0, 0.0, 1.0]:
        small_delta = y / 2**fp32_fraction_bits
        val = (-1.0)**sign_bit * 2.0**exponent * (1.0 +
nit: Could we also break the line at `(1.0 +` and continue on the next line, for readability?
# Test requires all platforms to round float32->float16 with same round-to-nearest-even policy.
@with_seed()
def test_cast_float32_to_float16():
    fp16_fraction_bits = 10
shall we capitalize constants as per PEP8?
sym_output = exe.outputs[0].asnumpy()
for fp32_val, model_fp16_val, np_fp16_val in zip(input_np, sym_output, expected_output):
    if model_fp16_val != np_fp16_val:
        raise RuntimeError('fp32->fp16 cast mismatches seen, e.g. with val {}, model_fp16 = {},'
Better to raise an AssertionError, or to use https://nose.readthedocs.io/en/latest/testing_tools.html, since RuntimeError has different semantics.
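For concreteness, a minimal sketch of one way to follow this suggestion, using numpy's testing helpers (which raise AssertionError on mismatch); the array names below are stand-ins, not the PR's code:

import numpy as np

# Stand-in arrays; in the test these would be the model output and the numpy-cast reference.
model_fp16 = np.array([1.0, 2.0], dtype=np.float16)
np_fp16 = np.array([1.0, 2.0], dtype=np.float16)

# Raises AssertionError (not RuntimeError) with a descriptive message on mismatch.
np.testing.assert_array_equal(model_fp16, np_fp16,
                              err_msg='fp32->fp16 cast mismatch between model output and numpy')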
@larroy - Can you please take another look at this PR? Your comments have been addressed.
Now that a dependent mshadow PR has been merged, I will be updating the mshadow SHA used by this PR shortly, after which this PR will be ready for merging. Should be an easy approval process as this PR only introduces a test.
@larroy This PR is ready for your final review.
LGTM.
Sorry, having a look now.
LVGTM
@mxnet-label-bot update [Operator, pr-awaiting-merge]
* Added test showing float32->float16 discrepancy when mshadow float2half() is used.
* Temp update mshadow submodule SHA to point to PR368 (b211cb7).
* Temp switch to url = https://github.com/DickJC123/mshadow.git
* Update mshadow submodule SHA.
* Improve code style per reviewer comments.
* Move back to dmlc/mshadow.git, now with float->half rounding.
* Expand test_operator.py:test_cast_float32_to_float16 to test np.nan.
Description
While trying to get all the CI runners to pass for PR #13749, I discovered that the handling of the float32->float16 cast on the CPU varies based on whether the f16c library is available and enabled. If the f16c library is not available, as is the case for the Windows CI runner using an MSVC++ compiler, the mshadow float2half() routine is used, and the cast is performed by truncating the bits that don't fit in the float16 representation. The _cvtss_sh(data, 0) call employed by mshadow when the f16c library is present performs a round-to-nearest conversion, with ties rounded to the value with a 0 LSB. This round-to-nearest-even policy is also employed by the default GPU context implementation and by numpy.
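To make the difference concrete, here is a small standalone numpy illustration (not part of the PR's test): a float32 value whose discarded mantissa bits lie above the midpoint between two float16 neighbors rounds up under round-to-nearest-even, whereas bit truncation simply drops those bits.

import numpy as np

# float16 has 10 fraction bits, so near 1.0 its spacing is 2**-10.  This float32
# value lies between the fp16 neighbors 1.0 and 1.0 + 2**-10, just above the midpoint.
x = np.float32(1.0 + 2.0**-11 + 2.0**-13)

# numpy (like f16c's _cvtss_sh and the GPU path) rounds to nearest even:
print(np.float16(x))   # 1.001, i.e. 1.0 + 2**-10

# A truncating float2half() would drop the low-order bits and return exactly 1.0.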
In order to improve MXNet model and CI consistency across all backends, I'm correcting the mshadow float2half() implementation to perform matching round-to-nearest-even rounding. The first commit introduces only a test that demonstrates the problem, which I expect to fail on the Windows CI runner. Then I'll work up an mshadow PR with the new float2half() routine. Next, I'll change the mshadow commit used by MXNet to point to that mshadow PR, to demonstrate its effectiveness. Once the mshadow PR has been accepted, I'll change the mshadow commit used by MXNet once again so that this PR can be merged.
This PR should only affect models run on a CPU without the f16c library. I intend to make the round-to-nearest-even behavior the new default for these scenarios, bringing them in line with other systems, and I'll provide a simple build flag to restore the legacy truncation behavior. My new float2half() implementation is 50% faster on the CPU despite the additional rounding logic.
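For reference, a minimal Python sketch of the usual bit-twiddling round-to-nearest-even trick for the normal range (an illustration only, not the C++ float2half() merged into mshadow; subnormals, infinities, and NaN are omitted):

import struct

def fp32_to_fp16_bits_rne(f):
    """Round-to-nearest-even float32 -> float16 bit pattern (normal range only)."""
    # Reinterpret the float32 value as its 32-bit pattern.
    u = struct.unpack('<I', struct.pack('<f', f))[0]
    sign = (u >> 16) & 0x8000
    u &= 0x7FFFFFFF
    # Re-bias the exponent from float32 (bias 127) to float16 (bias 15) and keep 10 fraction bits.
    exp = (u >> 23) - 127 + 15
    frac = u & 0x7FFFFF
    half = (exp << 10) | (frac >> 13)
    # Round to nearest, ties to even, based on the 13 discarded fraction bits.
    discarded = frac & 0x1FFF
    if discarded > 0x1000 or (discarded == 0x1000 and (half & 1)):
        half += 1   # a carry propagates naturally into the exponent field
    return sign | half

print(hex(fp32_to_fp16_bits_rne(1.0 + 2.0**-11)))   # 0x3c00: the tie rounds down to even (1.0)

A truncating implementation would skip the rounding step and return sign | half directly, which is exactly the discrepancy the new test exposes.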
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments