How to select half precision (BFLOAT16 vs FLOAT16) for your trained model?

Issue

How do you decide which precision works best for your inference model? Both BF16 and FP16 take two bytes, but they use different numbers of bits for the fraction and the exponent.

The ranges are different, but I am trying to understand why one would choose one over the other.

Thank you

    |--------+------+----------+----------|
    | Format | Bits | Exponent | Fraction |
    |--------+------+----------+----------|
    | FP32   |   32 |        8 |       23 |
    | FP16   |   16 |        5 |       10 |
    | BF16   |   16 |        8 |        7 |
    |--------+------+----------+----------|

Range
bfloat16: ~1.18e-38 … ~3.39e38, with about 2-3 significant decimal digits.
float16:  ~5.96e-8 (smallest subnormal; smallest normal ~6.10e-5) … 65504, with about 3-4 significant decimal digits.
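If you want to check these limits yourself, here is a minimal sketch (assuming PyTorch, which the question does not name) that prints the numbers behind the table above:

    # Print range and precision limits for each floating-point format.
    import torch

    for dtype in (torch.float32, torch.float16, torch.bfloat16):
        info = torch.finfo(dtype)
        print(
            f"{str(dtype):15} max={info.max:.3e} "
            f"smallest normal={info.tiny:.3e} eps={info.eps:.3e}"
        )

Note how float16 and bfloat16 swap the trade-off: float16 has the smaller max but the smaller eps (more precision), bfloat16 has the float32-like max but the larger eps.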

Solution

bfloat16 is generally easier to use, because it works as a drop-in replacement for float32. If your code doesn't create NaN/Inf values or turn a non-zero value into zero with float32, then, roughly speaking, it shouldn't do so with bfloat16 either. So, if your hardware supports it, I'd pick that.
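For intuition, here is a rough illustration (again assuming PyTorch; the values are arbitrary examples) of how the two formats fail differently:

    import torch

    x = torch.tensor([70000.0, 1.002])

    # float16: 70000 exceeds the ~65504 maximum and overflows to inf,
    # but 1.002 is represented closely thanks to the 10-bit fraction.
    print(x.to(torch.float16))

    # bfloat16: 70000 stays finite (float32-like exponent range),
    # but 1.002 rounds to 1.0 because of the 7-bit fraction.
    print(x.to(torch.bfloat16))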

Check out automatic mixed precision (AMP) if you choose float16.
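As a sketch of what that looks like in a training loop (PyTorch assumed; the model, optimizer, and data below are placeholders for illustration only), AMP runs eligible ops in float16 and uses loss scaling to guard against float16 underflow:

    import torch

    # Hypothetical model/optimizer/data -- placeholders only.
    model = torch.nn.Linear(16, 1).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()   # loss scaling for float16 training

    for _ in range(10):
        inputs = torch.randn(32, 16, device="cuda")
        targets = torch.randn(32, 1, device="cuda")

        optimizer.zero_grad()
        with torch.cuda.amp.autocast(dtype=torch.float16):  # ops run in float16 where safe
            loss = torch.nn.functional.mse_loss(model(inputs), targets)

        scaler.scale(loss).backward()   # backprop on the scaled loss
        scaler.step(optimizer)          # unscales grads; skips the step if inf/nan found
        scaler.update()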

Answered By – MWB

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
