Is floating point math broken?


0.1 + 0.2 == 0.3 -> false 
0.1 + 0.2 -> 0.30000000000000004 

Why does this happen?

Binary floating point math works like this. In most programming languages it is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double. The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as 0.1, which is 1/10) whose denominator is not a power of two cannot be exactly represented.
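As a quick check (a minimal Python sketch, chosen only because its standard library exposes the exact stored value; the question itself is language-agnostic), you can recover the whole-number-times-power-of-two form directly:

    # Every binary64 float is stored exactly as a whole number times a power of two.
    # as_integer_ratio() recovers that exact pair for the double nearest to 0.1.
    num, den = (0.1).as_integer_ratio()
    print(num)           # 3602879701896397
    print(den == 2**55)  # True: 0.1 is actually stored as 3602879701896397 / 2**55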

For 0.1 in the standard binary64 format, the representation can be written exactly as

  • 0.1000000000000000055511151231257827021181583404541015625 in decimal, or
  • 0x1.999999999999ap-4 in C99 hexfloat notation.

In contrast, the rational number 0.1, which is 1/10, can be written exactly as

  • 0.1 in decimal, or
  • 0x1.99999999999999...p-4 in an analogue of C99 hexfloat notation, where the ... represents an unending sequence of 9's.
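Both bullet lists above can be verified directly (again a Python sketch; float.hex() prints the C99 hexfloat form, and decimal.Decimal exposes the exact decimal expansion of the stored double):

    from decimal import Decimal

    # Exact decimal value of the double nearest to 0.1:
    print(Decimal(0.1))
    # -> 0.1000000000000000055511151231257827021181583404541015625

    # The same double in C99 hexfloat notation:
    print((0.1).hex())
    # -> 0x1.999999999999ap-4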

The constants 0.2 and 0.3 in your program will also be approximations to their true values. It happens that the closest double to 0.2 is larger than the rational number 0.2, but that the closest double to 0.3 is smaller than the rational number 0.3. The sum of 0.1 and 0.2 winds up being larger than the rational number 0.3 and hence disagrees with the constant in your code.
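Exact rational arithmetic makes all three of those rounding directions visible (a Python sketch using fractions.Fraction, which converts a double to the exact rational value it stores):

    from fractions import Fraction

    # The stored 0.2 rounds up; the stored 0.3 rounds down.
    print(Fraction(0.2) > Fraction(2, 10))  # True
    print(Fraction(0.3) < Fraction(3, 10))  # True

    # So the exact sum of the stored 0.1 and 0.2 overshoots the rational 3/10 ...
    print(Fraction(0.1) + Fraction(0.2) > Fraction(3, 10))  # True
    # ... and therefore also exceeds the stored double 0.3.
    print(Fraction(0.1) + Fraction(0.2) > Fraction(0.3))    # True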

A comprehensive treatment of floating-point arithmetic issues is What Every Computer Scientist Should Know About Floating-Point Arithmetic. For an easier-to-digest explanation, see floating-point-gui.de.

