Why are JavaScript numbers usually precise until an operation is performed, and is it always like this? [duplicate]


This is a quirk of IEEE 754 floating-point numbers. Basically, these values cannot be represented exactly in binary: the stored value of 0.1 is something close to 0.10000000000000000555, and the stored value of 0.2 is close to 0.20000000000000001110. Adding those long decimal numbers obviously doesn't give precisely 0.3; it gives an approximation of 0.3 (this is the big tradeoff of floats).
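
If you want to see those stored values for yourself, you can ask for more digits than the default formatting shows. A minimal sketch, assuming a standard IEEE 754 double; the digit strings in the comments are what toPrecision produces for that representation:

    // Print the doubles that actually back the literals 0.1, 0.2 and 0.3.
    // toPrecision(21) asks for more significant digits than the default
    // formatting shows, which exposes the rounding done at parse time.
    console.log((0.1).toPrecision(21)); // "0.100000000000000005551"
    console.log((0.2).toPrecision(21)); // "0.200000000000000011102"
    console.log((0.3).toPrecision(21)); // "0.299999999999999988898"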

When you write the literal 0.3 in code, you also get an approximation: the stored value is roughly 0.29999999999999998890. The sum of the stored values of 0.1 and 0.2, however, rounds to a different double, roughly 0.30000000000000004441, so comparing the two with === gives false. If you found a way to sum a and b so that the result rounds to the same stored value as the literal 0.3, you would get true.
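
Here is a small sketch of the comparison itself, plus the usual workaround of comparing against a tolerance rather than with strict equality; the nearlyEqual helper is illustrative and not part of the original answer:

    // The sum 0.1 + 0.2 lands on a different double than the literal 0.3,
    // so strict equality is false.
    const sum = 0.1 + 0.2;
    console.log(sum.toPrecision(21));   // "0.300000000000000044409"
    console.log((0.3).toPrecision(21)); // "0.299999999999999988898"
    console.log(sum === 0.3);           // false

    // Common workaround: compare with a small tolerance instead.
    function nearlyEqual(a, b, eps = Number.EPSILON) {
      return Math.abs(a - b) <= eps * Math.max(1, Math.abs(a), Math.abs(b));
    }
    console.log(nearlyEqual(sum, 0.3)); // true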

Dimitar Bogdanov

2 Comments

This is a direct duplicate of a 17-year-old question. Vote to close.

2026-01-22T23:07:47.327Z

Okay, so 0.3 written as a literal gets rounded to a value approximately equal to 0.3, but when you add two values, the combined rounding error is large enough that the sum lands on a different double, slightly greater than 0.3 in this case. That's good to know and fairly intuitive, but my question is then: how do I know whether a number will be represented accurately or not when there is no addition, multiplication, etc. applied to it and it is simply defined as a literal? I could not find an answer to this in the "duplicate" questions.... Ah, I think I found it....
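
For anyone else wondering about that last question: a decimal literal is stored exactly only when its value is a fraction whose denominator is a power of two (0.5, 0.375, 0.015625, ...) and it fits in the 53-bit significand. One rough way to test a specific literal is sketched below; isExactLiteral is a hypothetical helper, not from the thread, and it assumes toFixed prints the exact decimal expansion, which holds for literals whose expansion fits in 100 fractional digits.

    // Hypothetical helper: does a decimal literal (given as a string)
    // survive the round trip into an IEEE 754 double unchanged?
    // A finite double has a finite decimal expansion, so we expand the
    // parsed value and compare it with the original text.
    function stripTrailingZeros(s) {
      // Only trim zeros in the fractional part, e.g. "0.500" -> "0.5".
      return s.includes(".") ? s.replace(/0+$/, "").replace(/\.$/, "") : s;
    }

    function isExactLiteral(literal) {
      const value = Number(literal);
      const expanded = stripTrailingZeros(value.toFixed(100));
      return expanded === stripTrailingZeros(literal);
    }

    console.log(isExactLiteral("0.5"));   // true  (1/2)
    console.log(isExactLiteral("0.375")); // true  (3/8)
    console.log(isExactLiteral("0.1"));   // false (1/10 has no finite binary form)
    console.log(isExactLiteral("0.3"));   // false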

2026-01-22T23:56:20.400Z
