+ 9
[Solved] Why is the output 0.30000000000000004?
On paper and on a calculator, 0.1 + 0.2 = 0.3. Is there a fault in the SoloLearn playground? Some codes show the correct output:
https://code.sololearn.com/cElRD4t2ORQA/?ref=app
https://code.sololearn.com/c0u5hcBVO7A3/?ref=app
https://code.sololearn.com/cestIXPmG14E/?ref=app
https://code.sololearn.com/WQEW8tzkH4XK/?ref=app
https://code.sololearn.com/crLL198dV1n2/?ref=app
6 Answers
+ 10
Floating Point Math
• https://0.30000000000000004.com/
Math.js
• https://mathjs.org/docs/datatypes/numbers.html
Why 0.1 + 0.2 === 0.30000000000000004: Implementing IEEE 754 in JS
• https://javascriptweekly.com/link/77414/83ac73f344
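The Math.js link above covers arbitrary-precision numbers in JavaScript; as a rough stand-in, here is a minimal Python sketch of the same idea using the standard-library decimal module (Python chosen only to match the code later in this thread):

>>> from decimal import Decimal
>>> 0.1 + 0.2  # binary floats carry a tiny rounding error
0.30000000000000004
>>> # Decimal stores base-10 digits exactly, so the sum is exact
>>> Decimal("0.1") + Decimal("0.2")
Decimal('0.3')

Note that Decimal parses the base-10 string directly, so no binary rounding ever happens.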
+ 8
That's because .1 cannot be represented exactly in a binary floating point representation. If you try
>>> .1
0.1
Python responds with 0.1 because it only prints up to a certain precision, but there's already a small round-off error. The same happens with .3, but when you evaluate
>>> .2 + .1
0.30000000000000004
the round-off errors in .2 and .1 accumulate and become visible. Also note:
>>> .2 + .1 == .3
False
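To make that hidden round-off visible, you can ask Python for more digits than it normally prints, and compare floats with a tolerance instead of == (a small sketch using standard format specs and math.isclose):

>>> # twenty decimal places reveal the error that repr hides
>>> f"{0.1:.20f}"
'0.10000000000000000555'
>>> f"{0.2 + 0.1:.20f}"
'0.30000000000000004441'
>>> # compare with a tolerance rather than exact equality
>>> import math
>>> math.isclose(0.2 + 0.1, 0.3)
True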
+ 6
This is not an exclusively computer-related phenomenon. Think about our decimal system: you can easily convert fractions like 1/2, 1/4 or 1/5 into finite decimal numbers (0.5, 0.25 or 0.2). But you always get an inaccuracy if you want to represent fractions like 1/3, 1/6, 1/7 or 1/9. This results from our number system, which has 10 as its base.
Computers work in binary; they use 2 as their base. Hence fractions like 1/2, 1/4, 1/8, 1/16, ... can be represented exactly. Fractions like 1/3, 1/5, 1/6 or 1/10 always carry an inaccuracy, because their binary expansion never terminates and the number of digits after the binary point is limited.
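You can see this directly in Python (a sketch, assuming the IEEE 754 doubles CPython uses): fractions.Fraction exposes the exact binary fraction a float literal is rounded to.

>>> from fractions import Fraction
>>> # the nearest representable double to 0.1 is not exactly 1/10
>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> # 1/4 has a finite binary expansion, so it is stored exactly
>>> Fraction(0.25)
Fraction(1, 4)

The denominator 36028797018963968 is 2**55: 0.1 is stored as the nearest fraction with a power-of-two denominator, while 1/4 fits exactly.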
+ 4
If I use either float or double precision in the format string of a printf in C, I don't get the same issue: printf's %f conversion prints only six decimal places by default, so the round-off error is rounded away before it becomes visible.
https://code.sololearn.com/cxVx1oB459x8/?ref=app
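The same effect can be sketched in Python (used here instead of C to keep one language in this thread): six decimal places, printf's %f default, hide the error, while 17 significant digits expose it.

>>> x = 0.1 + 0.2
>>> # six decimal places round the error away, like printf's %f
>>> f"{x:.6f}"
'0.300000'
>>> # 17 significant digits are enough to show the true stored value
>>> f"{x:.17g}"
'0.30000000000000004'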