+ 1

Python error when attempting to multiply by 1.1

I'm trying to code a simple mathematical problem in Python, but my answers are coming back incorrect: an unexpected extra decimal of 0.0000000001 is being added. Can anyone explain what is causing this and how to avoid it? Please see my code here: https://code.sololearn.com/cJ04DP54WvG3/?ref=app (edit: this error only seems to occur with certain numbers, for example 2, whereas others return the expected outcome)
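
For anyone who can't open the link, here is a minimal sketch of the kind of behaviour being described (an assumed illustration, not the original SoloLearn code):

# Assumed example, not the code from the link: binary floats cannot store
# 1.1 exactly, so some results carry a tiny rounding error while others
# happen to display cleanly.
print(0.1 + 0.2)   # 0.30000000000000004
print(2 * 1.1)     # 2.2  (happens to display exactly)
print(3 * 1.1)     # 3.3000000000000003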

12th Apr 2020, 9:37 PM
Isaac Nixon
4 Answers
+ 2
print(f"{a:.2f}")  # change the '.2f' to the number of decimal places you need, or use: print("{:.2f}".format(a))
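
A quick sketch of how both of those options behave (a is just an assumed example value here, not the variable from the linked code); round() is a third option if you need a number rather than a string:

a = 3 * 1.1                 # 3.3000000000000003
print(f"{a:.2f}")           # 3.30  (f-string formatting)
print("{:.2f}".format(a))   # 3.30  (str.format equivalent)
print(round(a, 2))          # 3.3   (round() returns a float, not a string)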
12th Apr 2020, 10:00 PM
rodwynnejones
+ 2
The gist of it is that floating-point numbers (the ones with a decimal point) are only stored as an approximation inside your PC. It's obvious if you think about it: infinite precision would require infinite memory! An unfortunate side effect is these floating-point maths errors, which is why banks don't use floats for transferring money, for example, though for most use cases they are fine. Sololearn has lessons on the topic and the internet has plenty of resources too; check it out, it's a surprisingly deep topic. (IEEE 754 is the technical name for the floats we use in code.)
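
To make the approximation visible, here is a small sketch; the decimal module is just one common exact-arithmetic alternative and is not something mentioned in the original answer:

from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004 (binary float approximation)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 (exact decimal arithmetic)
print(Decimal(1.1))                     # prints the exact value the float 1.1 actually stores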
12th Apr 2020, 11:21 PM
Schindlabua
+ 1
Thanks, but I am more interested in why I am getting the additional decimal. For example, if I put 2 into my formula it should return 55, not 55.0000000001.
12th Apr 2020, 10:42 PM
Isaac Nixon
+ 1
Thank you, that's helped me a great deal. I was not aware of floating-point numbers.
13th Apr 2020, 12:27 AM
Isaac Nixon