Changing the precision in Python's decimal module
The default precision for decimal is 28. I have changed it to 17 and 30, but it has not affected my result. Why?

>>> from decimal import *
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
>>> x = 1.1 + 2.2
>>> x
3.3000000000000003
>>> Decimal(x)
Decimal('3.300000000000000266453525910037569701671600341796875')
>>> getcontext().prec = 17
>>> Decimal(x)
Decimal('3.300000000000000266453525910037569701671600341796875')
>>> getcontext().prec = 30
>>> Decimal(x)
Decimal('3.300000000000000266453525910037569701671600341796875')
3 answers
Quoting the docs again, "The significance of a new Decimal is determined solely by the number of digits input. Context precision and rounding only come into play during arithmetic operations."
Try Decimal(1)/Decimal(13) with different precisions, and you'll see the difference.
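A minimal sketch of that experiment (the values shown are what the standard decimal module prints with these settings):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(13)        # arithmetic: rounded to 6 significant digits
Decimal('0.0769231')
>>> getcontext().prec = 30
>>> Decimal(1) / Decimal(13)        # same division, now rounded to 30 significant digits
Decimal('0.0769230769230769230769230769231')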
When you write x = 1.1 + 2.2, x becomes a float, and the binary approximation the computer stores is exactly 3.300000000000000266453525910037569701671600341796875 in decimal. When you then convert it to Decimal, the constructor keeps that entire binary approximation.
From the docs: "If value is a float, the binary floating point value is losslessly converted to its exact decimal equivalent. This conversion can often require 53 or more digits of precision...
"The context precision does not affect how many digits are stored. That is determined exclusively by the number of digits in value. For example, Decimal('3.00000') records all five zeros even if the context precision is only three."
https://docs.python.org/3/library/decimal.html#decimal.getcontext
The way I see it, there's rarely a point in converting a float to a Decimal; if you want exact results, build the Decimal from a string (e.g. Decimal('1.1')) in the first place. See the sketch below.
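To make the construction-vs-arithmetic distinction concrete, here's a small sketch. The unary plus used below is the documented way to round an existing Decimal to the current context precision; the printed values are what the decimal module produces with these settings.

>>> from decimal import Decimal, getcontext
>>> x = 1.1 + 2.2
>>> getcontext().prec = 17
>>> Decimal(x)                       # construction: context precision is ignored
Decimal('3.300000000000000266453525910037569701671600341796875')
>>> +Decimal(x)                      # unary plus is an arithmetic operation, so it rounds to 17 digits
Decimal('3.3000000000000003')
>>> Decimal('1.1') + Decimal('2.2')  # arithmetic on exact string-constructed inputs
Decimal('3.3')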
Kishalaya Saha, thanks! But then what is the use of getcontext().prec = 17 (or any other number)? Does it affect the precision or not?