Why does the value of a float change after it is stored in a variable (in all languages)?
#include <stdio.h>

int main() {
    float a = .1;
    printf("%.17f", a);   // here .1 prints as 0.10000000149011612
    printf("%.17f", .1);  // here .1 prints as 0.10000000000000001
    return 0;
}
1 Answer
A floating-point literal such as the .1 you passed as the second argument of the second printf() call has type `double` by default.

The difference in precision between `float` and `double` is why you see two different outputs: the variable `a` has type `float`, while the literal .1 has type `double`.

You can either declare `a` as `double`, so that `a` and the literal .1 have the same type, or explicitly make the literal .1 a `float` by adding an 'f' suffix to it, as follows ...
printf( "%.17f", .1f ); // <-- notice the 'f' suffix