Can anyone explain this code?
#include <stdio.h>

int main() {
    int var1 = 4/9;
    printf("%d\n", var1);
    float var2 = 4/9;
    printf("%.2f\n", var2);
    float var3 = 4.0/9.0;
    printf("%.2f\n", var3);
    return 0;
}
2 Answers
+ 3
JAGAN GANIREDDY
Dividing a float (or double) by a float, double, or int produces a floating-point result, for example:
4.0/9.0 = 0.444444  // float by float
4.0/9   = 0.444444  // float by int
4/9.0   = 0.444444  // int by float
But if we divide an int by an int, the fractional part is discarded (the result is truncated), like this:
4/9 = 0, not 0.444444
5/2 = 2, not 2.5
printf("d%",var1);// 0
printf("0.2f%",var2);// 0.0
printf("0.2f%",var3);// 0.44 as we use 0.2% this will print 2 values after the decimal point
+ 1
Mathematically,
4/9 ≈ 0.4444444
In C Language,
4/9 = 0
4/9.0 ≈ 0.444444
4.0/9 ≈ 0.444444
4.0/9.0 ≈ 0.444444
(float) 4/9 ≈ 0.444444
(int) 4.0/9 = 0
(int) 4/9.0 ≈ 0.444444 (the cast applies only to 4, not to the whole quotient)
(int)(4/9.0) = 0
So in 'float var2 = 4/9;' the stored value is 0.00, because 4/9 is integer division and evaluates to 0, but in 'float var3 = 4.0/9.0;' the stored value is about 0.44.
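To see how the cast placement matters (a cast binds only to the operand right after it), a minimal sketch:

#include <stdio.h>

int main(void) {
    printf("%f\n", (float) 4 / 9);   /* 4 converted to float first: 0.444444 */
    printf("%d\n", (int) 4.0 / 9);   /* 4.0 truncated to 4, then 4/9 = 0 */
    printf("%f\n", (int) 4 / 9.0);   /* cast applies only to 4, so 0.444444 */
    printf("%d\n", (int)(4 / 9.0));  /* whole quotient truncated: 0 */
    return 0;
}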