decimal vs double
What is the difference between a double and a decimal? Both can hold floating point numbers:

double: ±5.0 × 10^-324 to ±1.7 × 10^308
decimal: ±1.0 × 10^-28 to ±7.9228 × 10^28

They say that decimal is the more precise one. But if a double can hold a value as small as 10^-324, shouldn't it be the one that can hold both the biggest and the smallest numbers?
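Printing the limits confirms those numbers; a minimal check using only the built-in double and decimal members:

```csharp
using System;

// Smallest positive and largest representable values of each type.
Console.WriteLine(double.Epsilon);    // 5E-324 (smallest positive double)
Console.WriteLine(double.MaxValue);   // 1.7976931348623157E+308
Console.WriteLine(1e-28m);            // 0.0000000000000000000000000001
Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (~7.9228 x 10^28)
```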
1 Answer
Obviously we can't store, say, 100,000 digits of pi in a double, because a double is only 64 bits wide and there simply isn't enough space.
So no matter how you store floating point numbers, there will always be gaps between them. In school we learn that you can always find another number between any two numbers, but that's not true for floats: there is a precision limit.
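You can see the gap directly; a small sketch, assuming .NET Core 3.0 or later where Math.BitIncrement is available:

```csharp
using System;

double one = 1.0;
double next = Math.BitIncrement(one);   // the very next representable double
Console.WriteLine(next - one);          // 2.220446049250313E-16: the gap at 1.0
Console.WriteLine(1.0 + 1e-17 == 1.0);  // True: 1e-17 falls inside the gap
```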
The decimal type gives you 28 significant digits to play with, and you can put the decimal point anywhere you want among them.
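For example, decimal division fills out all the available digits, wherever the point ends up:

```csharp
using System;

Console.WriteLine(1m / 3m);        // 0.3333333333333333333333333333 (28 digits)
Console.WriteLine(1000000m / 3m);  // same digits, decimal point shifted six places
```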
Doubles are interesting in that the gap between two adjacent doubles grows with the size of the number. Close to 0, doubles are very fine-grained; around ±10^20 the next representable double is already more than 16,000 away, and near ±10^200 the gaps are astronomically larger.
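A sketch of how the gap grows (Gap is a hypothetical helper, again relying on Math.BitIncrement from .NET Core 3.0+):

```csharp
using System;

// Distance from x to the next representable double (one "ulp").
static double Gap(double x) => Math.BitIncrement(x) - x;

Console.WriteLine(Gap(1.0));            // ~2.22E-16
Console.WriteLine(Gap(1e20));           // 16384: whole numbers start slipping through
Console.WriteLine(Gap(1e200));          // ~1.7E+184
Console.WriteLine(1e20 + 1000 == 1e20); // True: 1000 is less than half the gap
```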
Decimals give you the same good precision everywhere; the tradeoff is that the range of numbers you can represent is much smaller.
Doubles accept larger gaps between numbers, so they can cover a far broader range with a smaller memory footprint.
(Decimals need twice as much memory and are more precise, even close to 0 where doubles are at their strongest.)
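Both points are easy to verify; a minimal sketch:

```csharp
using System;

Console.WriteLine(sizeof(double));       // 8 bytes
Console.WriteLine(sizeof(decimal));      // 16 bytes: twice the memory
Console.WriteLine(0.1 + 0.2 == 0.3);     // False: 0.1 has no exact binary form
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True: decimal stores base-10 fractions exactly
```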