You can make the same arguments against fixed-precision decimal types. My systems represent currencies to 4 decimal places. At that level of precision, rounding and order-of-operations errors could accumulate much faster than with a 64-bit float.
Decimals are still the way to go; you just have to pick a level of precision acceptable for your application.
My management definitely does not want me spending my time chasing errors over fractions of a penny. The only time those errors are discovered is when I compare the output of new code against old code.
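To illustrate the point about picking a precision up front, here is a minimal sketch using Python's `decimal` module (the helper name and the 4-decimal-place quantum are assumptions for illustration, not the commenter's actual code):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Assumed precision: 4 decimal places, as described in the comment.
QUANTUM = Decimal("0.0001")

def to_money(value: str) -> Decimal:
    # Quantize each amount to the chosen precision up front so
    # later arithmetic can't silently carry extra digits.
    return Decimal(value).quantize(QUANTUM, rounding=ROUND_HALF_EVEN)

# Summing a tenth of a cent ten thousand times is exact with Decimal...
total = sum(to_money("0.0010") for _ in range(10_000))
print(total)  # 10.0000

# ...while a 64-bit float accumulates representation error, since
# 0.001 has no exact binary representation.
float_total = sum(0.0010 for _ in range(10_000))
print(float_total)
```

The drift in `float_total` is tiny per operation, but it is exactly the kind of fraction-of-a-penny discrepancy that only shows up when old and new code paths are compared side by side.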