BCD makes stored value -> human-readable trivial, at the cost of complicating math on those values.
So it's useful for applications where you're mostly doing human input/output <-> stored value.
But as soon as you do any non-trivial math on those values, (fixed-point?) integers win, at the cost of a less trivial stored value <-> human-readable conversion.
I'd think most financial applications fall into the category "do math, so integers win over BCD".
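To make the trade-off concrete, here's a minimal sketch (my own illustration, not from any particular ledger system) of the fixed-point-integer approach: money stored as integer cents, with conversion only at the input/output boundary. The helper names are made up for this example:

```python
# Fixed point via integer cents: all math is plain integer arithmetic,
# and conversion only happens at the human-readable boundary.

def parse_dollars(s: str) -> int:
    """Convert a human-readable amount like '19.99' to integer cents.
    (Illustrative only: no sign handling, no validation.)"""
    dollars, _, cents = s.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

def format_cents(cents: int) -> str:
    """Convert integer cents back to a human-readable string."""
    return f"{cents // 100}.{cents % 100:02d}"

# The "do math" part stays exact and uses ordinary integer ops:
price = parse_dollars("19.99")   # 1999 cents
total = price * 3                # 5997 cents
print(format_cents(total))       # 59.97
```

The conversion functions are the "cost" side of the trade: trivial for BCD, a little work here, but everything between parse and format is exact integer math.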
I honestly don't understand the argument here, or the one the parent comment makes.
Why is a BCD decimal128 worse at math than a fixed-point integer? Are you saying fixed point is more CPU efficient? Or that some fixed-point integer math operations are more accurate than dec128?
I've seen this asserted several times, both in the post and in comments, but I've never seen a single concrete example of it being better. Can someone provide an example?
What I was asking for is: when they said it was just "better", are they specifically saying "CPU computation is a bottleneck, thus BCD is not as good as fixed-point integers"? Which is fine if it is, I just would like that stated clearly. In my line of work, BCD CPU time is NEVER the bottleneck, and it never will be; in the time it takes the CPU to compute the BCD operation, it is likely still stalling on prefetching the next instruction from main memory anyway.
But maybe, for their specific ledger database, it is better. If so, show the benchmark, and how it impacted their specific code. But don't expect me to just take "fewer CPU instructions for math operations" to directly translate to "more desirable".
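For what it's worth, the kind of micro-benchmark being asked for is easy to sketch. This uses Python's software `Decimal` as a stand-in for BCD-style arithmetic (an assumption; real hardware BCD or dec128 will behave differently), and the absolute numbers are machine-dependent — the only point is that correctness is identical and only the speed differs:

```python
import timeit
from decimal import Decimal

# Both representations compute the same exact answer:
#   19.99 * 3 + 5.00 = 64.97, or in cents: 1999 * 3 + 500 = 6497
assert 1999 * 3 + 500 == 6497
assert Decimal("19.99") * 3 + Decimal("5.00") == Decimal("64.97")

# Hypothetical micro-benchmark: plain int math vs software decimal math.
int_time = timeit.timeit("a * b + c", globals={"a": 1999, "b": 3, "c": 500})
dec_time = timeit.timeit(
    "a * b + c",
    globals={"a": Decimal("19.99"), "b": 3, "c": Decimal("5.00")},
)
print(f"int: {int_time:.3f}s  decimal: {dec_time:.3f}s")
```

Whether that gap matters is exactly the question: if the arithmetic is never the bottleneck, "fewer instructions" doesn't buy you anything.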
While IEEE 754 does have curiosities, the 1/10 problem the article points out isn't really addressed by anything mentioned in the article (or here). BCDs have exactly the same problem with e.g. 1/3.
What you really want is a rational number (a fractional value: numerator and denominator) of some form.
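Python's standard library makes both halves of this point easy to demonstrate: `Decimal` (decimal representation, same family of trade-offs as BCD) fixes 1/10 but hits the same wall at 1/3, while `Fraction` (a rational) represents both exactly:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal representation handles 1/10 exactly (unlike binary floats)...
assert Decimal("0.1") * 10 == Decimal("1")

# ...but 1/3 gets truncated at the context precision, so the error survives:
third = Decimal(1) / Decimal(3)      # 0.3333...3, finitely many digits
assert third * 3 != Decimal(1)

# A rational carries numerator and denominator, so both are exact:
assert Fraction(1, 10) * 10 == 1
assert Fraction(1, 3) * 3 == 1
```

The cost of rationals, of course, is that numerator and denominator can grow without bound under repeated arithmetic, which is its own trade-off.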