
I honestly don't understand the argument made here or in the parent comment.

Why is BCD decimal128 worse at math than a fixed-point integer? Are you saying fixed point is more CPU-efficient? Or are you saying some fixed-point integer math operations are more accurate than dec128?

I've seen this asserted several times, both in the post and in comments, but I've never seen a single concrete example of it being better. Can someone provide an example?



Most CPUs don't have BCD math instructions, so you need multiple instructions, with probably around a 10-100x slowdown in math.


Do you know what BCD is? If you do, it should be pretty obvious why BCD is significantly slower than fixed-point integers.


I know what BCD is, and I know it's slower.

What I was asking is: when they said it was just "better", were they specifically saying "CPU computation is the bottleneck, so BCD is not as good as fixed-point integers"? Which is fine if so; I'd just like that stated clearly. In my line of work, BCD CPU time is NEVER the bottleneck and never will be; by the time the CPU finishes the BCD operation, it is likely still stalled prefetching the next instruction from main memory anyway.

But maybe, for their specific ledger database, it is better. If so, show the benchmark and how it impacted their specific code. Don't expect me to just assume that fewer CPU instructions per math operation directly translates into something more desirable.



