The stated moral of the story (last line of the article): "Brains win again over brawn: a well-designed, mathematically knowledgeable algorithm beats brute force!"
This reminds me of a claim I heard from a numerical analysis colleague: "Algorithms of today running on hardware from the 1950's would beat algorithms of the 1950's running on hardware of today."
It was a thought-provoking comment. It would be less hyperbolic if prefaced with "There exist some problems of practical significance for which..."
The question is, what might be some examples? What are some problems for which the time or space complexity of the best known solution has decreased over the last several decades to the point that the above statement would be true of them?
The Fast Fourier Transform (1965) is one very important algorithm that comes to mind, speeding up the discrete Fourier transform from O(N^2) to O(N log N). Quicksort (published 1961) might be another example, but since sorts in the 1950s usually ran on tape drives, it's hard to compare.
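To make that complexity gap concrete, here is a minimal sketch (the names and structure are mine, not from any particular library) contrasting the direct O(N^2) DFT with a radix-2 Cooley-Tukey FFT; it assumes the input length is a power of two:

```python
import cmath

def naive_dft(x):
    """Direct evaluation of the DFT definition: O(N^2) operations."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT: O(N log N).
    Assumes len(x) is a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])
```

Both return the same values (up to rounding); the difference is the O(N^2) vs O(N log N) operation count, which is exactly the kind of gap hardware has to be trillions of times faster to paper over.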
On the whole, computers of the 1950s were extremely memory-limited, so many modern algorithms aren't entirely relevant. For example, the IBM 7090 (1959) was a large-scale scientific computer and had an address space of 32K words. As for performance, it did 39,500 multiplications per second. A modern supercomputer is trillions of times faster; that makes up for a lot of bad algorithms.
Most modern problems are going to have a hard time fitting in a 1950s computer, regardless of the algorithm. On the whole, I think you'd be better off with 1950s algorithms on a modern computer than vice versa. On the other hand, modern cryptographic algorithms on old hardware would be infinitely better than old algorithms on modern hardware.
I just want to add, because some people might find it interesting, that a version of the fast Fourier transform was already known to Gauss (though not published in his lifetime). More info can be found in the article "Gauss and the history of the fast Fourier transform" by Heideman, Johnson, and Burrus.
Merge sorts were typically used when sorting data on magnetic tapes; polyphase merge was popular. All merge sorts are O(N log N), and they were ideal for tapes because they read and write data sequentially.
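A toy illustration of the idea (plain Python, not actual polyphase merge): cut the input into short sorted runs, then merge the runs while reading each one strictly front to back, which is the sequential-access property that suited tapes so well:

```python
import heapq

def merge_runs(runs):
    """Merge already-sorted runs into one sorted stream.
    Each run is consumed strictly front to back (sequential access)."""
    return list(heapq.merge(*runs))

def tape_style_sort(data, run_length=4):
    """Toy external sort: produce short sorted runs (the 'internal sort'
    phase), then merge them sequentially."""
    runs = [sorted(data[i:i + run_length]) for i in range(0, len(data), run_length)]
    return merge_runs(runs)

# Example: tape_style_sort([9, 1, 7, 3, 8, 2, 6, 5, 4])
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```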
As you point out, sorting arrays and sorting sequential-access files (tapes) call for different algorithms.
The cost of accessing a tape record is not negligible; in fact it is the dominant cost. For arrays in memory, by contrast, the cost of an access is close to zero.
Niklaus Wirth describes this in the classic book "Algorithms + Data Structures = Programs". There is a full chapter comparing the best/average/worst cases for each approach.
Those were "Elegant weapons for a more civilized Age.".
> The question is, what might be some examples? What are some problems for which the time or space complexity of the best known solution has decreased over the last several decades to the point that the above statement would be true of them?
All trigonometric functions.
It is a common source of confusion that none of the modern SIMD instruction sets for x86 contain instructions for sine or cosine. The only instructions for them in the CPU are the FCOS and FSIN ones for the obsolete x87 FP mode. These should never be used, because the software implementations provided by your C library are both faster and more precise.
The reason the CPU instructions are worse than what you can write yourself is that back when they were added, the fastest known way of computing full-precision trigonometric functions in hardware was not fast enough for practical use. So instead, the instructions were implemented as approximations. Later, after some improvements in the known algorithms, these approximations were made somewhat better and faster. However, this caused compatibility issues, because FSIN on a Pentium (or a 486? not sure) does not actually return the same results as it does on a 386. Learning from this, the trigonometric functions in x87 are now forever frozen to their old, slow, imprecise approximations.
However, since then a much more accurate and faster way to compute sine and cosine in software has been discovered, and every halfway decent language and math library now uses it.
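For a sense of what a software implementation looks like, here is a toy sketch of the general recipe (range reduction, then a short odd polynomial). Real math libraries do the reduction in extended precision and use carefully tuned minimax coefficients rather than this plain Taylor polynomial, so treat it as an illustration of the shape of the method, not of any actual libm:

```python
import math

def sin_approx(x: float) -> float:
    """Toy sine: crude range reduction followed by a short Taylor polynomial.
    Accurate to a few parts per million, nowhere near full double precision."""
    # Reduce to [-pi, pi]. Production code does this step in extended
    # precision, which is exactly where the x87 FSIN instruction falls short.
    x = math.remainder(x, 2.0 * math.pi)
    # Fold into [-pi/2, pi/2] using sin(pi - x) = sin(x).
    if x > math.pi / 2:
        x = math.pi - x
    elif x < -math.pi / 2:
        x = -math.pi - x
    # Odd Taylor polynomial x - x^3/6 + x^5/120 - x^7/5040 + x^9/362880,
    # evaluated with Horner's rule in x^2.
    x2 = x * x
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0
               + x2 * (-1.0 / 5040.0 + x2 / 362880.0))))
```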
See e.g. https://www.johndcook.com/blog/2015/12/08/algorithms-vs-moor... — “a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988 […] in 2003 this same model could be solved in roughly 1 minute […] a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms”
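A quick sanity check on those figures (my arithmetic, not from the linked post): 82 years is roughly 43 million minutes, which is about the product of the two quoted factors.

```python
minutes_in_82_years = 82 * 365.25 * 24 * 60   # ~43.1 million minutes
combined_speedup = 1_000 * 43_000             # hardware factor x algorithm factor
print(f"{minutes_in_82_years/1e6:.1f}M vs {combined_speedup/1e6:.1f}M")  # 43.1M vs 43.0M
```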
To me, that "moral of the story" is a bit backward. He most likely achieved his goal in the most efficient way, i.e. shortest possible time, by throwing hardware at the problem. Going looking for a better algorithm and implementing it might have taken longer and would have been riskier, given that he didn't know if such an algorithm existed.