Hacker News

The author argues that a random access to memory is not O(1) but rather O(√N), because of physical distance.

The easy, reflexive response is that, for the purposes of algorithm design, the size of RAM, N, is a constant.

On the other hand, at very high scaling factors, as the input size grows the size of RAM must grow with it. In this way N can be treated as a variable, which seems to be what the author has in mind. Different algorithms will behave differently as they are scaled to infinity and beyond.
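As an illustrative sketch (not from the article), here is what treating N as a variable does to the predicted cost of N random accesses into a working set of N words. The model names and cost functions are my own simplification: the classic RAM model charges O(1) per access, while the distance model charges ~√N per access, so the totals diverge as N versus N^(3/2).

```python
import math

def total_cost(n_accesses, mem_size, model):
    """Predicted total cost of n_accesses random accesses into mem_size words."""
    if model == "flat":       # classic RAM model: each access costs O(1)
        per_access = 1.0
    elif model == "sqrt":     # distance model: each access costs ~sqrt(N)
        per_access = math.sqrt(mem_size)
    else:
        raise ValueError(model)
    return n_accesses * per_access

# When the working set grows with the input (N accesses into N words),
# the two models diverge: N versus N^(3/2).
for n in (10**3, 10**6, 10**9):
    print(n, total_cost(n, n, "flat"), total_cost(n, n, "sqrt"))
```

Under the flat model a billion accesses cost ~10^9 units; under the √N model they cost ~3×10^13, which is the kind of gap that changes which algorithm wins at scale.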

I think the author's argument is interesting, but it may be better to develop new models for time-complexity analysis. I believe Bob Harper's students have done good work on this.

In addition to distance there is also the cost of selection, namely the muxes and decoders, which multiplies the access cost by a further factor of log N.
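A rough sketch of where that log N factor comes from (my own simplification, assuming an idealized binary tree decoder built from 2-input gates): selecting one of N words requires log2(N) address bits, and roughly one gate level per address bit.

```python
def decoder_depth(mem_size):
    """Gate levels in an idealized binary tree decoder selecting one of mem_size words."""
    # log2(N) address bits, one level of 2-input gates per bit
    return (mem_size - 1).bit_length()

# Selecting one word out of 2^30 takes ~30 gate levels, so the selection
# logic alone contributes a log N factor to access latency.
print(decoder_depth(2**30))  # → 30
```

Real decoders use wider fan-in and hierarchical banking, which changes the constants but not the log N growth.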


