Hacker News

The author assumes uniformly random memory access throughout the article, which is the worst case for a cache. If you're going to get down to the nuts and bolts of your implementation and come down from the high world of mathematics and Big O, you cannot consider only one element, in this case caching. You should also consider your access pattern, which very likely is not uniformly random and therefore probably does not fit the author's analysis. In fact, the only reason caching works at all is that the author's premise is generally wrong.
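The point about access patterns can be sketched with a toy benchmark: sum the same array once in order and once in a uniformly shuffled order. The array size (`N` below) is an assumption you would tune to exceed your cache; in CPython the interpreter overhead blunts the effect, so this is only an illustration of the two patterns, not a rigorous measurement.

```python
import random
import time

N = 1 << 20  # assumed size; pick something larger than your last-level cache
data = list(range(N))

# Sequential pass: cache lines and the hardware prefetcher work in our favor.
t0 = time.perf_counter()
seq_sum = sum(data[i] for i in range(N))
seq_time = time.perf_counter() - t0

# Uniformly random pass over the same elements: near the worst case for a cache.
order = list(range(N))
random.shuffle(order)
t0 = time.perf_counter()
rnd_sum = sum(data[i] for i in order)
rnd_time = time.perf_counter() - t0

# Same work, same result, different access pattern.
assert seq_sum == rnd_sum
print(f"sequential: {seq_time:.3f}s  random: {rnd_time:.3f}s")
```

In a lower-level language (C, Rust) with a large enough array, the random pass is typically several times slower, which is exactly the gap the access pattern determines.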


The author does address this point in part three of the series, where he compares access times to a small area of a larger memory region. Even with strictly sequential access to an array of size K, you first have to find that particular array in your larger memory region of size N, giving a total cost of O(sqrt(N) + K) to iterate over the full array.

I am not sure how costly it is to iterate through your entire memory (assuming a no-op), but I would argue that you eventually hit some bottleneck and end up at O(sqrt(N) + N) as well.
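The two cost estimates above can be written as one toy function. This is only a sketch of the model being discussed, with constants omitted and units hypothetical; `access_cost` is a name introduced here, not anything from the series.

```python
import math

def access_cost(N: int, K: int) -> float:
    """Toy cost model: finding a region inside memory of size N costs ~sqrt(N),
    then K sequential accesses within it cost ~K. Constants are dropped."""
    return math.sqrt(N) + K

N = 2 ** 30
small = access_cost(N, 1024)  # small array: the sqrt(N) term dominates
full = access_cost(N, N)      # sweeping all of memory: the K term dominates
print(small, full)
```

For K = N the sqrt(N) term is negligible, so a full sweep is O(sqrt(N) + N) = O(N) in this model, which matches the intuition that sequential iteration over everything is linear.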



