This is looking at a correlation among people who were hired by Google, which is a fairly specific population, and one that's actually in part selected by the same variable being studied, which complicates things further.
There are a lot of possible mechanisms by which the correlation might've been produced, if it's valid. One is that winning programming competitions is anti-predictive of job performance (the interpretation this summary takes). But another is that Google puts (or previously put) too much positive weight on winning programming competitions in their hiring vs. other factors. If Google were, for example, more willing to overlook other weaknesses in people who had won programming competitions, or treated them more leniently in interviews, that would be another mechanism for producing a population of hires where those who won programming competitions were worse at their jobs.
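The second mechanism is essentially Berkson's paradox: selecting on a weighted combination of two traits can induce a negative correlation between them among those selected, even when the traits are independent in the full population. A quick simulation sketches this (all numbers and weights here are hypothetical, just to illustrate the effect):

```python
import numpy as np

# Hypothetical setup: two independent skills in the applicant population.
rng = np.random.default_rng(0)
n = 100_000
contest_skill = rng.normal(size=n)  # contest-style ability
job_skill = rng.normal(size=n)      # on-the-job effectiveness

# Suppose the hiring filter overweights contest-style ability
# (weights 2.0 vs 1.0 are made up for illustration).
hired = 2.0 * contest_skill + 1.0 * job_skill > 2.5

pop_corr = np.corrcoef(contest_skill, job_skill)[0, 1]
hired_corr = np.corrcoef(contest_skill[hired], job_skill[hired])[0, 1]
print(f"population correlation: {pop_corr:+.3f}")  # near zero
print(f"correlation among hires: {hired_corr:+.3f}")  # negative
```

Among the hires, contest skill correlates negatively with job skill even though the two are uncorrelated overall, so a within-hires study could see "contest winners do worse" without contest skill being anti-predictive in the general population.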
I see what you're saying. The programming contest winner will do better on the algorithm brainteaser questions, so he's more likely to get hired, even if he lacks the skills that matter once he's on the job.
A 30-minute interview question is more like a programming contest problem than a serious project.
It sounds like his conclusion is measuring a defect in their hiring process more than "programming contest winners tend to make worse employees".