Watching that, it amazes me how many people don't get that. Not long ago there was an article on HN about this very experiment (from NY Times, IIRC).
Now I can imagine, if you're not a software tester at heart, that you'd maybe try to "win" the first try or two, out of habit. I'm not a professional tester, just a coder, and reading the aforementioned article, I also first tried 3/6/12 and then some arbitrary increasing sequence, before catching myself and realizing that if I want to figure out the rule, I need some negative examples.
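The need for negative examples can be made concrete: several plausible rules all accept the same positive probes, so a guess you expect to be *rejected* is what actually eliminates hypotheses. A minimal sketch (the hidden rule and the candidate rules here are my own illustrative guesses, not the actual puzzle's):

```python
# Sketch: why negative probes matter in a 2-4-6 style rule-discovery task.
# The "hidden" rule and candidate hypotheses are illustrative assumptions.

hidden_rule = lambda a, b, c: a < b < c  # e.g. "any increasing sequence"

candidates = {
    "doubling": lambda a, b, c: b == 2 * a and c == 2 * b,
    "each divides the next": lambda a, b, c: b % a == 0 and c % b == 0,
    "any increasing": lambda a, b, c: a < b < c,
}

# A positive probe like (3, 6, 12) gets "yes" under every candidate,
# so it eliminates nothing:
assert all(rule(3, 6, 12) for rule in candidates.values())

# A probe you expect the narrow hypotheses to reject is far more
# informative: (1, 2, 3) still satisfies the hidden rule, so the
# "yes" answer kills the two narrow candidates at once.
eliminated = [
    name
    for name, rule in candidates.items()
    if rule(1, 2, 3) != hidden_rule(1, 2, 3)
]
print(eliminated)  # the surviving hypothesis is "any increasing"
```

The point of the sketch is just that confirming probes leave the whole candidate set intact, while one well-chosen disconfirming probe can prune most of it.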
Is this something you need to be a programmer to realize?
Or maybe you need to have a "scientific" mind or something?
I'm going to try this on some of my friends, see if I can figure it out.
I expect a big part of it to be some psychological barrier against "failing" or getting "no" as an answer, especially if you're being put on the spot about some math-related puzzle. Many people feel uncomfortable with maths and perhaps fear giving a "wrong" answer and appearing "stupid".
But "fear of maths" isn't what I'd be testing for, it's "willingness/realization to try an experiment with an (expected) negative outcome".
So I'll make sure to formulate the problem appropriately, just like the guy in the video did: the goal is to figure out the rule, not to finish the sequence. I'll try to make them comfortable, but not go as far as saying "It's okay to ask me about a sequence that does not follow my rule", because that would obviously bias the experiment.
Though now I wonder: there are some really interesting studies to be done here (probably many have been done already). What if the puzzle is about some other rule for a sequence of three, one that has nothing to do with numbers?
Say the rule is "objects of increasing size", but the (similarly misleading) example is bicycle / car / train.
I would think the phenomenon is basically equivalent to confirmation bias. A hypothesis is formed immediately upon seeing the question, and then only further information that confirms that hypothesis gets tested.
Testing the negative is something that should be fairly routine in a scientific-testing mindset. For instance, medical experiments testing both placebos (negative) and medicine (positive) and looking for differences.
I think it's an implicit assumption that a big complicated rule will be very picky in what it accepts, meaning hits are rare and will reveal more information than misses. They think all the complications are on the input side, so they're trying to pick interesting inputs instead of interesting outputs.
Come to think of it, I might be making this mistake but for non-boolean functions. I try to always have tests that pass in interesting values, but maybe I should consciously also try to have tests that get back interesting values. For example, when testing a classifier I should make sure all the classifications are possible. Or, when testing some mathematical function implementation, I should have tests with inputs that cause the result to be exactly -1, 0, 1, and 2 (if applicable).
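That output-coverage idea can be sketched directly: instead of only checking interesting inputs, assert that every possible output value actually occurs somewhere in the test suite. The classifier and label set below are hypothetical stand-ins, not any particular library's API:

```python
# Sketch: output-coverage testing for a classifier.
# classify() and ALL_LABELS are illustrative stand-ins.

def classify(n: int) -> str:
    """Toy classifier: the sign of a number."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

ALL_LABELS = {"negative", "zero", "positive"}

def check_output_coverage(inputs):
    """Raise if some classification is never produced by the given inputs."""
    seen = {classify(n) for n in inputs}
    missing = ALL_LABELS - seen
    assert not missing, f"labels never produced: {missing}"

# An input-focused suite can quietly miss a whole output class;
# this set happens to hit all three:
check_output_coverage([-5, 3, 0, 42, -1])
```

The same shape works for the -1/0/1/2 case for a mathematical function: collect the results the suite actually produces and assert the set of "interesting" outputs is a subset of them.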