Andy Heckler of Ohio State's Physics Education Research group is visiting us here at K-State, and his main talk today was about stuff that's old news in psychology research (some from 1927, other bits from the early 1970s) but which I had not heard of before...and it's potentially of interest to y'all.


Okay, the basic idea here is that for any outcome O, there are going to be multiple things that could plausibly cause it. Let's call two such "cues" A and B. Furthermore, let A be something that jumps out at you (the word "salient" comes from the Latin for "to jump"), while B is more subtle. For instance, a basketball team has one prima donna star player, but also has a good coaching program that ingrains teamwork. Both A and B could be the cause of victory O, but we call A the "salient" cue.

In cases where both A and B are obviously present (like watching the whole game), A does what Pavlov called "overshadowing": people tend to remember the hotshot player more than they do the good teamwork. Similarly, in work done about 40 years ago by people you've likely never heard of, something called "blocking" can happen if you first see only A resulting in O (like seeing just a highlight segment on the news) and later see both A and B resulting in O. In both overshadowing and blocking, the salient cue A is more likely to be remembered as a cause of the victory. Blocking can even occur if A is not significantly more salient than B (like watching the game and noticing some truly astounding displays of teamwork, in addition to the hotshot's performance).
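For the quantitatively inclined: the classic formal account of both effects is the Rescorla-Wagner learning rule, in which cues presented together share a fixed pool of predictive credit. Here's a toy Python sketch of that rule (my own illustration, not anything from Heckler's talk — the salience numbers are made up) showing how the low-salience cue B gets shortchanged in both scenarios:

```python
# Toy Rescorla-Wagner simulation. V[cue] is the learned association
# strength between a cue and the outcome (a win). On each trial, every
# cue that is present is updated by
#     delta = alpha[cue] * beta * (lam - total_V_of_present_cues)
# where alpha is the cue's salience, beta is a learning rate, and lam
# is the maximum association the outcome supports.

def train(trials, alphas, beta=0.3, lam=1.0):
    """trials: a list of cue sets, e.g. [{'A', 'B'}] * 60"""
    V = {cue: 0.0 for cue in alphas}
    for present in trials:
        error = lam - sum(V[c] for c in present)  # shared prediction error
        for c in present:
            V[c] += alphas[c] * beta * error
    return V

# A (the hotshot) is more salient than B (teamwork) -- made-up values.
alphas = {'A': 0.5, 'B': 0.2}

# Overshadowing: A and B always appear together before a win.
over = train([{'A', 'B'}] * 60, alphas)

# Blocking: first A alone predicts the win, then A and B together.
block = train([{'A'}] * 60 + [{'A', 'B'}] * 60, alphas)

# Control: B alone predicts the win.
alone = train([{'B'}] * 60, alphas)
```

Run it and `alone['B']` comes out near 1, `over['B']` middling, and `block['B']` essentially zero: B is learned fine on its own, learned less alongside a flashier cue, and learned almost not at all once A already "explains" the wins.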

Okay, those are the definitions.

Now, here's where IMO it gets interesting. Overshadowing and blocking don't merely cause us to miss the less flashy "B" cue; they actually train us to ignore less salient things, so that if we see them later in isolation we won't even notice them! To extend my basketball metaphor, if the hotshot were out injured and the team still won, people would be less likely to recognize the role of teamwork than if there had never been a hotshot player at all. Even if the hotshot's actual effect on win-loss records is negligible!

Overshadowing seems to be stronger at this than blocking, in part because you could consider blocking to be a situation where you have really strong overshadowing only part of the time, and weak or no overshadowing the rest of the time. Of course, you can get both effects in play at once for even more oomph.

Now, once you realize that a flashy but ultimately incorrect explanation for something is going to override any attempt to teach the correct explanation, the question becomes how to break this effect. Explicitly telling people, "No, this is the important bit!" does mean that if you ask them what the important bit is, they'll tell you the desired answer...but when actually working exercises they will continue to ignore it. They'll even (as seen in exit interviews) consider the exercises to be stupid because B is obviously important, even though they themselves used A!

There are two other strategies out there, though, that do seem to work. The "counter-example" strategy runs people through examples in which "A" clearly doesn't work (i.e. showing them all sorts of games where a hotshot player on a team with poor teamwork loses a lot), while the "induction" strategy shows examples of how "B" clearly does work (showing them games both with and without hotshot players, but with good teamwork, where the teams win).

The research Heckler's currently working on seems to indicate that the two strategies work better in different situations. Where there's a high difference in salience between the options, counter-example works better: prove that the obvious solution fails, and people will notice the unobvious ones. Where there's a low difference in salience, counter-example doesn't work as well...there are two possible solutions as far as the learner is concerned, and showing that one doesn't always work doesn't mean that the other one has to be the only option. In that case, induction helps reinforce the correct option and show that it's the better choice.

At its core, the difference has to do with how likely someone is to get a wrong answer by holding the incorrect view. Counter-example gives a high error rate when someone strongly holds to a salient incorrect view, but a low error rate when someone's willing to use either possibility. Induction has a moderate error rate regardless of the difference in salience, so when the salience difference is low, it gives more errors than counter-example would in that situation.

So, to summarize...if the wrong explanation looks like it's the only thing someone is willing to entertain, you have to use "negative campaigning" to demolish it. If the wrong explanation is merely one of several possibilities someone thinks is involved, you're better off running a clean campaign and building up the right answer. And in all cases, making mistakes really is the best way to learn...the more often you get feedback of the "that's wrong" variety, the more quickly you shift your attention from the wrong view to the right one. To a point, anyway. Too much error and you can discourage someone.
